Conformance test: not doing test setup.
I0111 18:46:04.852549 7333 e2e.go:92] Starting e2e run "6fa49f5f-a05c-48cd-8d27-16a4ce1eac7f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578768363 - Will randomize all specs
Will run 41 of 4731 specs
Jan 11 18:46:04.888: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Deleting namespaces
Jan 11 18:46:05.344: INFO: namespace : pod-network-test-5324 api call to delete is complete
STEP: Waiting for namespaces to vanish
I0111 18:46:05.344483 7333 suites.go:70] Waiting for deletion of the following namespaces: [pod-network-test-5324]
Jan 11 18:46:17.435: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 11 18:46:17.705: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 11 18:46:18.093: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 11 18:46:18.093: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready.
Jan 11 18:46:18.093: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 11 18:46:18.193: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed)
Jan 11 18:46:18.193: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 11 18:46:18.193: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-exporter' (0 seconds elapsed)
Jan 11 18:46:18.193: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-problem-detector' (0 seconds elapsed)
Jan 11 18:46:18.193: INFO: e2e test version: v1.16.4
Jan 11 18:46:18.282: INFO: kube-apiserver version: v1.16.4
Jan 11 18:46:18.282: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 18:46:18.374: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 18:46:18.376: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename daemonsets
Jan 11 18:46:18.736: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jan 11 18:46:18.918: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-7379
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 18:46:19.827: INFO: Create a RollingUpdate DaemonSet
Jan 11 18:46:19.917: INFO: Check that daemon pods launch on every node of the cluster
Jan 11 18:46:20.098: INFO: Number of nodes with available pods: 0
Jan 11 18:46:20.098: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod
Jan 11 18:46:21.279: INFO: Number of nodes with available pods: 2
Jan 11 18:46:21.279: INFO: Number of running nodes: 2, number of available pods: 2
Jan 11 18:46:21.279: INFO: Update the DaemonSet to trigger a rollout
Jan 11 18:46:21.459: INFO: Updating DaemonSet daemon-set
Jan 11 18:46:34.820: INFO: Roll back the DaemonSet before rollout is complete
Jan 11 18:46:35.000: INFO: Updating DaemonSet daemon-set
Jan 11 18:46:35.000: INFO: Make sure DaemonSet rollback is complete
Jan 11 18:46:35.090: INFO: Wrong image for pod: daemon-set-5rrwb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 11 18:46:35.090: INFO: Pod daemon-set-5rrwb is not available
Jan 11 18:46:36.272: INFO: Wrong image for pod: daemon-set-5rrwb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 11 18:46:36.272: INFO: Pod daemon-set-5rrwb is not available
Jan 11 18:46:37.272: INFO: Pod daemon-set-9nz8l is not available
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7379, will wait for the garbage collector to delete the pods
Jan 11 18:46:37.824: INFO: Deleting DaemonSet.extensions daemon-set took: 91.337928ms
Jan 11 18:46:38.324: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.298932ms
Jan 11 18:46:48.214: INFO: Number of nodes with available pods: 0
Jan 11 18:46:48.214: INFO: Number of running nodes: 0, number of available pods: 0
Jan 11 18:46:48.305: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7379/daemonsets","resourceVersion":"32973"},"items":null}
Jan 11 18:46:48.395: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7379/pods","resourceVersion":"32973"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 18:46:48.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7379" for this suite.
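For reference, the create, update, and rollback sequence exercised by the DaemonSet spec above can be reproduced outside the framework with a short client-go program. This is a minimal sketch under stated assumptions (a recent client-go API with context-taking calls, the kubeconfig path and image tags taken from the log); it is not the framework's own helper code.

```go
// Sketch: create a RollingUpdate DaemonSet, trigger a rollout with a bad
// image, then roll back by restoring the original image, mirroring the
// "should rollback without unnecessary restarts" flow logged above.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns, ctx := "daemonsets-7379", context.TODO()

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: ns},
		Spec: appsv1.DaemonSetSpec{
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
			Selector:       &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{
					{Name: "app", Image: "docker.io/library/httpd:2.4.38-alpine"},
				}},
			},
		},
	}

	// Create the RollingUpdate DaemonSet.
	ds, err = client.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Trigger a rollout with a bad image (the spec above uses "foo:non-existent").
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	ds, err = client.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}

	// Roll back before the rollout completes by restoring the original image;
	// pods that were already healthy should not be restarted unnecessarily.
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := client.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```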
Jan 11 18:46:55.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 18:46:58.256: INFO: namespace daemonsets-7379 deletion completed in 9.499011992s
•SS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:157
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 18:46:58.256: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename sched-priority
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-priority-7294
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:76
Jan 11 18:46:58.894: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 11 18:47:59.535: INFO: Waiting for terminating namespaces to be deleted...
Jan 11 18:47:59.624: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 11 18:47:59.898: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 11 18:47:59.898: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready.
[It] Pod should avoid nodes that have avoidPod annotation
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:157
Jan 11 18:47:59.898: INFO: ComputeCPUMemFraction for node: ip-10-250-27-25.ec2.internal
Jan 11 18:47:59.992: INFO: Pod for on the node: calico-node-m8r2d, Cpu: 100, Mem: 104857600
Jan 11 18:47:59.992: INFO: Pod for on the node: kube-proxy-rq4kf, Cpu: 20, Mem: 67108864
Jan 11 18:47:59.992: INFO: Pod for on the node: node-exporter-l6q84, Cpu: 5, Mem: 10485760
Jan 11 18:47:59.992: INFO: Pod for on the node: node-problem-detector-9z5sq, Cpu: 20, Mem: 20971520
Jan 11 18:47:59.992: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedCPUResource: 245, cpuAllocatableMil: 1920, cpuFraction: 0.12760416666666666
Jan 11 18:47:59.992: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedMemResource: 308281344, memAllocatableVal: 6577812679, memFraction: 0.04686684754404816
Jan 11 18:47:59.992: INFO: ComputeCPUMemFraction for node: ip-10-250-7-77.ec2.internal
Jan 11 18:48:00.086: INFO: Pod for on the node: addons-kubernetes-dashboard-78954cc66b-69k8m, Cpu: 50, Mem: 52428800
Jan 11 18:48:00.086: INFO: Pod for on the node: addons-nginx-ingress-controller-7c75bb76db-cd9r9, Cpu: 100, Mem: 104857600
Jan 11 18:48:00.086: INFO: Pod for on the node: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d, Cpu: 100, Mem: 209715200
Jan 11 18:48:00.086: INFO: Pod for on the node: blackbox-exporter-54bb5f55cc-452fk, Cpu: 5, Mem: 5242880
Jan 11 18:48:00.086: INFO: Pod for on the node: calico-kube-controllers-79bcd784b6-c46r9, Cpu: 100, Mem: 209715200
Jan 11 18:48:00.086: INFO: Pod for on the node: calico-node-dl8nk, Cpu: 100, Mem: 104857600
Jan 11 18:48:00.086: INFO: Pod for on the node: calico-typha-deploy-9f6b455c4-vdrzx, Cpu: 100, Mem: 209715200
Jan 11 18:48:00.086: INFO: Pod for on the node: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp, Cpu: 10, Mem: 209715200
Jan 11 18:48:00.086: INFO: Pod for on the node: calico-typha-vertical-autoscaler-5769b74b58-r8t6r, Cpu: 100, Mem: 209715200
Jan 11 18:48:00.086: INFO: Pod for on the node: coredns-59c969ffb8-57m7v, Cpu: 50, Mem: 15728640
Jan 11 18:48:00.086: INFO: Pod for on the node: coredns-59c969ffb8-fqq79, Cpu: 50, Mem: 15728640
Jan 11 18:48:00.086: INFO: Pod for on the node: kube-proxy-nn5px, Cpu: 20, Mem: 67108864
Jan 11 18:48:00.086: INFO: Pod for on the node: metrics-server-7c797fd994-4x7v9, Cpu: 20, Mem: 104857600
Jan 11 18:48:00.086: INFO: Pod for on the node: node-exporter-gp57h, Cpu: 5, Mem: 10485760
Jan 11 18:48:00.086: INFO: Pod for on the node: node-problem-detector-jx2p4, Cpu: 20, Mem: 20971520
Jan 11 18:48:00.086: INFO: Pod for on the node: vpn-shoot-5d76665b65-6rkww, Cpu: 100, Mem: 104857600
Jan 11 18:48:00.086: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedCPUResource: 630, cpuAllocatableMil: 1920, cpuFraction: 0.328125
Jan 11 18:48:00.086: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedMemResource: 921698304, memAllocatableVal: 6577812679, memFraction: 0.14012230949393992
Jan 11 18:48:00.179: INFO: Waiting for running...
Jan 11 18:48:05.370: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jan 11 18:48:10.471: INFO: ComputeCPUMemFraction for node: ip-10-250-27-25.ec2.internal
Jan 11 18:48:10.564: INFO: Pod for on the node: calico-node-m8r2d, Cpu: 100, Mem: 104857600
Jan 11 18:48:10.564: INFO: Pod for on the node: kube-proxy-rq4kf, Cpu: 20, Mem: 67108864
Jan 11 18:48:10.564: INFO: Pod for on the node: node-exporter-l6q84, Cpu: 5, Mem: 10485760
Jan 11 18:48:10.564: INFO: Pod for on the node: node-problem-detector-9z5sq, Cpu: 20, Mem: 20971520
Jan 11 18:48:10.564: INFO: Pod for on the node: 3c645ec9-142a-404f-a726-ab1db28242dd-0, Cpu: 715, Mem: 2980624995
Jan 11 18:48:10.564: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedCPUResource: 960, cpuAllocatableMil: 1920, cpuFraction: 0.5
Jan 11 18:48:10.564: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedMemResource: 3288906339, memAllocatableVal: 6577812679, memFraction: 0.4999999999239869
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jan 11 18:48:10.564: INFO: ComputeCPUMemFraction for node: ip-10-250-7-77.ec2.internal
Jan 11 18:48:10.658: INFO: Pod for on the node: addons-kubernetes-dashboard-78954cc66b-69k8m, Cpu: 50, Mem: 52428800
Jan 11 18:48:10.658: INFO: Pod for on the node: addons-nginx-ingress-controller-7c75bb76db-cd9r9, Cpu: 100, Mem: 104857600
Jan 11 18:48:10.658: INFO: Pod for on the node: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d, Cpu: 100, Mem: 209715200
Jan 11 18:48:10.658: INFO: Pod for on the node: blackbox-exporter-54bb5f55cc-452fk, Cpu: 5, Mem: 5242880
Jan 11 18:48:10.658: INFO: Pod for on the node: calico-kube-controllers-79bcd784b6-c46r9, Cpu: 100, Mem: 209715200
Jan 11 18:48:10.658: INFO: Pod for on the node: calico-node-dl8nk, Cpu: 100, Mem: 104857600
Jan 11 18:48:10.658: INFO: Pod for on the node: calico-typha-deploy-9f6b455c4-vdrzx, Cpu: 100, Mem: 209715200
Jan 11 18:48:10.658: INFO: Pod for on the node: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp, Cpu: 10, Mem: 209715200
Jan 11 18:48:10.658: INFO: Pod for on the node: calico-typha-vertical-autoscaler-5769b74b58-r8t6r, Cpu: 100, Mem: 209715200
Jan 11 18:48:10.658: INFO: Pod for on the node: coredns-59c969ffb8-57m7v, Cpu: 50, Mem: 15728640
Jan 11 18:48:10.658: INFO: Pod for on the node: coredns-59c969ffb8-fqq79, Cpu: 50, Mem: 15728640
Jan 11 18:48:10.658: INFO: Pod for on the node: kube-proxy-nn5px, Cpu: 20, Mem: 67108864
Jan 11 18:48:10.658: INFO: Pod for on the node: metrics-server-7c797fd994-4x7v9, Cpu: 20, Mem: 104857600
Jan 11 18:48:10.658: INFO: Pod for on the node: node-exporter-gp57h, Cpu: 5, Mem: 10485760
Jan 11 18:48:10.658: INFO: Pod for on the node: node-problem-detector-jx2p4, Cpu: 20, Mem: 20971520
Jan 11 18:48:10.658: INFO: Pod for on the node: vpn-shoot-5d76665b65-6rkww, Cpu: 100, Mem: 104857600
Jan 11 18:48:10.658: INFO: Pod for on the node: d327ec3d-e183-4efb-8496-56ab284e9dc6-0, Cpu: 330, Mem: 2367208035
Jan 11 18:48:10.658: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedCPUResource: 960, cpuAllocatableMil: 1920, cpuFraction: 0.5
Jan 11 18:48:10.658: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedMemResource: 3288906339, memAllocatableVal: 6577812679, memFraction: 0.4999999999239869
STEP: Create a RC, with 0 replicas
STEP: Trying to apply avoidPod annotations on the first node.
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1.
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7294 to 1
STEP: Verify the pods should not scheduled to the node: ip-10-250-27-25.ec2.internal
STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7294, will wait for the garbage collector to delete the pods
Jan 11 18:48:21.516: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 91.275463ms
Jan 11 18:48:21.617: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.292085ms
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 18:48:28.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-7294" for this suite.
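The cpuFraction and memFraction values above are simply requested-over-allocatable ratios, and the two filler pods (715m and 330m CPU) top each node up to exactly half of its allocatable CPU before the avoidPod annotation is applied. A minimal sketch of that arithmetic, using only values taken from the log:

```go
// Sketch of the fraction arithmetic logged above:
// fraction = requested resource / allocatable resource on the node.
package main

import "fmt"

func fraction(requested, allocatable int64) float64 {
	return float64(requested) / float64(allocatable)
}

func main() {
	// CPU in millicores; both nodes report cpuAllocatableMil: 1920.
	fmt.Println(fraction(245, 1920))     // ip-10-250-27-25 before balancing: 0.12760416666666666
	fmt.Println(fraction(630, 1920))     // ip-10-250-7-77 before balancing: 0.328125
	fmt.Println(fraction(245+715, 1920)) // after the 715m filler pod: 0.5
	fmt.Println(fraction(630+330, 1920)) // after the 330m filler pod: 0.5

	// Memory in bytes; memAllocatableVal is 6577812679, and the filler pods
	// bring both nodes to 3288906339 requested bytes, i.e. just under 0.5.
	fmt.Println(fraction(308281344+2980624995, 6577812679))
	fmt.Println(fraction(921698304+2367208035, 6577812679))
}
```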
Jan 11 18:48:48.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 18:48:51.701: INFO: namespace sched-priority-7294 deletion completed in 23.493451996s
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:73
•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:411
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 18:48:51.702: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-7162
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Jan 11 18:48:52.350: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 11 18:48:52.620: INFO: Waiting for terminating namespaces to be deleted...
Jan 11 18:48:52.710: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test
Jan 11 18:48:52.906: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:52.906: INFO: Container kube-proxy ready: true, restart count 0
Jan 11 18:48:52.906: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:52.906: INFO: Container node-problem-detector ready: true, restart count 0
Jan 11 18:48:52.906: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:52.906: INFO: Container calico-node ready: true, restart count 0
Jan 11 18:48:52.906: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:52.906: INFO: Container node-exporter ready: true, restart count 0
Jan 11 18:48:52.906: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test
Jan 11 18:48:53.014: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container blackbox-exporter ready: true, restart count 0
Jan 11 18:48:53.014: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container node-problem-detector ready: true, restart count 0
Jan 11 18:48:53.014: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0
Jan 11 18:48:53.014: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container autoscaler ready: true, restart count 0
Jan 11 18:48:53.014: INFO: node-exporter-gp57h from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container node-exporter ready: true, restart count 0
Jan 11 18:48:53.014: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container calico-kube-controllers ready: true, restart count 0
Jan 11 18:48:53.014: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container metrics-server ready: true, restart count 0
Jan 11 18:48:53.014: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container kubernetes-dashboard ready: true, restart count 0
Jan 11 18:48:53.014: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container autoscaler ready: true, restart count 5
Jan 11 18:48:53.014: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container coredns ready: true, restart count 0
Jan 11 18:48:53.014: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container nginx-ingress-controller ready: true, restart count 0
Jan 11 18:48:53.014: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container coredns ready: true, restart count 0
Jan 11 18:48:53.014: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container vpn-shoot ready: true, restart count 0
Jan 11 18:48:53.014: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container calico-typha ready: true, restart count 0
Jan 11 18:48:53.014: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container calico-node ready: true, restart count 0
Jan 11 18:48:53.014: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded)
Jan 11 18:48:53.014: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeAffinity is respected if not matching
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:411
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e8e99dcbee23f4], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
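The FailedScheduling event above is the expected result of the step "Trying to schedule Pod with nonempty NodeSelector": a pod whose nodeSelector names a label no node carries cannot be placed on either node. A standalone sketch of that condition under assumptions (recent client-go, the kubeconfig path from the log, a made-up label and the default namespace; the test itself generates a random non-matching label in its own namespace):

```go
// Sketch: create a pod with a nodeSelector that matches no node, which should
// produce a FailedScheduling event like the one logged above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node in the cluster carries this (illustrative) label.
			NodeSelector: map[string]string{"nonexistent-label": "true"},
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "- expect a FailedScheduling event, not a running pod")
}
```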
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 18:48:54.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7162" for this suite.
Jan 11 18:49:00.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 18:49:04.054: INFO: namespace sched-pred-7162 deletion completed in 9.494596707s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
•SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:123
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 18:49:04.054: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-1910
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Jan 11 18:49:04.694: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 11 18:49:04.966: INFO: Waiting for terminating namespaces to be deleted...
Jan 11 18:49:05.055: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test Jan 11 18:49:05.155: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.155: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 18:49:05.155: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.155: INFO: Container calico-node ready: true, restart count 0 Jan 11 18:49:05.155: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.155: INFO: Container node-exporter ready: true, restart count 0 Jan 11 18:49:05.155: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.155: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 18:49:05.155: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test Jan 11 18:49:05.264: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container calico-typha ready: true, restart count 0 Jan 11 18:49:05.264: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container calico-node ready: true, restart count 0 Jan 11 18:49:05.264: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 18:49:05.264: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 18:49:05.264: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 18:49:05.264: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 18:49:05.264: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container autoscaler ready: true, restart count 0 Jan 11 18:49:05.264: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container metrics-server ready: true, restart count 0 Jan 11 18:49:05.264: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 18:49:05.264: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container autoscaler ready: true, restart count 5 Jan 11 18:49:05.264: INFO: node-exporter-gp57h from kube-system started at 
2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container node-exporter ready: true, restart count 0 Jan 11 18:49:05.264: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 18:49:05.264: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container coredns ready: true, restart count 0 Jan 11 18:49:05.264: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.264: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 18:49:05.264: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.265: INFO: Container coredns ready: true, restart count 0 Jan 11 18:49:05.265: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 18:49:05.265: INFO: Container vpn-shoot ready: true, restart count 0 [It] validates MaxPods limit number of pods that are allowed to run [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:123 Jan 11 18:49:05.265: INFO: Node: {{ } {ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 33318 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []} {100.64.1.0/24 [100.64.1.0/24] aws:///us-east-1c/i-0a8c404292a3c92e9 false [] nil } {map[attachable-volumes-aws-ebs:{{25 0} {} 25 DecimalSI} cpu:{{2 0} {} 2 DecimalSI} ephemeral-storage:{{28730179584 0} {} BinarySI} hugepages-1Gi:{{0 0} {} 0 DecimalSI} hugepages-2Mi:{{0 0} {} 0 DecimalSI} memory:{{8054267904 0} {} BinarySI} pods:{{110 0} {} 110 DecimalSI}] map[attachable-volumes-aws-ebs:{{25 0} {} 25 DecimalSI} cpu:{{1920 -3} {} 1920m DecimalSI} ephemeral-storage:{{27293670584 0} {} 27293670584 DecimalSI} hugepages-1Gi:{{0 0} {} 0 DecimalSI} hugepages-2Mi:{{0 0} {} 0 DecimalSI} memory:{{6577812679 0} {} 6577812679 DecimalSI} pods:{{110 0} {} 110 DecimalSI}] [{KernelDeadlock False 2020-01-11 18:48:47 +0000 UTC 2020-01-11 15:56:58 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-01-11 18:48:47 +0000 UTC 2020-01-11 15:56:58 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {FrequentUnregisterNetDevice False 2020-01-11 18:48:47 +0000 UTC 2020-01-11 15:56:58 +0000 UTC NoFrequentUnregisterNetDevice node is functioning properly} {FrequentKubeletRestart False 2020-01-11 18:48:47 +0000 UTC 2020-01-11 15:56:58 +0000 UTC NoFrequentKubeletRestart kubelet is functioning 
properly} {FrequentDockerRestart False 2020-01-11 18:48:47 +0000 UTC 2020-01-11 15:56:58 +0000 UTC NoFrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-01-11 18:48:47 +0000 UTC 2020-01-11 15:56:58 +0000 UTC NoFrequentContainerdRestart containerd is functioning properly} {CorruptDockerOverlay2 False 2020-01-11 18:48:47 +0000 UTC 2020-01-11 15:56:58 +0000 UTC NoCorruptDockerOverlay2 docker overlay2 is functioning properly} {NetworkUnavailable False 2020-01-11 15:56:18 +0000 UTC 2020-01-11 15:56:18 +0000 UTC CalicoIsUp Calico is running on this node} {MemoryPressure False 2020-01-11 18:49:03 +0000 UTC 2020-01-11 15:56:03 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-11 18:49:03 +0000 UTC 2020-01-11 15:56:03 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-11 18:49:03 +0000 UTC 2020-01-11 15:56:03 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-11 18:49:03 +0000 UTC 2020-01-11 15:56:13 +0000 UTC KubeletReady kubelet is posting ready status}] [{InternalIP 10.250.27.25} {Hostname ip-10-250-27-25.ec2.internal} {InternalDNS ip-10-250-27-25.ec2.internal}] {{10250}} {ec280dba3c1837e27848a3dec8c080a9 ec280dba-3c18-37e2-7848-a3dec8c080a9 89e42b89-b944-47ea-8bf6-5f2fe6d80c97 4.19.86-coreos Container Linux by CoreOS 2303.3.0 (Rhyolite) docker://18.6.3 v1.16.4 v1.16.4 linux amd64} [{[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4] 601224435} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1] 185406766} {[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1] 153790666} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1] 96768084} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2] 49771411} 
{[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1] 22933477} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2] 9371181} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}] [] [] nil}} Jan 11 18:49:05.265: INFO: Node: {{ } {ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 33325 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []} {100.64.0.0/24 [100.64.0.0/24] aws:///us-east-1c/i-0551dba45aad7abfa false [] nil } {map[attachable-volumes-aws-ebs:{{25 0} {} 25 DecimalSI} cpu:{{2 0} {} 2 DecimalSI} ephemeral-storage:{{28730179584 0} {} BinarySI} hugepages-1Gi:{{0 0} {} 0 DecimalSI} hugepages-2Mi:{{0 0} {} 0 DecimalSI} memory:{{8054267904 0} {} BinarySI} pods:{{110 0} {} 110 DecimalSI}] map[attachable-volumes-aws-ebs:{{25 0} {} 25 
DecimalSI} cpu:{{1920 -3} {} 1920m DecimalSI} ephemeral-storage:{{27293670584 0} {} 27293670584 DecimalSI} hugepages-1Gi:{{0 0} {} 0 DecimalSI} hugepages-2Mi:{{0 0} {} 0 DecimalSI} memory:{{6577812679 0} {} 6577812679 DecimalSI} pods:{{110 0} {} 110 DecimalSI}] [{ReadonlyFilesystem False 2020-01-11 18:48:32 +0000 UTC 2020-01-11 15:56:28 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {CorruptDockerOverlay2 False 2020-01-11 18:48:32 +0000 UTC 2020-01-11 15:56:28 +0000 UTC NoCorruptDockerOverlay2 docker overlay2 is functioning properly} {FrequentUnregisterNetDevice False 2020-01-11 18:48:32 +0000 UTC 2020-01-11 15:56:28 +0000 UTC NoFrequentUnregisterNetDevice node is functioning properly} {FrequentKubeletRestart False 2020-01-11 18:48:32 +0000 UTC 2020-01-11 15:56:28 +0000 UTC NoFrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-01-11 18:48:32 +0000 UTC 2020-01-11 15:56:28 +0000 UTC NoFrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-01-11 18:48:32 +0000 UTC 2020-01-11 15:56:28 +0000 UTC NoFrequentContainerdRestart containerd is functioning properly} {KernelDeadlock False 2020-01-11 18:48:32 +0000 UTC 2020-01-11 15:56:28 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {NetworkUnavailable False 2020-01-11 15:56:16 +0000 UTC 2020-01-11 15:56:16 +0000 UTC CalicoIsUp Calico is running on this node} {MemoryPressure False 2020-01-11 18:49:04 +0000 UTC 2020-01-11 15:55:58 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-11 18:49:04 +0000 UTC 2020-01-11 15:55:58 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-11 18:49:04 +0000 UTC 2020-01-11 15:55:58 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-11 18:49:04 +0000 UTC 2020-01-11 15:56:08 +0000 UTC KubeletReady kubelet is posting ready status}] [{InternalIP 10.250.7.77} {Hostname ip-10-250-7-77.ec2.internal} {InternalDNS ip-10-250-7-77.ec2.internal}] {{10250}} {ec223a25fa514279256b8b36a522519a ec223a25-fa51-4279-256b-8b36a522519a 652118c2-7bd4-4ebf-b248-be5c7a65a3aa 4.19.86-coreos Container Linux by CoreOS 2303.3.0 (Rhyolite) docker://18.6.3 v1.16.4 v1.16.4 linux amd64} [{[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4] 601224435} {[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0] 551728251} {[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1] 185406766} {[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1] 153790666} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 
eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1] 96768084} {[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0] 69546830} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2] 49771411} {[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2] 46809393} {[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3] 44255363} {[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1] 40616150} {[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1] 40067731} {[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3] 39933796} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1] 22933477} {[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0] 17584252} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0] 13732716} {[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2] 9371181} {[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}] [] [] nil}} STEP: Starting additional 200 Pods to fully saturate the cluster max pods and trying to start another one Jan 11 18:49:23.457: INFO: Waiting for running... 
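The "additional 200 Pods" figure follows from the node status dumps above: each node advertises an allocatable pod capacity of 110, and 20 kube-system pods are already running across the two nodes, so 2 x 110 - 20 = 200 filler pods saturate the cluster and one further pod should stay Pending. A tiny sketch of that arithmetic (assuming, as the numbers suggest, that every currently scheduled pod counts against the per-node limit):

```go
// Where "Starting additional 200 Pods" comes from, using values from the log.
package main

import "fmt"

func main() {
	const (
		nodes            = 2
		allocatablePods  = 110 // "pods" entry in each node's allocatable map above
		alreadyScheduled = 20  // kube-system pods reported running and ready
	)
	fmt.Println(nodes*allocatablePods - alreadyScheduled) // 200; the next pod should remain Pending
}
```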
STEP: Considering event: Type = [Normal], Name = [maxp-0.15e8e9a09bb52dec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-0 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-0.15e8e9a0e219cd52], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-0.15e8e9a0ecb3f623], Reason = [Created], Message = [Created container maxp-0] STEP: Considering event: Type = [Normal], Name = [maxp-0.15e8e9a111974a95], Reason = [Started], Message = [Started container maxp-0] STEP: Considering event: Type = [Normal], Name = [maxp-1.15e8e9a0a113cab6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-1 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-1.15e8e9a0efff2dfc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-1.15e8e9a0f85f931a], Reason = [Created], Message = [Created container maxp-1] STEP: Considering event: Type = [Normal], Name = [maxp-1.15e8e9a134c941fe], Reason = [Started], Message = [Started container maxp-1] STEP: Considering event: Type = [Normal], Name = [maxp-10.15e8e9a0d1b557c4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-10 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-10.15e8e9a1105d0f05], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-10.15e8e9a1144d24e3], Reason = [Created], Message = [Created container maxp-10] STEP: Considering event: Type = [Normal], Name = [maxp-10.15e8e9a12f67e410], Reason = [Started], Message = [Started container maxp-10] STEP: Considering event: Type = [Normal], Name = [maxp-100.15e8e9a2b782d69d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-100 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-100.15e8e9a77843c056], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-100.15e8e9a8388ec5c4], Reason = [Created], Message = [Created container maxp-100] STEP: Considering event: Type = [Normal], Name = [maxp-100.15e8e9aa47b06631], Reason = [Started], Message = [Started container maxp-100] STEP: Considering event: Type = [Normal], Name = [maxp-101.15e8e9a2bce72cb8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-101 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-101.15e8e9a6092b207a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-101.15e8e9a692ceecf3], Reason = [Created], Message = [Created container maxp-101] STEP: Considering event: Type = [Normal], Name = [maxp-101.15e8e9a9cc184c53], Reason = [Started], Message = [Started container maxp-101] STEP: Considering event: Type = [Normal], Name = [maxp-102.15e8e9a2c24d0ba3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-102 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-102.15e8e9a5d49e3bc0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: 
Type = [Normal], Name = [maxp-102.15e8e9a6936302f5], Reason = [Created], Message = [Created container maxp-102] STEP: Considering event: Type = [Normal], Name = [maxp-102.15e8e9a9e5d19dc1], Reason = [Started], Message = [Started container maxp-102] STEP: Considering event: Type = [Normal], Name = [maxp-103.15e8e9a2c7a8887f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-103 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-103.15e8e9a606834722], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-103.15e8e9a6caf0857c], Reason = [Created], Message = [Created container maxp-103] STEP: Considering event: Type = [Normal], Name = [maxp-103.15e8e9aa03fa5cc1], Reason = [Started], Message = [Started container maxp-103] STEP: Considering event: Type = [Normal], Name = [maxp-104.15e8e9a2cd14f293], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-104 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-104.15e8e9a62e37cf0b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-104.15e8e9a68ef2fc6b], Reason = [Created], Message = [Created container maxp-104] STEP: Considering event: Type = [Normal], Name = [maxp-104.15e8e9a9e74ab936], Reason = [Started], Message = [Started container maxp-104] STEP: Considering event: Type = [Normal], Name = [maxp-105.15e8e9a2d26ac094], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-105 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-105.15e8e9a5c8e7c0b2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-105.15e8e9a670fff171], Reason = [Created], Message = [Created container maxp-105] STEP: Considering event: Type = [Normal], Name = [maxp-105.15e8e9a9e75eef5f], Reason = [Started], Message = [Started container maxp-105] STEP: Considering event: Type = [Normal], Name = [maxp-106.15e8e9a2d7d5aa45], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-106 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-106.15e8e9a6e65b4fb6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-106.15e8e9a70b8ca642], Reason = [Created], Message = [Created container maxp-106] STEP: Considering event: Type = [Normal], Name = [maxp-106.15e8e9aa1e661c0f], Reason = [Started], Message = [Started container maxp-106] STEP: Considering event: Type = [Normal], Name = [maxp-107.15e8e9a2dd2f1171], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-107 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-107.15e8e9a68dea72dc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-107.15e8e9a6d3fcc930], Reason = [Created], Message = [Created container maxp-107] STEP: Considering event: Type = [Normal], Name = [maxp-107.15e8e9aa1b89ca31], Reason = [Started], Message = [Started container maxp-107] STEP: Considering event: Type = [Normal], Name = [maxp-108.15e8e9a2e299c370], Reason = [Scheduled], Message = 
[Successfully assigned sched-pred-1910/maxp-108 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-108.15e8e9a80e86cd39], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-108.15e8e9a8bcf95774], Reason = [Created], Message = [Created container maxp-108] STEP: Considering event: Type = [Normal], Name = [maxp-108.15e8e9aadf555c2a], Reason = [Started], Message = [Started container maxp-108] STEP: Considering event: Type = [Normal], Name = [maxp-109.15e8e9a2e7f577e0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-109 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-109.15e8e9a6075110c8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-109.15e8e9a68feeca5b], Reason = [Created], Message = [Created container maxp-109] STEP: Considering event: Type = [Normal], Name = [maxp-109.15e8e9a9c7031a93], Reason = [Started], Message = [Started container maxp-109] STEP: Considering event: Type = [Normal], Name = [maxp-11.15e8e9a0d71b5c2f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-11 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-11.15e8e9a12172e7cf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-11.15e8e9a1340c2f3f], Reason = [Created], Message = [Created container maxp-11] STEP: Considering event: Type = [Normal], Name = [maxp-11.15e8e9a15e0417ed], Reason = [Started], Message = [Started container maxp-11] STEP: Considering event: Type = [Normal], Name = [maxp-110.15e8e9a2ed67f7cb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-110 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-110.15e8e9a729fab939], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-110.15e8e9a7edce5a56], Reason = [Created], Message = [Created container maxp-110] STEP: Considering event: Type = [Normal], Name = [maxp-110.15e8e9aa896d9bb1], Reason = [Started], Message = [Started container maxp-110] STEP: Considering event: Type = [Normal], Name = [maxp-111.15e8e9a2f2c65f67], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-111 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-111.15e8e9a68d52ed69], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-111.15e8e9a6d3fd95e0], Reason = [Created], Message = [Created container maxp-111] STEP: Considering event: Type = [Normal], Name = [maxp-111.15e8e9a9fe35a463], Reason = [Started], Message = [Started container maxp-111] STEP: Considering event: Type = [Normal], Name = [maxp-112.15e8e9a2f82c1cf8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-112 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-112.15e8e9a73fd6418c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-112.15e8e9a817d1b29c], Reason = [Created], Message = 
[Created container maxp-112] STEP: Considering event: Type = [Normal], Name = [maxp-112.15e8e9aa9ede66b5], Reason = [Started], Message = [Started container maxp-112] STEP: Considering event: Type = [Normal], Name = [maxp-113.15e8e9a2fd8bd80a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-113 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-113.15e8e9a815d3bab3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-113.15e8e9a8c175b7de], Reason = [Created], Message = [Created container maxp-113] STEP: Considering event: Type = [Normal], Name = [maxp-113.15e8e9ab26aabfb2], Reason = [Started], Message = [Started container maxp-113] STEP: Considering event: Type = [Normal], Name = [maxp-114.15e8e9a302f51beb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-114 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-114.15e8e9a7f430d6cd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-114.15e8e9a8b713ac67], Reason = [Created], Message = [Created container maxp-114] STEP: Considering event: Type = [Normal], Name = [maxp-114.15e8e9ab1f474c8a], Reason = [Started], Message = [Started container maxp-114] STEP: Considering event: Type = [Normal], Name = [maxp-115.15e8e9a3084a2afe], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-115 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-115.15e8e9a70e79fc57], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-115.15e8e9a7cfa6fc54], Reason = [Created], Message = [Created container maxp-115] STEP: Considering event: Type = [Normal], Name = [maxp-115.15e8e9aa54bc8e6d], Reason = [Started], Message = [Started container maxp-115] STEP: Considering event: Type = [Normal], Name = [maxp-116.15e8e9a30da96fec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-116 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-116.15e8e9a8877923d6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-116.15e8e9a90ea60405], Reason = [Created], Message = [Created container maxp-116] STEP: Considering event: Type = [Normal], Name = [maxp-116.15e8e9ab26cff00d], Reason = [Started], Message = [Started container maxp-116] STEP: Considering event: Type = [Normal], Name = [maxp-117.15e8e9a31316938e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-117 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-117.15e8e9a69ac256e9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-117.15e8e9a7016e13e5], Reason = [Created], Message = [Created container maxp-117] STEP: Considering event: Type = [Normal], Name = [maxp-117.15e8e9aa43f54bbb], Reason = [Started], Message = [Started container maxp-117] STEP: Considering event: Type = [Normal], Name = [maxp-118.15e8e9a31880a5d5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-118 to 
ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-118.15e8e9a7bdadfb68], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-118.15e8e9a8913f361c], Reason = [Created], Message = [Created container maxp-118] STEP: Considering event: Type = [Normal], Name = [maxp-118.15e8e9aadf632b43], Reason = [Started], Message = [Started container maxp-118] STEP: Considering event: Type = [Normal], Name = [maxp-119.15e8e9a31de3dc0a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-119 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-119.15e8e9a685b5a85a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-119.15e8e9a6caed3c65], Reason = [Created], Message = [Created container maxp-119] STEP: Considering event: Type = [Normal], Name = [maxp-119.15e8e9a9e760e799], Reason = [Started], Message = [Started container maxp-119] STEP: Considering event: Type = [Normal], Name = [maxp-12.15e8e9a0dc995c25], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-12 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-12.15e8e9a244126512], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-12.15e8e9a284a73bb7], Reason = [Created], Message = [Created container maxp-12] STEP: Considering event: Type = [Normal], Name = [maxp-12.15e8e9a41371c9f9], Reason = [Started], Message = [Started container maxp-12] STEP: Considering event: Type = [Normal], Name = [maxp-120.15e8e9a32339fc0d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-120 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-120.15e8e9a87db11761], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-120.15e8e9a907b9cf0d], Reason = [Created], Message = [Created container maxp-120] STEP: Considering event: Type = [Normal], Name = [maxp-120.15e8e9ab1a99706a], Reason = [Started], Message = [Started container maxp-120] STEP: Considering event: Type = [Normal], Name = [maxp-121.15e8e9a328a03c5b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-121 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-121.15e8e9a7f8baec9d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-121.15e8e9a8ba3c8af9], Reason = [Created], Message = [Created container maxp-121] STEP: Considering event: Type = [Normal], Name = [maxp-121.15e8e9ab1aba41cf], Reason = [Started], Message = [Started container maxp-121] STEP: Considering event: Type = [Normal], Name = [maxp-122.15e8e9a32dfcaf35], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-122 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-122.15e8e9a61052915f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-122.15e8e9a682882587], Reason = [Created], Message = [Created container maxp-122] STEP: Considering 
event: Type = [Normal], Name = [maxp-122.15e8e9a9ff4cea73], Reason = [Started], Message = [Started container maxp-122] STEP: Considering event: Type = [Normal], Name = [maxp-123.15e8e9a33359e2b7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-123 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-123.15e8e9a8100bc4e2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-123.15e8e9a8c24a2d6a], Reason = [Created], Message = [Created container maxp-123] STEP: Considering event: Type = [Normal], Name = [maxp-123.15e8e9ab250ab849], Reason = [Started], Message = [Started container maxp-123] STEP: Considering event: Type = [Normal], Name = [maxp-124.15e8e9a338be92be], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-124 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-124.15e8e9a7419fe0c9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-124.15e8e9a7e4c00dfe], Reason = [Created], Message = [Created container maxp-124] STEP: Considering event: Type = [Normal], Name = [maxp-124.15e8e9aa897e12e4], Reason = [Started], Message = [Started container maxp-124] STEP: Considering event: Type = [Normal], Name = [maxp-125.15e8e9a33e28472d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-125 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-125.15e8e9a7b401cfa4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-125.15e8e9a881596dc0], Reason = [Created], Message = [Created container maxp-125] STEP: Considering event: Type = [Normal], Name = [maxp-125.15e8e9aafbf0e905], Reason = [Started], Message = [Started container maxp-125] STEP: Considering event: Type = [Normal], Name = [maxp-126.15e8e9a3438885b6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-126 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-126.15e8e9a6f562a192], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-126.15e8e9a712d6884b], Reason = [Created], Message = [Created container maxp-126] STEP: Considering event: Type = [Normal], Name = [maxp-126.15e8e9a9cc166608], Reason = [Started], Message = [Started container maxp-126] STEP: Considering event: Type = [Normal], Name = [maxp-127.15e8e9a348ede998], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-127 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-127.15e8e9a6e68dbb2a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-127.15e8e9a6fac92eef], Reason = [Created], Message = [Created container maxp-127] STEP: Considering event: Type = [Normal], Name = [maxp-127.15e8e9a9e7491dfc], Reason = [Started], Message = [Started container maxp-127] STEP: Considering event: Type = [Normal], Name = [maxp-128.15e8e9a34e51431e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-128 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name 
= [maxp-128.15e8e9a8794812b0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-128.15e8e9a8fff8d32e], Reason = [Created], Message = [Created container maxp-128] STEP: Considering event: Type = [Normal], Name = [maxp-128.15e8e9ab09aece23], Reason = [Started], Message = [Started container maxp-128] STEP: Considering event: Type = [Normal], Name = [maxp-129.15e8e9a353b8b9d5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-129 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-129.15e8e9a87c2eba29], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-129.15e8e9a8fd1e53ea], Reason = [Created], Message = [Created container maxp-129] STEP: Considering event: Type = [Normal], Name = [maxp-129.15e8e9ab2eacf925], Reason = [Started], Message = [Started container maxp-129] STEP: Considering event: Type = [Normal], Name = [maxp-13.15e8e9a0e1e6752a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-13 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-13.15e8e9a13787b789], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-13.15e8e9a14016e442], Reason = [Created], Message = [Created container maxp-13] STEP: Considering event: Type = [Normal], Name = [maxp-13.15e8e9a16d271c82], Reason = [Started], Message = [Started container maxp-13] STEP: Considering event: Type = [Normal], Name = [maxp-130.15e8e9a3591ae6ae], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-130 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-130.15e8e9a82af551de], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-130.15e8e9a8cd5166e6], Reason = [Created], Message = [Created container maxp-130] STEP: Considering event: Type = [Normal], Name = [maxp-130.15e8e9ab005be0ce], Reason = [Started], Message = [Started container maxp-130] STEP: Considering event: Type = [Normal], Name = [maxp-131.15e8e9a35e791dc1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-131 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-131.15e8e9a86550881a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-131.15e8e9a902a4bc37], Reason = [Created], Message = [Created container maxp-131] STEP: Considering event: Type = [Normal], Name = [maxp-131.15e8e9ab43de7a01], Reason = [Started], Message = [Started container maxp-131] STEP: Considering event: Type = [Normal], Name = [maxp-132.15e8e9a363e31ff4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-132 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-132.15e8e9a68deecbc0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-132.15e8e9a6ca51ee5e], Reason = [Created], Message = [Created container maxp-132] STEP: Considering event: Type = [Normal], Name = [maxp-132.15e8e9aa1e61ec2c], Reason = [Started], 
Message = [Started container maxp-132] STEP: Considering event: Type = [Normal], Name = [maxp-133.15e8e9a3693ee53b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-133 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-133.15e8e9a6fce4edc7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-133.15e8e9a714e4a6a2], Reason = [Created], Message = [Created container maxp-133] STEP: Considering event: Type = [Normal], Name = [maxp-133.15e8e9aa0cf53d45], Reason = [Started], Message = [Started container maxp-133] STEP: Considering event: Type = [Normal], Name = [maxp-134.15e8e9a36ea7058f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-134 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-134.15e8e9a75b95bd97], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-134.15e8e9a7c72ead9e], Reason = [Created], Message = [Created container maxp-134] STEP: Considering event: Type = [Normal], Name = [maxp-134.15e8e9aa1c40179e], Reason = [Started], Message = [Started container maxp-134] STEP: Considering event: Type = [Normal], Name = [maxp-135.15e8e9a37408dd77], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-135 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-135.15e8e9a81a23577d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-135.15e8e9a8bf1c684f], Reason = [Created], Message = [Created container maxp-135] STEP: Considering event: Type = [Normal], Name = [maxp-135.15e8e9ab2037a60c], Reason = [Started], Message = [Started container maxp-135] STEP: Considering event: Type = [Normal], Name = [maxp-136.15e8e9a3796b3461], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-136 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-136.15e8e9a8e17049a6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-136.15e8e9a9381988c0], Reason = [Created], Message = [Created container maxp-136] STEP: Considering event: Type = [Normal], Name = [maxp-136.15e8e9ab26d15e46], Reason = [Started], Message = [Started container maxp-136] STEP: Considering event: Type = [Normal], Name = [maxp-137.15e8e9a37eccfd6c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-137 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-137.15e8e9a948e532fb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-137.15e8e9a963eb435a], Reason = [Created], Message = [Created container maxp-137] STEP: Considering event: Type = [Normal], Name = [maxp-137.15e8e9ab2fbb25cd], Reason = [Started], Message = [Started container maxp-137] STEP: Considering event: Type = [Normal], Name = [maxp-138.15e8e9a3842eeda9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-138 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-138.15e8e9aac45bd133], Reason = [Pulled], Message = [Container image 
"k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-138.15e8e9aaec2cca16], Reason = [Created], Message = [Created container maxp-138] STEP: Considering event: Type = [Normal], Name = [maxp-138.15e8e9ab747adc41], Reason = [Started], Message = [Started container maxp-138] STEP: Considering event: Type = [Normal], Name = [maxp-139.15e8e9a38992d287], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-139 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-139.15e8e9aac31f4fc8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-139.15e8e9aaee55ca2a], Reason = [Created], Message = [Created container maxp-139] STEP: Considering event: Type = [Normal], Name = [maxp-139.15e8e9ab7fab1c2a], Reason = [Started], Message = [Started container maxp-139] STEP: Considering event: Type = [Normal], Name = [maxp-14.15e8e9a0e771453c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-14 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-14.15e8e9a243ffc97b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-14.15e8e9a284a864a9], Reason = [Created], Message = [Created container maxp-14] STEP: Considering event: Type = [Normal], Name = [maxp-14.15e8e9a3bc5155e6], Reason = [Started], Message = [Started container maxp-14] STEP: Considering event: Type = [Normal], Name = [maxp-140.15e8e9a38ef377f0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-140 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-140.15e8e9a753a0e921], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-140.15e8e9a7c5d642df], Reason = [Created], Message = [Created container maxp-140] STEP: Considering event: Type = [Normal], Name = [maxp-140.15e8e9aa13cf7a88], Reason = [Started], Message = [Started container maxp-140] STEP: Considering event: Type = [Normal], Name = [maxp-141.15e8e9a3945ab4f1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-141 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-141.15e8e9a73e39e3f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-141.15e8e9a751dc5467], Reason = [Created], Message = [Created container maxp-141] STEP: Considering event: Type = [Normal], Name = [maxp-141.15e8e9aa156ae89c], Reason = [Started], Message = [Started container maxp-141] STEP: Considering event: Type = [Normal], Name = [maxp-142.15e8e9a399b9b966], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-142 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-142.15e8e9a8e2502dda], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-142.15e8e9a93582ad67], Reason = [Created], Message = [Created container maxp-142] STEP: Considering event: Type = [Normal], Name = [maxp-142.15e8e9ab2dae8fcc], Reason = [Started], Message = [Started container maxp-142] STEP: Considering event: Type = 
[Normal], Name = [maxp-143.15e8e9a39f1614c8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-143 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-143.15e8e9a7d4bd675b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-143.15e8e9a8285d17fe], Reason = [Created], Message = [Created container maxp-143] STEP: Considering event: Type = [Normal], Name = [maxp-143.15e8e9aa391541ad], Reason = [Started], Message = [Started container maxp-143] STEP: Considering event: Type = [Normal], Name = [maxp-144.15e8e9a3a4777b0e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-144 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-144.15e8e9a70f3c138c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-144.15e8e9a7220ea52c], Reason = [Created], Message = [Created container maxp-144] STEP: Considering event: Type = [Normal], Name = [maxp-144.15e8e9aa156f4e3f], Reason = [Started], Message = [Started container maxp-144] STEP: Considering event: Type = [Normal], Name = [maxp-145.15e8e9a3a9d84624], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-145 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-145.15e8e9a92adc8eee], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-145.15e8e9a965009e68], Reason = [Created], Message = [Created container maxp-145] STEP: Considering event: Type = [Normal], Name = [maxp-145.15e8e9ab63060992], Reason = [Started], Message = [Started container maxp-145] STEP: Considering event: Type = [Normal], Name = [maxp-146.15e8e9a3af45cff5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-146 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-146.15e8e9a963b766f6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-146.15e8e9a9a08d3ab3], Reason = [Created], Message = [Created container maxp-146] STEP: Considering event: Type = [Normal], Name = [maxp-146.15e8e9ab3bdf6c52], Reason = [Started], Message = [Started container maxp-146] STEP: Considering event: Type = [Normal], Name = [maxp-147.15e8e9a3b4a52918], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-147 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-147.15e8e9a96228ec37], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-147.15e8e9a9aa8d58cc], Reason = [Created], Message = [Created container maxp-147] STEP: Considering event: Type = [Normal], Name = [maxp-147.15e8e9ab4f50cde0], Reason = [Started], Message = [Started container maxp-147] STEP: Considering event: Type = [Normal], Name = [maxp-148.15e8e9a3ba00ee7b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-148 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-148.15e8e9a751e84519], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering 
event: Type = [Normal], Name = [maxp-148.15e8e9a7ba15c7eb], Reason = [Created], Message = [Created container maxp-148] STEP: Considering event: Type = [Normal], Name = [maxp-148.15e8e9aa1e9e5e53], Reason = [Started], Message = [Started container maxp-148] STEP: Considering event: Type = [Normal], Name = [maxp-149.15e8e9a3bf6f22ec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-149 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-149.15e8e9a6e962d60b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-149.15e8e9a7086feee3], Reason = [Created], Message = [Created container maxp-149] STEP: Considering event: Type = [Normal], Name = [maxp-149.15e8e9a9cc1497bb], Reason = [Started], Message = [Started container maxp-149] STEP: Considering event: Type = [Normal], Name = [maxp-15.15e8e9a0ecceeeeb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-15 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-15.15e8e9a15b5f59c2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-15.15e8e9a16d25d89b], Reason = [Created], Message = [Created container maxp-15] STEP: Considering event: Type = [Normal], Name = [maxp-15.15e8e9a1aee6e022], Reason = [Started], Message = [Started container maxp-15] STEP: Considering event: Type = [Normal], Name = [maxp-150.15e8e9a3c4cf1abc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-150 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-150.15e8e9a9db242a4c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-150.15e8e9aa12965fa7], Reason = [Created], Message = [Created container maxp-150] STEP: Considering event: Type = [Normal], Name = [maxp-150.15e8e9ab5fc602ac], Reason = [Started], Message = [Started container maxp-150] STEP: Considering event: Type = [Normal], Name = [maxp-151.15e8e9a3ca34c1f7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-151 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-151.15e8e9a9a58c7d2d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-151.15e8e9aa0a567966], Reason = [Created], Message = [Created container maxp-151] STEP: Considering event: Type = [Normal], Name = [maxp-151.15e8e9ab722107cf], Reason = [Started], Message = [Started container maxp-151] STEP: Considering event: Type = [Normal], Name = [maxp-152.15e8e9a3cf99661b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-152 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-152.15e8e9a8f0682dae], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-152.15e8e9a922e48e12], Reason = [Created], Message = [Created container maxp-152] STEP: Considering event: Type = [Normal], Name = [maxp-152.15e8e9aae772d035], Reason = [Started], Message = [Started container maxp-152] STEP: Considering event: Type = [Normal], Name = [maxp-153.15e8e9a3d527a452], Reason = [Scheduled], Message 
= [Successfully assigned sched-pred-1910/maxp-153 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-153.15e8e9a94058f2a8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-153.15e8e9a99d75f6e3], Reason = [Created], Message = [Created container maxp-153] STEP: Considering event: Type = [Normal], Name = [maxp-153.15e8e9aafde6665c], Reason = [Started], Message = [Started container maxp-153] STEP: Considering event: Type = [Normal], Name = [maxp-154.15e8e9a3da79c3df], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-154 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-154.15e8e9a9855423be], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-154.15e8e9a9e27df4d9], Reason = [Created], Message = [Created container maxp-154] STEP: Considering event: Type = [Normal], Name = [maxp-154.15e8e9ab69650ed6], Reason = [Started], Message = [Started container maxp-154] STEP: Considering event: Type = [Normal], Name = [maxp-155.15e8e9a3dfdd5c66], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-155 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-155.15e8e9aaec2376b9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-155.15e8e9ab1bb796b0], Reason = [Created], Message = [Created container maxp-155] STEP: Considering event: Type = [Normal], Name = [maxp-155.15e8e9ab6d3aecec], Reason = [Started], Message = [Started container maxp-155] STEP: Considering event: Type = [Normal], Name = [maxp-156.15e8e9a3e547ecec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-156 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-156.15e8e9a8448efb09], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-156.15e8e9a8631ff20a], Reason = [Created], Message = [Created container maxp-156] STEP: Considering event: Type = [Normal], Name = [maxp-156.15e8e9aa421ecc3a], Reason = [Started], Message = [Started container maxp-156] STEP: Considering event: Type = [Normal], Name = [maxp-157.15e8e9a3eaa7c97f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-157 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-157.15e8e9a90b52209a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-157.15e8e9a98b6c7d3e], Reason = [Created], Message = [Created container maxp-157] STEP: Considering event: Type = [Normal], Name = [maxp-157.15e8e9aaedd0489c], Reason = [Started], Message = [Started container maxp-157] STEP: Considering event: Type = [Normal], Name = [maxp-158.15e8e9a3f003c881], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-158 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-158.15e8e9aad21d708f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-158.15e8e9ab0dfb9bc1], Reason = [Created], 
Message = [Created container maxp-158] STEP: Considering event: Type = [Normal], Name = [maxp-158.15e8e9ab8562d9ff], Reason = [Started], Message = [Started container maxp-158] STEP: Considering event: Type = [Normal], Name = [maxp-159.15e8e9a3f566f7f8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-159 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-159.15e8e9a88050cb04], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-159.15e8e9a8c8042008], Reason = [Created], Message = [Created container maxp-159] STEP: Considering event: Type = [Normal], Name = [maxp-159.15e8e9aaec68e071], Reason = [Started], Message = [Started container maxp-159] STEP: Considering event: Type = [Normal], Name = [maxp-16.15e8e9a0f2377297], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-16 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-16.15e8e9a174b7cf40], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-16.15e8e9a179c5ca58], Reason = [Created], Message = [Created container maxp-16] STEP: Considering event: Type = [Normal], Name = [maxp-16.15e8e9a1dda95865], Reason = [Started], Message = [Started container maxp-16] STEP: Considering event: Type = [Normal], Name = [maxp-160.15e8e9a3faddcc85], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-160 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-160.15e8e9aa0cf482f4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-160.15e8e9aa7e63d936], Reason = [Created], Message = [Created container maxp-160] STEP: Considering event: Type = [Normal], Name = [maxp-160.15e8e9ab70a9d61e], Reason = [Started], Message = [Started container maxp-160] STEP: Considering event: Type = [Normal], Name = [maxp-161.15e8e9a4003f2d47], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-161 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-161.15e8e9a9b4be997d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-161.15e8e9aa16c268a2], Reason = [Created], Message = [Created container maxp-161] STEP: Considering event: Type = [Normal], Name = [maxp-161.15e8e9ab62ae3882], Reason = [Started], Message = [Started container maxp-161] STEP: Considering event: Type = [Normal], Name = [maxp-162.15e8e9a405ab8b83], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-162 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-162.15e8e9a9a582c706], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-162.15e8e9aa0568be08], Reason = [Created], Message = [Created container maxp-162] STEP: Considering event: Type = [Normal], Name = [maxp-162.15e8e9ab72c99ba4], Reason = [Started], Message = [Started container maxp-162] STEP: Considering event: Type = [Normal], Name = [maxp-163.15e8e9a40b0ccee4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-163 to 
ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-163.15e8e9a873d37cf8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-163.15e8e9a8c6f6af74], Reason = [Created], Message = [Created container maxp-163] STEP: Considering event: Type = [Normal], Name = [maxp-163.15e8e9aaedcb8cc9], Reason = [Started], Message = [Started container maxp-163] STEP: Considering event: Type = [Normal], Name = [maxp-164.15e8e9a410706de6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-164 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-164.15e8e9aac55e047a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-164.15e8e9aaeb0978c8], Reason = [Created], Message = [Created container maxp-164] STEP: Considering event: Type = [Normal], Name = [maxp-164.15e8e9ab7683362a], Reason = [Started], Message = [Started container maxp-164] STEP: Considering event: Type = [Normal], Name = [maxp-165.15e8e9a415cf971a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-165 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-165.15e8e9a963257375], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-165.15e8e9a9c1a8c084], Reason = [Created], Message = [Created container maxp-165] STEP: Considering event: Type = [Normal], Name = [maxp-165.15e8e9ab6170d1dd], Reason = [Started], Message = [Started container maxp-165] STEP: Considering event: Type = [Normal], Name = [maxp-166.15e8e9a41b2f5acd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-166 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-166.15e8e9a8ec85c409], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-166.15e8e9a95ce100b6], Reason = [Created], Message = [Created container maxp-166] STEP: Considering event: Type = [Normal], Name = [maxp-166.15e8e9aaf0e37f1c], Reason = [Started], Message = [Started container maxp-166] STEP: Considering event: Type = [Normal], Name = [maxp-167.15e8e9a4209a67d7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-167 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-167.15e8e9a9b09ac6e7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-167.15e8e9aa0fc6d1f2], Reason = [Created], Message = [Created container maxp-167] STEP: Considering event: Type = [Normal], Name = [maxp-167.15e8e9ab0540f783], Reason = [Started], Message = [Started container maxp-167] STEP: Considering event: Type = [Normal], Name = [maxp-168.15e8e9a425fac93c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-168 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-168.15e8e9a8f441d7f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-168.15e8e9a937edf27c], Reason = [Created], Message = [Created container maxp-168] STEP: 
Considering event: Type = [Normal], Name = [maxp-168.15e8e9aaedcdf59a], Reason = [Started], Message = [Started container maxp-168] STEP: Considering event: Type = [Normal], Name = [maxp-169.15e8e9a42b56d4cc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-169 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-169.15e8e9aa0494709a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-169.15e8e9aa7e0ee1a8], Reason = [Created], Message = [Created container maxp-169] STEP: Considering event: Type = [Normal], Name = [maxp-169.15e8e9ab6cf46754], Reason = [Started], Message = [Started container maxp-169] STEP: Considering event: Type = [Normal], Name = [maxp-17.15e8e9a0f79b0bdd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-17 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-17.15e8e9a24406924c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-17.15e8e9a2842ef907], Reason = [Created], Message = [Created container maxp-17] STEP: Considering event: Type = [Normal], Name = [maxp-17.15e8e9a397174b7d], Reason = [Started], Message = [Started container maxp-17] STEP: Considering event: Type = [Normal], Name = [maxp-170.15e8e9a430bc5975], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-170 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-170.15e8e9aacdb1ce78], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-170.15e8e9aaf9fbd0cc], Reason = [Created], Message = [Created container maxp-170] STEP: Considering event: Type = [Normal], Name = [maxp-170.15e8e9ab86b72ed6], Reason = [Started], Message = [Started container maxp-170] STEP: Considering event: Type = [Normal], Name = [maxp-171.15e8e9a436177ea2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-171 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-171.15e8e9a9b7b87610], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-171.15e8e9aa00acfe04], Reason = [Created], Message = [Created container maxp-171] STEP: Considering event: Type = [Normal], Name = [maxp-171.15e8e9aaf0e495dd], Reason = [Started], Message = [Started container maxp-171] STEP: Considering event: Type = [Normal], Name = [maxp-172.15e8e9a43b7db320], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-172 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-172.15e8e9a9dc679720], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-172.15e8e9aa59dcba7d], Reason = [Created], Message = [Created container maxp-172] STEP: Considering event: Type = [Normal], Name = [maxp-172.15e8e9ab70eb05c6], Reason = [Started], Message = [Started container maxp-172] STEP: Considering event: Type = [Normal], Name = [maxp-173.15e8e9a440db8ca1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-173 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], 
Name = [maxp-173.15e8e9a9b8135ee1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-173.15e8e9a9f292ed11], Reason = [Created], Message = [Created container maxp-173] STEP: Considering event: Type = [Normal], Name = [maxp-173.15e8e9aafe366130], Reason = [Started], Message = [Started container maxp-173] STEP: Considering event: Type = [Normal], Name = [maxp-174.15e8e9a4463ce369], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-174 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-174.15e8e9aac5736a64], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-174.15e8e9aaf3f13a2e], Reason = [Created], Message = [Created container maxp-174] STEP: Considering event: Type = [Normal], Name = [maxp-174.15e8e9ab82dd053b], Reason = [Started], Message = [Started container maxp-174] STEP: Considering event: Type = [Normal], Name = [maxp-175.15e8e9a44ba24a18], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-175 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-175.15e8e9a84f92b9bf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-175.15e8e9a89a39d84e], Reason = [Created], Message = [Created container maxp-175] STEP: Considering event: Type = [Normal], Name = [maxp-175.15e8e9aa3ee99a4c], Reason = [Started], Message = [Started container maxp-175] STEP: Considering event: Type = [Normal], Name = [maxp-176.15e8e9a4510d703e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-176 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-176.15e8e9aa86c6e198], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-176.15e8e9aac3646958], Reason = [Created], Message = [Created container maxp-176] STEP: Considering event: Type = [Normal], Name = [maxp-176.15e8e9ab7219ec8c], Reason = [Started], Message = [Started container maxp-176] STEP: Considering event: Type = [Normal], Name = [maxp-177.15e8e9a4567139c6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-177 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-177.15e8e9aac4558f9d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-177.15e8e9aaf3f4a5b1], Reason = [Created], Message = [Created container maxp-177] STEP: Considering event: Type = [Normal], Name = [maxp-177.15e8e9ab84b0d243], Reason = [Started], Message = [Started container maxp-177] STEP: Considering event: Type = [Normal], Name = [maxp-178.15e8e9a45bcf3274], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-178 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-178.15e8e9aa7b5e0cf3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-178.15e8e9aabcfff27b], Reason = [Created], Message = [Created container maxp-178] STEP: Considering event: Type = [Normal], Name = [maxp-178.15e8e9ab117b6ae5], Reason = 
[Started], Message = [Started container maxp-178] STEP: Considering event: Type = [Normal], Name = [maxp-179.15e8e9a4612840c5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-179 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-179.15e8e9aac5647c03], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-179.15e8e9aaff77522e], Reason = [Created], Message = [Created container maxp-179] STEP: Considering event: Type = [Normal], Name = [maxp-179.15e8e9ab86c4a648], Reason = [Started], Message = [Started container maxp-179] STEP: Considering event: Type = [Normal], Name = [maxp-18.15e8e9a0fd075ea5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-18 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-18.15e8e9a2816b0760], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-18.15e8e9a2a9fe9148], Reason = [Created], Message = [Created container maxp-18] STEP: Considering event: Type = [Normal], Name = [maxp-18.15e8e9a4002c0ee9], Reason = [Started], Message = [Started container maxp-18] STEP: Considering event: Type = [Normal], Name = [maxp-180.15e8e9a4668ee0b8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-180 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-180.15e8e9aa3df3e8d6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-180.15e8e9aab744f57d], Reason = [Created], Message = [Created container maxp-180] STEP: Considering event: Type = [Normal], Name = [maxp-180.15e8e9ab70bb77f4], Reason = [Started], Message = [Started container maxp-180] STEP: Considering event: Type = [Normal], Name = [maxp-181.15e8e9a46bf3a05d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-181 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-181.15e8e9aaf523e4f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-181.15e8e9ab2024a8d2], Reason = [Created], Message = [Created container maxp-181] STEP: Considering event: Type = [Normal], Name = [maxp-181.15e8e9ab72a98fef], Reason = [Started], Message = [Started container maxp-181] STEP: Considering event: Type = [Normal], Name = [maxp-182.15e8e9a4715384c9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-182 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-182.15e8e9a9b0a35afb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-182.15e8e9a9f45bf7b5], Reason = [Created], Message = [Created container maxp-182] STEP: Considering event: Type = [Normal], Name = [maxp-182.15e8e9aafdf18005], Reason = [Started], Message = [Started container maxp-182] STEP: Considering event: Type = [Normal], Name = [maxp-183.15e8e9a476b36202], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-183 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-183.15e8e9a9b7b41682], Reason = [Pulled], Message = [Container image 
"k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-183.15e8e9a9c72e6194], Reason = [Created], Message = [Created container maxp-183] STEP: Considering event: Type = [Normal], Name = [maxp-183.15e8e9aaf0e1aa63], Reason = [Started], Message = [Started container maxp-183] STEP: Considering event: Type = [Normal], Name = [maxp-184.15e8e9a47c1629eb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-184 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-184.15e8e9a9f028fd21], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-184.15e8e9aa5c8914cc], Reason = [Created], Message = [Created container maxp-184] STEP: Considering event: Type = [Normal], Name = [maxp-184.15e8e9ab604c7932], Reason = [Started], Message = [Started container maxp-184] STEP: Considering event: Type = [Normal], Name = [maxp-185.15e8e9a4817a62f5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-185 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-185.15e8e9a9e8f6ba18], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-185.15e8e9aa1414781e], Reason = [Created], Message = [Created container maxp-185] STEP: Considering event: Type = [Normal], Name = [maxp-185.15e8e9ab020a6ceb], Reason = [Started], Message = [Started container maxp-185] STEP: Considering event: Type = [Normal], Name = [maxp-186.15e8e9a486db874d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-186 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-186.15e8e9a9c369f2d7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-186.15e8e9a9fefdc840], Reason = [Created], Message = [Created container maxp-186] STEP: Considering event: Type = [Normal], Name = [maxp-186.15e8e9aafe33fc5b], Reason = [Started], Message = [Started container maxp-186] STEP: Considering event: Type = [Normal], Name = [maxp-187.15e8e9a48c3ab24c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-187 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-187.15e8e9aad54ef6d8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-187.15e8e9ab0156143b], Reason = [Created], Message = [Created container maxp-187] STEP: Considering event: Type = [Normal], Name = [maxp-187.15e8e9ab5df29b17], Reason = [Started], Message = [Started container maxp-187] STEP: Considering event: Type = [Normal], Name = [maxp-188.15e8e9a491a05720], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-188 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-188.15e8e9aa951544ce], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-188.15e8e9aabe9e6e80], Reason = [Created], Message = [Created container maxp-188] STEP: Considering event: Type = [Normal], Name = [maxp-188.15e8e9ab16478a33], Reason = [Started], Message = [Started container maxp-188] STEP: Considering event: Type = 
[Normal], Name = [maxp-189.15e8e9a496fa21fc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-189 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-189.15e8e9aac2db971f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-189.15e8e9aaedd1ae37], Reason = [Created], Message = [Created container maxp-189] STEP: Considering event: Type = [Normal], Name = [maxp-189.15e8e9ab28a4c3ce], Reason = [Started], Message = [Started container maxp-189] STEP: Considering event: Type = [Normal], Name = [maxp-19.15e8e9a102616a72], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-19 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-19.15e8e9a1883449d2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-19.15e8e9a1a1c264eb], Reason = [Created], Message = [Created container maxp-19] STEP: Considering event: Type = [Normal], Name = [maxp-19.15e8e9a23edbb5e7], Reason = [Started], Message = [Started container maxp-19] STEP: Considering event: Type = [Normal], Name = [maxp-190.15e8e9a49c583c5c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-190 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-190.15e8e9a90bea141a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-190.15e8e9a98e04c1e5], Reason = [Created], Message = [Created container maxp-190] STEP: Considering event: Type = [Normal], Name = [maxp-190.15e8e9aadda302c1], Reason = [Started], Message = [Started container maxp-190] STEP: Considering event: Type = [Normal], Name = [maxp-191.15e8e9a4a1c990ff], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-191 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-191.15e8e9a8ec79ce48], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-191.15e8e9a949a68059], Reason = [Created], Message = [Created container maxp-191] STEP: Considering event: Type = [Normal], Name = [maxp-191.15e8e9aaced33c29], Reason = [Started], Message = [Started container maxp-191] STEP: Considering event: Type = [Normal], Name = [maxp-192.15e8e9a4a725ab50], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-192 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-192.15e8e9aaec6f7fcf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-192.15e8e9ab1b2c73d9], Reason = [Created], Message = [Created container maxp-192] STEP: Considering event: Type = [Normal], Name = [maxp-192.15e8e9ab70dd8493], Reason = [Started], Message = [Started container maxp-192] STEP: Considering event: Type = [Normal], Name = [maxp-193.15e8e9a4ac83a207], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-193 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-193.15e8e9aaf6c449ce], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = 
[Normal], Name = [maxp-193.15e8e9ab2530e1f1], Reason = [Created], Message = [Created container maxp-193] STEP: Considering event: Type = [Normal], Name = [maxp-193.15e8e9ab70757c70], Reason = [Started], Message = [Started container maxp-193] STEP: Considering event: Type = [Normal], Name = [maxp-194.15e8e9a4b1ed79c2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-194 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-194.15e8e9aaec93356e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-194.15e8e9ab13ebffef], Reason = [Created], Message = [Created container maxp-194] STEP: Considering event: Type = [Normal], Name = [maxp-194.15e8e9ab513ae2f2], Reason = [Started], Message = [Started container maxp-194] STEP: Considering event: Type = [Normal], Name = [maxp-195.15e8e9a4b74d10ab], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-195 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-195.15e8e9aaf9424639], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-195.15e8e9ab1b276953], Reason = [Created], Message = [Created container maxp-195] STEP: Considering event: Type = [Normal], Name = [maxp-195.15e8e9ab720a5b9b], Reason = [Started], Message = [Started container maxp-195] STEP: Considering event: Type = [Normal], Name = [maxp-196.15e8e9a4bcaacbbd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-196 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-196.15e8e9aaf0f18886], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-196.15e8e9ab1b2b9cfd], Reason = [Created], Message = [Created container maxp-196] STEP: Considering event: Type = [Normal], Name = [maxp-196.15e8e9ab65f376bb], Reason = [Started], Message = [Started container maxp-196] STEP: Considering event: Type = [Normal], Name = [maxp-197.15e8e9a4c215d085], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-197 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-197.15e8e9aaf0d52b0b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-197.15e8e9ab1bb87c1c], Reason = [Created], Message = [Created container maxp-197] STEP: Considering event: Type = [Normal], Name = [maxp-197.15e8e9ab6c1f2b53], Reason = [Started], Message = [Started container maxp-197] STEP: Considering event: Type = [Normal], Name = [maxp-198.15e8e9a4c77cbe50], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-198 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-198.15e8e9aaf52856f6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-198.15e8e9ab201e08bc], Reason = [Created], Message = [Created container maxp-198] STEP: Considering event: Type = [Normal], Name = [maxp-198.15e8e9ab6f6cfc90], Reason = [Started], Message = [Started container maxp-198] STEP: Considering event: Type = [Normal], Name = [maxp-199.15e8e9a4ccdde062], Reason = [Scheduled], Message = 
[Successfully assigned sched-pred-1910/maxp-199 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-199.15e8e9aae05ed2b3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-199.15e8e9ab0ac47760], Reason = [Created], Message = [Created container maxp-199] STEP: Considering event: Type = [Normal], Name = [maxp-199.15e8e9ab606dd4b9], Reason = [Started], Message = [Started container maxp-199] STEP: Considering event: Type = [Normal], Name = [maxp-2.15e8e9a0a6851f82], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-2 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-2.15e8e9a2213213bb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-2.15e8e9a243a2008e], Reason = [Created], Message = [Created container maxp-2] STEP: Considering event: Type = [Normal], Name = [maxp-2.15e8e9a2f69ac3c1], Reason = [Started], Message = [Started container maxp-2] STEP: Considering event: Type = [Normal], Name = [maxp-20.15e8e9a107ce0a53], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-20 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-20.15e8e9a1cb7b57c9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-20.15e8e9a1db1f84bf], Reason = [Created], Message = [Created container maxp-20] STEP: Considering event: Type = [Normal], Name = [maxp-20.15e8e9a266bad302], Reason = [Started], Message = [Started container maxp-20] STEP: Considering event: Type = [Normal], Name = [maxp-21.15e8e9a10d2cb022], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-21 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-21.15e8e9a1e334f6d9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-21.15e8e9a1f80bee72], Reason = [Created], Message = [Created container maxp-21] STEP: Considering event: Type = [Normal], Name = [maxp-21.15e8e9a27d8f12ec], Reason = [Started], Message = [Started container maxp-21] STEP: Considering event: Type = [Normal], Name = [maxp-22.15e8e9a1129c6b44], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-22 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-22.15e8e9a2b4550555], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-22.15e8e9a2d510b053], Reason = [Created], Message = [Created container maxp-22] STEP: Considering event: Type = [Normal], Name = [maxp-22.15e8e9a47aeb62be], Reason = [Started], Message = [Started container maxp-22] STEP: Considering event: Type = [Normal], Name = [maxp-23.15e8e9a117fdb8ad], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-23 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-23.15e8e9a2b6cbb99a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-23.15e8e9a2d19ee117], Reason = [Created], Message = [Created container maxp-23] 
STEP: Considering event: Type = [Normal], Name = [maxp-23.15e8e9a47aee669b], Reason = [Started], Message = [Started container maxp-23] STEP: Considering event: Type = [Normal], Name = [maxp-24.15e8e9a11d62547f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-24 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-24.15e8e9a2b85330a0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-24.15e8e9a2f514389e], Reason = [Created], Message = [Created container maxp-24] STEP: Considering event: Type = [Normal], Name = [maxp-24.15e8e9a47a807dfc], Reason = [Started], Message = [Started container maxp-24] STEP: Considering event: Type = [Normal], Name = [maxp-25.15e8e9a122c34d8a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-25 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-25.15e8e9a2003a1369], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-25.15e8e9a209097afa], Reason = [Created], Message = [Created container maxp-25] STEP: Considering event: Type = [Normal], Name = [maxp-25.15e8e9a2c17fe6bd], Reason = [Started], Message = [Started container maxp-25] STEP: Considering event: Type = [Normal], Name = [maxp-26.15e8e9a1282b6854], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-26 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-26.15e8e9a23ee351e6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-26.15e8e9a25b70b7c8], Reason = [Created], Message = [Created container maxp-26] STEP: Considering event: Type = [Normal], Name = [maxp-26.15e8e9a3357868ba], Reason = [Started], Message = [Started container maxp-26] STEP: Considering event: Type = [Normal], Name = [maxp-27.15e8e9a12dcbccc9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-27 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-27.15e8e9a2d91a57b0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-27.15e8e9a316e92fb0], Reason = [Created], Message = [Created container maxp-27] STEP: Considering event: Type = [Normal], Name = [maxp-27.15e8e9a49adf3d87], Reason = [Started], Message = [Started container maxp-27] STEP: Considering event: Type = [Normal], Name = [maxp-28.15e8e9a133429c17], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-28 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-28.15e8e9a235256be0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-28.15e8e9a253e6ea6e], Reason = [Created], Message = [Created container maxp-28] STEP: Considering event: Type = [Normal], Name = [maxp-28.15e8e9a334ad3738], Reason = [Started], Message = [Started container maxp-28] STEP: Considering event: Type = [Normal], Name = [maxp-29.15e8e9a138af13ad], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-29 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = 
[maxp-29.15e8e9a307b1bb33], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-29.15e8e9a36e0145c1], Reason = [Created], Message = [Created container maxp-29] STEP: Considering event: Type = [Normal], Name = [maxp-29.15e8e9a543a94ed0], Reason = [Started], Message = [Started container maxp-29] STEP: Considering event: Type = [Normal], Name = [maxp-3.15e8e9a0abe97c84], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-3 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-3.15e8e9a1a9e0d2bc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-3.15e8e9a1cb015de1], Reason = [Created], Message = [Created container maxp-3] STEP: Considering event: Type = [Normal], Name = [maxp-3.15e8e9a2244020e1], Reason = [Started], Message = [Started container maxp-3] STEP: Considering event: Type = [Normal], Name = [maxp-30.15e8e9a13e24ff3c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-30 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-30.15e8e9a277120fd9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-30.15e8e9a28b1da3de], Reason = [Created], Message = [Created container maxp-30] STEP: Considering event: Type = [Normal], Name = [maxp-30.15e8e9a339d44067], Reason = [Started], Message = [Started container maxp-30] STEP: Considering event: Type = [Normal], Name = [maxp-31.15e8e9a14393bbcc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-31 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-31.15e8e9a302ea0e5f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-31.15e8e9a3679fb217], Reason = [Created], Message = [Created container maxp-31] STEP: Considering event: Type = [Normal], Name = [maxp-31.15e8e9a52e8d9edf], Reason = [Started], Message = [Started container maxp-31] STEP: Considering event: Type = [Normal], Name = [maxp-32.15e8e9a148f55a96], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-32 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-32.15e8e9a30f8e3e67], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-32.15e8e9a36e6e3d77], Reason = [Created], Message = [Created container maxp-32] STEP: Considering event: Type = [Normal], Name = [maxp-32.15e8e9a5231a427a], Reason = [Started], Message = [Started container maxp-32] STEP: Considering event: Type = [Normal], Name = [maxp-33.15e8e9a14e6d39f4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-33 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-33.15e8e9a200d2055d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-33.15e8e9a20cfc8e7c], Reason = [Created], Message = [Created container maxp-33] STEP: Considering event: Type = [Normal], Name = [maxp-33.15e8e9a2e0dd3082], Reason = [Started], Message = [Started container maxp-33] 
STEP: Considering event: Type = [Normal], Name = [maxp-34.15e8e9a153cb1102], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-34 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-34.15e8e9a307eee679], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-34.15e8e9a3685962c7], Reason = [Created], Message = [Created container maxp-34] STEP: Considering event: Type = [Normal], Name = [maxp-34.15e8e9a4f61b6724], Reason = [Started], Message = [Started container maxp-34] STEP: Considering event: Type = [Normal], Name = [maxp-35.15e8e9a15936cc99], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-35 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-35.15e8e9a307ea7601], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-35.15e8e9a3685a5419], Reason = [Created], Message = [Created container maxp-35] STEP: Considering event: Type = [Normal], Name = [maxp-35.15e8e9a586f71c8d], Reason = [Started], Message = [Started container maxp-35] STEP: Considering event: Type = [Normal], Name = [maxp-36.15e8e9a15e99381c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-36 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-36.15e8e9a3225a424d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-36.15e8e9a368330f74], Reason = [Created], Message = [Created container maxp-36] STEP: Considering event: Type = [Normal], Name = [maxp-36.15e8e9a52e263e79], Reason = [Started], Message = [Started container maxp-36] STEP: Considering event: Type = [Normal], Name = [maxp-37.15e8e9a164120ea4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-37 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-37.15e8e9a2d219bfb5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-37.15e8e9a303bde6cb], Reason = [Created], Message = [Created container maxp-37] STEP: Considering event: Type = [Normal], Name = [maxp-37.15e8e9a3bbbd8795], Reason = [Started], Message = [Started container maxp-37] STEP: Considering event: Type = [Normal], Name = [maxp-38.15e8e9a16969ac55], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-38 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-38.15e8e9a2a8b95b90], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-38.15e8e9a2e2c02adb], Reason = [Created], Message = [Created container maxp-38] STEP: Considering event: Type = [Normal], Name = [maxp-38.15e8e9a3e60fc1ed], Reason = [Started], Message = [Started container maxp-38] STEP: Considering event: Type = [Normal], Name = [maxp-39.15e8e9a16ed01103], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-39 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-39.15e8e9a2af5941ec], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type 
= [Normal], Name = [maxp-39.15e8e9a2f158f2d1], Reason = [Created], Message = [Created container maxp-39] STEP: Considering event: Type = [Normal], Name = [maxp-39.15e8e9a3f4720841], Reason = [Started], Message = [Started container maxp-39] STEP: Considering event: Type = [Normal], Name = [maxp-4.15e8e9a0b14d4bfa], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-4 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-4.15e8e9a1ea1c9a4b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-4.15e8e9a2160d8fa6], Reason = [Created], Message = [Created container maxp-4] STEP: Considering event: Type = [Normal], Name = [maxp-4.15e8e9a29fa9ec3e], Reason = [Started], Message = [Started container maxp-4] STEP: Considering event: Type = [Normal], Name = [maxp-40.15e8e9a17443d9fd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-40 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-40.15e8e9a28b1618cb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-40.15e8e9a2a9b1a003], Reason = [Created], Message = [Created container maxp-40] STEP: Considering event: Type = [Normal], Name = [maxp-40.15e8e9a3c1dfb106], Reason = [Started], Message = [Started container maxp-40] STEP: Considering event: Type = [Normal], Name = [maxp-41.15e8e9a179a50000], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-41 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-41.15e8e9a32da1fe4c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-41.15e8e9a37d549d94], Reason = [Created], Message = [Created container maxp-41] STEP: Considering event: Type = [Normal], Name = [maxp-41.15e8e9a5ae10c92f], Reason = [Started], Message = [Started container maxp-41] STEP: Considering event: Type = [Normal], Name = [maxp-42.15e8e9a17f0d6ac3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-42 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-42.15e8e9a32f15271f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-42.15e8e9a37d51c470], Reason = [Created], Message = [Created container maxp-42] STEP: Considering event: Type = [Normal], Name = [maxp-42.15e8e9a568bbfbcd], Reason = [Started], Message = [Started container maxp-42] STEP: Considering event: Type = [Normal], Name = [maxp-43.15e8e9a1846d2a8f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-43 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-43.15e8e9a38076ba5d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-43.15e8e9a39ef2380e], Reason = [Created], Message = [Created container maxp-43] STEP: Considering event: Type = [Normal], Name = [maxp-43.15e8e9a5959b7652], Reason = [Started], Message = [Started container maxp-43] STEP: Considering event: Type = [Normal], Name = [maxp-44.15e8e9a189d76bd9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-44 to 
ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-44.15e8e9a349baf42e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-44.15e8e9a37c3f6e52], Reason = [Created], Message = [Created container maxp-44] STEP: Considering event: Type = [Normal], Name = [maxp-44.15e8e9a5bf62240d], Reason = [Started], Message = [Started container maxp-44] STEP: Considering event: Type = [Normal], Name = [maxp-45.15e8e9a18f3c0641], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-45 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-45.15e8e9a3418e7085], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-45.15e8e9a36d7fe7bc], Reason = [Created], Message = [Created container maxp-45] STEP: Considering event: Type = [Normal], Name = [maxp-45.15e8e9a5e53651e6], Reason = [Started], Message = [Started container maxp-45] STEP: Considering event: Type = [Normal], Name = [maxp-46.15e8e9a194ad1453], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-46 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-46.15e8e9a3cffb0c9a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-46.15e8e9a3fb7e1514], Reason = [Created], Message = [Created container maxp-46] STEP: Considering event: Type = [Normal], Name = [maxp-46.15e8e9a606b86a05], Reason = [Started], Message = [Started container maxp-46] STEP: Considering event: Type = [Normal], Name = [maxp-47.15e8e9a19a0c40be], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-47 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-47.15e8e9a3794ee0ea], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-47.15e8e9a39ee13ef8], Reason = [Created], Message = [Created container maxp-47] STEP: Considering event: Type = [Normal], Name = [maxp-47.15e8e9a5ab182967], Reason = [Started], Message = [Started container maxp-47] STEP: Considering event: Type = [Normal], Name = [maxp-48.15e8e9a19f73c315], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-48 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-48.15e8e9a3680174a6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-48.15e8e9a37a63b5e4], Reason = [Created], Message = [Created container maxp-48] STEP: Considering event: Type = [Normal], Name = [maxp-48.15e8e9a57c253f0c], Reason = [Started], Message = [Started container maxp-48] STEP: Considering event: Type = [Normal], Name = [maxp-49.15e8e9a1a4ed2a48], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-49 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-49.15e8e9a3e38dcd6b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-49.15e8e9a4453d2f1d], Reason = [Created], Message = [Created container maxp-49] STEP: Considering event: Type = [Normal], Name = 
[maxp-49.15e8e9a68e843b86], Reason = [Started], Message = [Started container maxp-49] STEP: Considering event: Type = [Normal], Name = [maxp-5.15e8e9a0b6beb6ee], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-5 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-5.15e8e9a1403e0c56], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-5.15e8e9a1514b121c], Reason = [Created], Message = [Created container maxp-5] STEP: Considering event: Type = [Normal], Name = [maxp-5.15e8e9a185abaee5], Reason = [Started], Message = [Started container maxp-5] STEP: Considering event: Type = [Normal], Name = [maxp-50.15e8e9a1aa472d84], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-50 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-50.15e8e9a3c51babf5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-50.15e8e9a4289da8d4], Reason = [Created], Message = [Created container maxp-50] STEP: Considering event: Type = [Normal], Name = [maxp-50.15e8e9a667edda0c], Reason = [Started], Message = [Started container maxp-50] STEP: Considering event: Type = [Normal], Name = [maxp-51.15e8e9a1afab1d61], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-51 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-51.15e8e9a33505c602], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-51.15e8e9a378007d9a], Reason = [Created], Message = [Created container maxp-51] STEP: Considering event: Type = [Normal], Name = [maxp-51.15e8e9a52df69ffd], Reason = [Started], Message = [Started container maxp-51] STEP: Considering event: Type = [Normal], Name = [maxp-52.15e8e9a1b516fec3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-52 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-52.15e8e9a3f41cd73f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-52.15e8e9a43c910231], Reason = [Created], Message = [Created container maxp-52] STEP: Considering event: Type = [Normal], Name = [maxp-52.15e8e9a70bed8e3a], Reason = [Started], Message = [Started container maxp-52] STEP: Considering event: Type = [Normal], Name = [maxp-53.15e8e9a1ba782013], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-53 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-53.15e8e9a4ecf429be], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-53.15e8e9a5397f79fe], Reason = [Created], Message = [Created container maxp-53] STEP: Considering event: Type = [Normal], Name = [maxp-53.15e8e9a7e0835f10], Reason = [Started], Message = [Started container maxp-53] STEP: Considering event: Type = [Normal], Name = [maxp-54.15e8e9a1bfd6ab0f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-54 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-54.15e8e9a404fd0cc7], Reason = [Pulled], Message = [Container 
image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-54.15e8e9a485aa03fc], Reason = [Created], Message = [Created container maxp-54] STEP: Considering event: Type = [Normal], Name = [maxp-54.15e8e9a7a2d184a9], Reason = [Started], Message = [Started container maxp-54] STEP: Considering event: Type = [Normal], Name = [maxp-55.15e8e9a1c536e852], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-55 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-55.15e8e9a4cdb0c507], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-55.15e8e9a5546fbf9a], Reason = [Created], Message = [Created container maxp-55] STEP: Considering event: Type = [Normal], Name = [maxp-55.15e8e9a8645eac50], Reason = [Started], Message = [Started container maxp-55] STEP: Considering event: Type = [Normal], Name = [maxp-56.15e8e9a1ca973ce8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-56 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-56.15e8e9a3c25b7d1c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-56.15e8e9a3e9c0bbc6], Reason = [Created], Message = [Created container maxp-56] STEP: Considering event: Type = [Normal], Name = [maxp-56.15e8e9a6064d0aa7], Reason = [Started], Message = [Started container maxp-56] STEP: Considering event: Type = [Normal], Name = [maxp-57.15e8e9a1cff5c05a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-57 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-57.15e8e9a4efe48ce2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-57.15e8e9a5658d5ae7], Reason = [Created], Message = [Created container maxp-57] STEP: Considering event: Type = [Normal], Name = [maxp-57.15e8e9a8389310db], Reason = [Started], Message = [Started container maxp-57] STEP: Considering event: Type = [Normal], Name = [maxp-58.15e8e9a1d557dd52], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-58 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-58.15e8e9a4efbe9608], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-58.15e8e9a57451d372], Reason = [Created], Message = [Created container maxp-58] STEP: Considering event: Type = [Normal], Name = [maxp-58.15e8e9a82dc94679], Reason = [Started], Message = [Started container maxp-58] STEP: Considering event: Type = [Normal], Name = [maxp-59.15e8e9a1dabb2227], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-59 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-59.15e8e9a3d6063c58], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-59.15e8e9a3f2446d2a], Reason = [Created], Message = [Created container maxp-59] STEP: Considering event: Type = [Normal], Name = [maxp-59.15e8e9a60e85c755], Reason = [Started], Message = [Started container maxp-59] STEP: Considering event: Type = [Normal], Name = 
[maxp-6.15e8e9a0bc1dbb94], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-6 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-6.15e8e9a1a773489b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-6.15e8e9a1b1786809], Reason = [Created], Message = [Created container maxp-6] STEP: Considering event: Type = [Normal], Name = [maxp-6.15e8e9a21611b475], Reason = [Started], Message = [Started container maxp-6] STEP: Considering event: Type = [Normal], Name = [maxp-60.15e8e9a1e0187459], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-60 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-60.15e8e9a4e5648d97], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-60.15e8e9a5591f3953], Reason = [Created], Message = [Created container maxp-60] STEP: Considering event: Type = [Normal], Name = [maxp-60.15e8e9a85b417cd3], Reason = [Started], Message = [Started container maxp-60] STEP: Considering event: Type = [Normal], Name = [maxp-61.15e8e9a1e583fc15], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-61 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-61.15e8e9a3ef9b6e35], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-61.15e8e9a461c40638], Reason = [Created], Message = [Created container maxp-61] STEP: Considering event: Type = [Normal], Name = [maxp-61.15e8e9a718bb4e60], Reason = [Started], Message = [Started container maxp-61] STEP: Considering event: Type = [Normal], Name = [maxp-62.15e8e9a1eaec6fac], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-62 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-62.15e8e9a40a058153], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-62.15e8e9a48c331bb9], Reason = [Created], Message = [Created container maxp-62] STEP: Considering event: Type = [Normal], Name = [maxp-62.15e8e9a79f71eacc], Reason = [Started], Message = [Started container maxp-62] STEP: Considering event: Type = [Normal], Name = [maxp-63.15e8e9a1f05887d0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-63 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-63.15e8e9a4c1a37626], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-63.15e8e9a537e4641f], Reason = [Created], Message = [Created container maxp-63] STEP: Considering event: Type = [Normal], Name = [maxp-63.15e8e9a86d981d5a], Reason = [Started], Message = [Started container maxp-63] STEP: Considering event: Type = [Normal], Name = [maxp-64.15e8e9a1f5b04a3c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-64 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-64.15e8e9a4ed00290e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-64.15e8e9a58393ed2a], Reason 
= [Created], Message = [Created container maxp-64] STEP: Considering event: Type = [Normal], Name = [maxp-64.15e8e9a8c406d9ec], Reason = [Started], Message = [Started container maxp-64] STEP: Considering event: Type = [Normal], Name = [maxp-65.15e8e9a1fb0fd2ab], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-65 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-65.15e8e9a4f4aeb6f7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-65.15e8e9a5918b5403], Reason = [Created], Message = [Created container maxp-65] STEP: Considering event: Type = [Normal], Name = [maxp-65.15e8e9a8bba82c51], Reason = [Started], Message = [Started container maxp-65] STEP: Considering event: Type = [Normal], Name = [maxp-66.15e8e9a200768826], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-66 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-66.15e8e9a3d6f0c0b4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-66.15e8e9a3f2a405cb], Reason = [Created], Message = [Created container maxp-66] STEP: Considering event: Type = [Normal], Name = [maxp-66.15e8e9a61c71a392], Reason = [Started], Message = [Started container maxp-66] STEP: Considering event: Type = [Normal], Name = [maxp-67.15e8e9a205de79c8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-67 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-67.15e8e9a3f0ee85a9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-67.15e8e9a45637def8], Reason = [Created], Message = [Created container maxp-67] STEP: Considering event: Type = [Normal], Name = [maxp-67.15e8e9a662ff7853], Reason = [Started], Message = [Started container maxp-67] STEP: Considering event: Type = [Normal], Name = [maxp-68.15e8e9a20b3b9caf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-68 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-68.15e8e9a4f4e5f606], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-68.15e8e9a5638781eb], Reason = [Created], Message = [Created container maxp-68] STEP: Considering event: Type = [Normal], Name = [maxp-68.15e8e9a830f924a7], Reason = [Started], Message = [Started container maxp-68] STEP: Considering event: Type = [Normal], Name = [maxp-69.15e8e9a210975bca], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-69 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-69.15e8e9a5f7eb195a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-69.15e8e9a66daa586c], Reason = [Created], Message = [Created container maxp-69] STEP: Considering event: Type = [Normal], Name = [maxp-69.15e8e9a9f335a325], Reason = [Started], Message = [Started container maxp-69] STEP: Considering event: Type = [Normal], Name = [maxp-7.15e8e9a0c181cbd5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-7 to ip-10-250-27-25.ec2.internal] STEP: Considering 
event: Type = [Normal], Name = [maxp-7.15e8e9a1f422103f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-7.15e8e9a22af5af7b], Reason = [Created], Message = [Created container maxp-7] STEP: Considering event: Type = [Normal], Name = [maxp-7.15e8e9a2f69f9224], Reason = [Started], Message = [Started container maxp-7] STEP: Considering event: Type = [Normal], Name = [maxp-70.15e8e9a215f745ee], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-70 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-70.15e8e9a4fd67c092], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-70.15e8e9a589055eff], Reason = [Created], Message = [Created container maxp-70] STEP: Considering event: Type = [Normal], Name = [maxp-70.15e8e9a862c3b7fe], Reason = [Started], Message = [Started container maxp-70] STEP: Considering event: Type = [Normal], Name = [maxp-71.15e8e9a21b51dd98], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-71 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-71.15e8e9a59862ce2c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-71.15e8e9a627ad61d7], Reason = [Created], Message = [Created container maxp-71] STEP: Considering event: Type = [Normal], Name = [maxp-71.15e8e9a9fb623ece], Reason = [Started], Message = [Started container maxp-71] STEP: Considering event: Type = [Normal], Name = [maxp-72.15e8e9a220bdd2a5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-72 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-72.15e8e9a429b9c385], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-72.15e8e9a47524c8b8], Reason = [Created], Message = [Created container maxp-72] STEP: Considering event: Type = [Normal], Name = [maxp-72.15e8e9a704e63785], Reason = [Started], Message = [Started container maxp-72] STEP: Considering event: Type = [Normal], Name = [maxp-73.15e8e9a2261c76b5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-73 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-73.15e8e9a5037d8c2a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-73.15e8e9a59d339036], Reason = [Created], Message = [Created container maxp-73] STEP: Considering event: Type = [Normal], Name = [maxp-73.15e8e9a9124d739f], Reason = [Started], Message = [Started container maxp-73] STEP: Considering event: Type = [Normal], Name = [maxp-74.15e8e9a22b816215], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-74 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-74.15e8e9a4a4212e3e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-74.15e8e9a4fdb1b2d9], Reason = [Created], Message = [Created container maxp-74] STEP: Considering event: Type = [Normal], Name = [maxp-74.15e8e9a75f025769], Reason = [Started], Message 
= [Started container maxp-74] STEP: Considering event: Type = [Normal], Name = [maxp-75.15e8e9a230ed487d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-75 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-75.15e8e9a5f67e6f97], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-75.15e8e9a66a8be573], Reason = [Created], Message = [Created container maxp-75] STEP: Considering event: Type = [Normal], Name = [maxp-75.15e8e9a9f2fa7353], Reason = [Started], Message = [Started container maxp-75] STEP: Considering event: Type = [Normal], Name = [maxp-76.15e8e9a2364ba7aa], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-76 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-76.15e8e9a5b8ac6776], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-76.15e8e9a62dbb64cf], Reason = [Created], Message = [Created container maxp-76] STEP: Considering event: Type = [Normal], Name = [maxp-76.15e8e9a9ccebf97f], Reason = [Started], Message = [Started container maxp-76] STEP: Considering event: Type = [Normal], Name = [maxp-77.15e8e9a23bac9a34], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-77 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-77.15e8e9a49fe3b99a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-77.15e8e9a527f5e09c], Reason = [Created], Message = [Created container maxp-77] STEP: Considering event: Type = [Normal], Name = [maxp-77.15e8e9a7c54d63ec], Reason = [Started], Message = [Started container maxp-77] STEP: Considering event: Type = [Normal], Name = [maxp-78.15e8e9a24117728e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-78 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-78.15e8e9a424877ca5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-78.15e8e9a461630e3b], Reason = [Created], Message = [Created container maxp-78] STEP: Considering event: Type = [Normal], Name = [maxp-78.15e8e9a6d200dbb1], Reason = [Started], Message = [Started container maxp-78] STEP: Considering event: Type = [Normal], Name = [maxp-79.15e8e9a246794888], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-79 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-79.15e8e9a5f74730b1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-79.15e8e9a6631b3624], Reason = [Created], Message = [Created container maxp-79] STEP: Considering event: Type = [Normal], Name = [maxp-79.15e8e9a9f7ca1bed], Reason = [Started], Message = [Started container maxp-79] STEP: Considering event: Type = [Normal], Name = [maxp-8.15e8e9a0c6ea9cd8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-8 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-8.15e8e9a2169ff3c5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] 
STEP: Considering event: Type = [Normal], Name = [maxp-8.15e8e9a2430b4171], Reason = [Created], Message = [Created container maxp-8] STEP: Considering event: Type = [Normal], Name = [maxp-8.15e8e9a2f9c3b08e], Reason = [Started], Message = [Started container maxp-8] STEP: Considering event: Type = [Normal], Name = [maxp-80.15e8e9a24bde646c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-80 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-80.15e8e9a591f25b1b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-80.15e8e9a623d613d3], Reason = [Created], Message = [Created container maxp-80] STEP: Considering event: Type = [Normal], Name = [maxp-80.15e8e9a9fb66afe9], Reason = [Started], Message = [Started container maxp-80] STEP: Considering event: Type = [Normal], Name = [maxp-81.15e8e9a251448027], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-81 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-81.15e8e9a591bbe721], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-81.15e8e9a62b5762ab], Reason = [Created], Message = [Created container maxp-81] STEP: Considering event: Type = [Normal], Name = [maxp-81.15e8e9a9e4bef64a], Reason = [Started], Message = [Started container maxp-81] STEP: Considering event: Type = [Normal], Name = [maxp-82.15e8e9a256a7df2d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-82 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-82.15e8e9a5c414c91c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-82.15e8e9a64e930115], Reason = [Created], Message = [Created container maxp-82] STEP: Considering event: Type = [Normal], Name = [maxp-82.15e8e9a9e792c8e3], Reason = [Started], Message = [Started container maxp-82] STEP: Considering event: Type = [Normal], Name = [maxp-83.15e8e9a25c00a914], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-83 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-83.15e8e9a5c3240fdd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-83.15e8e9a6563d091d], Reason = [Created], Message = [Created container maxp-83] STEP: Considering event: Type = [Normal], Name = [maxp-83.15e8e9a9e2df574b], Reason = [Started], Message = [Started container maxp-83] STEP: Considering event: Type = [Normal], Name = [maxp-84.15e8e9a26162ab37], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-84 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-84.15e8e9a4b93d6512], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-84.15e8e9a527f7c786], Reason = [Created], Message = [Created container maxp-84] STEP: Considering event: Type = [Normal], Name = [maxp-84.15e8e9a75eed5553], Reason = [Started], Message = [Started container maxp-84] STEP: Considering event: Type = [Normal], Name = [maxp-85.15e8e9a266bde66f], Reason = [Scheduled], Message = [Successfully 
assigned sched-pred-1910/maxp-85 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-85.15e8e9a4b8b89818], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-85.15e8e9a4fd86d9f3], Reason = [Created], Message = [Created container maxp-85] STEP: Considering event: Type = [Normal], Name = [maxp-85.15e8e9a772089b43], Reason = [Started], Message = [Started container maxp-85] STEP: Considering event: Type = [Normal], Name = [maxp-86.15e8e9a26c2787d9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-86 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-86.15e8e9a4b8c6ca89], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-86.15e8e9a52755cbea], Reason = [Created], Message = [Created container maxp-86] STEP: Considering event: Type = [Normal], Name = [maxp-86.15e8e9a7c371bdbb], Reason = [Started], Message = [Started container maxp-86] STEP: Considering event: Type = [Normal], Name = [maxp-87.15e8e9a2718aac64], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-87 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-87.15e8e9a69bf9fae0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-87.15e8e9a70f974b30], Reason = [Created], Message = [Created container maxp-87] STEP: Considering event: Type = [Normal], Name = [maxp-87.15e8e9aa45609049], Reason = [Started], Message = [Started container maxp-87] STEP: Considering event: Type = [Normal], Name = [maxp-88.15e8e9a276eda9c7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-88 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-88.15e8e9a5a0c343d2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-88.15e8e9a5fce487a0], Reason = [Created], Message = [Created container maxp-88] STEP: Considering event: Type = [Normal], Name = [maxp-88.15e8e9a7918ffebb], Reason = [Started], Message = [Started container maxp-88] STEP: Considering event: Type = [Normal], Name = [maxp-89.15e8e9a27c55849f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-89 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-89.15e8e9a6eeec3a72], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-89.15e8e9a7ab3a2ad9], Reason = [Created], Message = [Created container maxp-89] STEP: Considering event: Type = [Normal], Name = [maxp-89.15e8e9aa6c088d60], Reason = [Started], Message = [Started container maxp-89] STEP: Considering event: Type = [Normal], Name = [maxp-9.15e8e9a0cc556ccc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-9 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-9.15e8e9a2440aa21d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-9.15e8e9a28c14d2b3], Reason = [Created], Message = [Created container maxp-9] STEP: Considering 
event: Type = [Normal], Name = [maxp-9.15e8e9a3bbae2d9e], Reason = [Started], Message = [Started container maxp-9] STEP: Considering event: Type = [Normal], Name = [maxp-90.15e8e9a281b9c4be], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-90 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-90.15e8e9a6097ec4c7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-90.15e8e9a66f0dc58f], Reason = [Created], Message = [Created container maxp-90] STEP: Considering event: Type = [Normal], Name = [maxp-90.15e8e9aa06437ffb], Reason = [Started], Message = [Started container maxp-90] STEP: Considering event: Type = [Normal], Name = [maxp-91.15e8e9a2871310a9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-91 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-91.15e8e9a786e08630], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-91.15e8e9a85a9d0401], Reason = [Created], Message = [Created container maxp-91] STEP: Considering event: Type = [Normal], Name = [maxp-91.15e8e9aabd2982d6], Reason = [Started], Message = [Started container maxp-91] STEP: Considering event: Type = [Normal], Name = [maxp-92.15e8e9a28c7befd5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-92 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-92.15e8e9a57047f550], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-92.15e8e9a5f75494eb], Reason = [Created], Message = [Created container maxp-92] STEP: Considering event: Type = [Normal], Name = [maxp-92.15e8e9a7910188cb], Reason = [Started], Message = [Started container maxp-92] STEP: Considering event: Type = [Normal], Name = [maxp-93.15e8e9a291db71c3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-93 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-93.15e8e9a5a9f07399], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-93.15e8e9a65d3fe33f], Reason = [Created], Message = [Created container maxp-93] STEP: Considering event: Type = [Normal], Name = [maxp-93.15e8e9a921ae46dc], Reason = [Started], Message = [Started container maxp-93] STEP: Considering event: Type = [Normal], Name = [maxp-94.15e8e9a2973b1469], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-94 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-94.15e8e9a5aa0e58e7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-94.15e8e9a64ffb6161], Reason = [Created], Message = [Created container maxp-94] STEP: Considering event: Type = [Normal], Name = [maxp-94.15e8e9a94cf7ccc3], Reason = [Started], Message = [Started container maxp-94] STEP: Considering event: Type = [Normal], Name = [maxp-95.15e8e9a29c9840c1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-95 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-95.15e8e9a609e0d92a], Reason = 
[Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-95.15e8e9a69363de36], Reason = [Created], Message = [Created container maxp-95] STEP: Considering event: Type = [Normal], Name = [maxp-95.15e8e9a9a83789b4], Reason = [Started], Message = [Started container maxp-95] STEP: Considering event: Type = [Normal], Name = [maxp-96.15e8e9a2a2021365], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-96 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-96.15e8e9a6af14d7f1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-96.15e8e9a706dee810], Reason = [Created], Message = [Created container maxp-96] STEP: Considering event: Type = [Normal], Name = [maxp-96.15e8e9aa4dbc08a1], Reason = [Started], Message = [Started container maxp-96] STEP: Considering event: Type = [Normal], Name = [maxp-97.15e8e9a2a76d57a3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-97 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-97.15e8e9a75aabe2c1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-97.15e8e9a82b492986], Reason = [Created], Message = [Created container maxp-97] STEP: Considering event: Type = [Normal], Name = [maxp-97.15e8e9aa86d15b41], Reason = [Started], Message = [Started container maxp-97] STEP: Considering event: Type = [Normal], Name = [maxp-98.15e8e9a2acc38ba7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-98 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-98.15e8e9a87e99e842], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-98.15e8e9a9054a9718], Reason = [Created], Message = [Created container maxp-98] STEP: Considering event: Type = [Normal], Name = [maxp-98.15e8e9ab2505c1fa], Reason = [Started], Message = [Started container maxp-98] STEP: Considering event: Type = [Normal], Name = [maxp-99.15e8e9a2b21b6371], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1910/maxp-99 to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [maxp-99.15e8e9a6067e5d97], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [maxp-99.15e8e9a68fefbb93], Reason = [Created], Message = [Created container maxp-99] STEP: Considering event: Type = [Normal], Name = [maxp-99.15e8e9a9c2955127], Reason = [Started], Message = [Started container maxp-99] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e8e9ad0fa46794], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient pods.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 18:50:00.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1910" for this suite. 
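The long run of Scheduled/Pulled/Created/Started events above comes from a scheduler-predicates case that packs both nodes with pause pods up to their per-node pod capacity; the closing Warning for additional-pod ("0/2 nodes are available: 2 Insufficient pods.") is the expected failure once that capacity is exhausted. As a rough sketch (not part of the suite; the kubeconfig path is the one this log uses, everything else is illustrative), the per-node limit the scheduler enforces here can be read from each node's allocatable pod count with client-go:

// Sketch: print how many pods each node can hold, i.e. the limit behind the
// "Insufficient pods" FailedScheduling event above. Minimal error handling.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; adjust for other clusters.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Allocatable "pods" is the per-node cap; once every node is at this cap,
		// any further pod fails to schedule with "Insufficient pods".
		allocatable := n.Status.Allocatable[corev1.ResourcePods]
		fmt.Printf("%s can run up to %s pods\n", n.Name, allocatable.String())
	}
}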
Jan 11 18:50:10.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 18:50:13.651: INFO: namespace sched-pred-1910 deletion completed in 13.493973263s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 •SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:195 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 18:50:13.651: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename daemonsets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3202 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon with node affinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:195 Jan 11 18:50:14.738: INFO: Creating daemon "daemon-set" with a node affinity STEP: Initially, daemon pods should not be running on any nodes. Jan 11 18:50:14.918: INFO: Number of nodes with available pods: 0 Jan 11 18:50:14.918: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 11 18:50:15.280: INFO: Number of nodes with available pods: 0 Jan 11 18:50:15.280: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:16.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:16.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:17.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:17.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:18.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:18.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:19.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:19.371: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:20.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:20.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:21.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:21.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:22.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:22.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:23.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:23.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:24.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:24.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:25.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:25.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:26.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:26.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:27.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:27.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:28.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:28.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:29.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:29.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:30.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:30.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:31.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:31.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:32.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:32.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:33.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:33.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:34.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:34.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:35.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:35.370: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 18:50:36.370: INFO: Number of nodes with available pods: 0 Jan 11 18:50:36.370: INFO: Node ip-10-250-27-25.ec2.internal is 
running more than one daemon pod Jan 11 18:50:37.371: INFO: Number of nodes with available pods: 1 Jan 11 18:50:37.371: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Remove the node label and wait for daemons to be unscheduled Jan 11 18:50:37.731: INFO: Number of nodes with available pods: 0 Jan 11 18:50:37.731: INFO: Number of running nodes: 0, number of available pods: 0 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3202, will wait for the garbage collector to delete the pods Jan 11 18:50:38.102: INFO: Deleting DaemonSet.extensions daemon-set took: 91.012388ms Jan 11 18:50:38.102: INFO: Terminating DaemonSet.extensions daemon-set pods took: 59.426µs Jan 11 18:50:43.892: INFO: Number of nodes with available pods: 0 Jan 11 18:50:43.892: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 18:50:43.982: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3202/daemonsets","resourceVersion":"34964"},"items":null} Jan 11 18:50:44.071: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3202/pods","resourceVersion":"34965"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 18:50:44.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3202" for this suite. Jan 11 18:50:50.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 18:50:53.943: INFO: namespace daemonsets-3202 deletion completed in 9.510982417s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:242 [BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 18:50:53.944: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename taint-single-pod STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-5551 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:164 Jan 11 18:50:54.582: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 18:51:55.127: INFO: Waiting for terminating namespaces to be deleted... 
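The spec that follows applies a NoExecute taint to the node a test pod runs on and expects the taint manager to evict the pod only after its finite toleration window expires. A rough hand-reproduction, under assumptions: the taint key and value are the ones this run logs, but the pod name, image, and the 60-second toleration are illustrative, not the test's own values.

# Sketch: a pod that tolerates the eviction taint for a bounded time.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: taint-eviction-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1            # illustrative image
  tolerations:
  - key: kubernetes.io/e2e-evict-taint-key
    operator: Equal
    value: evictTaintVal
    effect: NoExecute
    tolerationSeconds: 60                  # assumed window; eviction starts when it elapses
EOF
# Taint the node the pod landed on; the NoExecute taint manager starts the countdown.
kubectl taint node ip-10-250-27-25.ec2.internal kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
# Watch the pod get deleted once tolerationSeconds runs out, then remove the taint.
kubectl get pod taint-eviction-demo --watch
kubectl taint node ip-10-250-27-25.ec2.internal kubernetes.io/e2e-evict-taint-key-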
[It] eventually evict pod with finite tolerations from tainted nodes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:242 Jan 11 18:51:55.216: INFO: Starting informer... STEP: Starting pod... Jan 11 18:51:55.398: INFO: Pod is running on ip-10-250-27-25.ec2.internal. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting to see if a Pod won't be deleted Jan 11 18:53:00.672: INFO: Pod wasn't evicted STEP: Waiting for Pod to be deleted Jan 11 18:53:07.900: INFO: Pod was evicted after toleration time run out. Test successful STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 18:53:08.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-single-pod-5551" for this suite. Jan 11 18:53:14.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 18:53:17.761: INFO: namespace taint-single-pod-5551 deletion completed in 9.49661864s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:161 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 18:53:17.762: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-preemption STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-2567 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:76 Jan 11 18:53:18.670: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 18:54:19.308: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:161 STEP: Create pods that use 60% of node resources. Jan 11 18:54:19.490: INFO: Created pod: pod0-sched-preemption-low-priority Jan 11 18:54:19.581: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use 60% of a node resources. 
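The preemption spec above fills the nodes to roughly 60% with low- and medium-priority pods and then submits a critical pod; the scheduler preempts a lower-priority victim to make room. A hedged sketch of the moving parts — the PriorityClass name, pod names, resource requests, and the choice of system-cluster-critical are assumptions, since the exact values are not visible in this log.

# Sketch: a low-priority filler pod plus a critical pod that can preempt it.
cat <<'EOF' | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: demo-low-priority
value: -10
globalDefault: false
description: "Filler pods that may be preempted."
---
apiVersion: v1
kind: Pod
metadata:
  name: filler
spec:
  priorityClassName: demo-low-priority
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"                        # illustrative; the test sizes requests from node capacity
---
apiVersion: v1
kind: Pod
metadata:
  name: critical
  namespace: kube-system                # system-* priority classes are limited to kube-system by default
spec:
  priorityClassName: system-cluster-critical
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"
EOF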
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 18:54:36.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2567" for this suite. Jan 11 18:54:50.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 18:54:54.074: INFO: namespace sched-preemption-2567 deletion completed in 17.497647208s [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:70 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:293 [BeforeEach] [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 18:54:54.347: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4172 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Downward API tests for local ephemeral storage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:289 [It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:293 STEP: Creating a pod to test downward api env vars Jan 11 18:54:55.077: INFO: Waiting up to 5m0s for pod "downward-api-ce64e65e-be73-441a-a332-1ed499285495" in namespace "downward-api-4172" to be "success or failure" Jan 11 18:54:55.166: INFO: Pod "downward-api-ce64e65e-be73-441a-a332-1ed499285495": Phase="Pending", Reason="", readiness=false. Elapsed: 89.841576ms Jan 11 18:54:57.257: INFO: Pod "downward-api-ce64e65e-be73-441a-a332-1ed499285495": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180138178s STEP: Saw pod success Jan 11 18:54:57.257: INFO: Pod "downward-api-ce64e65e-be73-441a-a332-1ed499285495" satisfied condition "success or failure" Jan 11 18:54:57.347: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downward-api-ce64e65e-be73-441a-a332-1ed499285495 container dapi-container: STEP: delete the pod Jan 11 18:54:57.651: INFO: Waiting for pod downward-api-ce64e65e-be73-441a-a332-1ed499285495 to disappear Jan 11 18:54:57.741: INFO: Pod downward-api-ce64e65e-be73-441a-a332-1ed499285495 no longer exists [AfterEach] [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 18:54:57.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4172" for this suite. Jan 11 18:55:04.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 18:55:07.333: INFO: namespace downward-api-4172 deletion completed in 9.501704555s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 18:55:07.337: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-pred STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-3910 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 11 18:55:07.976: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 18:55:08.248: INFO: Waiting for terminating namespaces to be deleted... 
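The downward-api spec just above injects the container's ephemeral-storage request and limit into environment variables via resourceFieldRef and checks them in the container log. A minimal sketch of such a pod, assuming illustrative names, sizes, and image (the dapi-container name is the one the log reports):

# Sketch: exposing ephemeral-storage requests/limits through the downward API.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep EPHEMERAL"]
    resources:
      requests:
        ephemeral-storage: 1Gi
      limits:
        ephemeral-storage: 2Gi
    env:
    - name: EPHEMERAL_STORAGE_REQUEST        # assumed variable names
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.ephemeral-storage
    - name: EPHEMERAL_STORAGE_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.ephemeral-storage
EOF
# The container log should print both variables, which is what the e2e assertion checks for.
kubectl logs downward-api-demo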
Jan 11 18:55:08.337: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test Jan 11 18:55:08.434: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.434: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 18:55:08.434: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.434: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 18:55:08.434: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.434: INFO: Container node-exporter ready: true, restart count 0 Jan 11 18:55:08.434: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.434: INFO: Container calico-node ready: true, restart count 0 Jan 11 18:55:08.434: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test Jan 11 18:55:08.547: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container calico-node ready: true, restart count 0 Jan 11 18:55:08.547: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 18:55:08.547: INFO: node-exporter-gp57h from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container node-exporter ready: true, restart count 0 Jan 11 18:55:08.547: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 18:55:08.547: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container metrics-server ready: true, restart count 0 Jan 11 18:55:08.547: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container coredns ready: true, restart count 0 Jan 11 18:55:08.547: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container calico-typha ready: true, restart count 0 Jan 11 18:55:08.547: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 18:55:08.547: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container autoscaler ready: true, restart count 0 Jan 11 18:55:08.547: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container autoscaler ready: true, restart count 5 Jan 11 18:55:08.547: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 
11 18:55:08.547: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 18:55:08.547: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 18:55:08.547: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 18:55:08.547: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 18:55:08.547: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 18:55:08.547: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 18:55:08.547: INFO: Container coredns ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ad9674a5-a94f-465d-a43c-0f863f69b207 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-ad9674a5-a94f-465d-a43c-0f863f69b207 off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label kubernetes.io/e2e-ad9674a5-a94f-465d-a43c-0f863f69b207 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 18:55:18.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3910" for this suite. 
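The predicate spec above lands three pods on the same node with the same hostPort (54321) but differing hostIP and protocol, and expects all of them to schedule, because hostPort conflicts are keyed on the (hostPort, hostIP, protocol) triple. A sketch of the distinguishing fields — pod name, container name, image, and the nodeSelector pin are illustrative simplifications; the port, IPs, and protocol mix are the ones this run reports:

# pod1: hostPort 54321, hostIP 127.0.0.1, TCP
# pod2: hostPort 54321, hostIP 127.0.0.2, TCP
# pod3: hostPort 54321, hostIP 127.0.0.2, UDP
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod1
spec:
  nodeSelector:
    kubernetes.io/hostname: ip-10-250-27-25.ec2.internal   # pin to one node (the test uses its own random label)
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
EOF
# pod2 and pod3 would differ only in hostIP/protocol as listed above; all three should reach Running.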
Jan 11 18:55:38.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 18:55:41.890: INFO: namespace sched-pred-3910 deletion completed in 23.499112082s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/empty_dir_wrapper.go:199 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 18:55:41.891: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-2029 STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for git_repo [Serial] [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/empty_dir_wrapper.go:199 STEP: Creating RC which spawns configmap-volume pods Jan 11 18:55:45.279: INFO: Pod name wrapped-volume-race-3fe6230d-e2fe-4c11-8208-a41cb010e463: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3fe6230d-e2fe-4c11-8208-a41cb010e463 in namespace emptydir-wrapper-2029, will wait for the garbage collector to delete the pods Jan 11 18:55:50.105: INFO: Deleting ReplicationController wrapped-volume-race-3fe6230d-e2fe-4c11-8208-a41cb010e463 took: 91.589306ms Jan 11 18:55:50.206: INFO: Terminating ReplicationController wrapped-volume-race-3fe6230d-e2fe-4c11-8208-a41cb010e463 pods took: 100.214401ms STEP: Creating RC which spawns configmap-volume pods Jan 11 18:56:34.181: INFO: Pod name wrapped-volume-race-91bd5a23-081f-48fe-bb96-5dfcc9890a4d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-91bd5a23-081f-48fe-bb96-5dfcc9890a4d in namespace emptydir-wrapper-2029, will wait for the garbage collector to delete the pods Jan 11 18:56:39.003: INFO: Deleting ReplicationController wrapped-volume-race-91bd5a23-081f-48fe-bb96-5dfcc9890a4d took: 91.110654ms Jan 11 18:56:39.104: INFO: Terminating ReplicationController wrapped-volume-race-91bd5a23-081f-48fe-bb96-5dfcc9890a4d pods took: 100.276578ms STEP: Creating RC which spawns configmap-volume pods Jan 11 18:57:24.079: INFO: Pod name wrapped-volume-race-313b22ff-20ec-4273-a0d1-081c3be45585: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-313b22ff-20ec-4273-a0d1-081c3be45585 in namespace emptydir-wrapper-2029, will wait for the garbage collector to delete the pods Jan 11 18:57:28.900: INFO: Deleting ReplicationController wrapped-volume-race-313b22ff-20ec-4273-a0d1-081c3be45585 took: 90.877865ms Jan 11 18:57:29.400: INFO: Terminating ReplicationController wrapped-volume-race-313b22ff-20ec-4273-a0d1-081c3be45585 pods took: 500.286799ms 
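The wrapper-volume spec above repeatedly spins up replication controllers whose pods mount git-backed wrapper volumes, checking that concurrent volume setup does not race. The (deprecated) gitRepo volume type it exercises looks roughly like this — the repository URL, names, and image are placeholders, since the test runs its own in-cluster git server:

# Sketch: a pod mounting a (deprecated) gitRepo volume, the volume type this spec stresses.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: git-repo-demo
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: git-volume
      mountPath: /test-git
  volumes:
  - name: git-volume
    gitRepo:
      repository: "http://git-server.default.svc:8000/repo"   # placeholder URL
      directory: "."
EOF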
STEP: Cleaning up the git server pod STEP: Cleaning up the git server svc [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 18:58:04.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2029" for this suite. Jan 11 18:58:10.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 18:58:13.691: INFO: namespace emptydir-wrapper-2029 deletion completed in 9.511583439s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 18:58:13.693: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8214 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should remove all the taints with the same key off a node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
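The kubectl spec that follows stacks three taints sharing one key but with different values and effects onto a node, then removes them all with a single key-only untaint. The same command shapes it runs, with the --server/--kubeconfig flags omitted; the node name and taint key are the ones from this run:

NODE=ip-10-250-27-25.ec2.internal
KEY=kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96

# Stack three taints that share the key but differ in value and effect.
kubectl taint nodes "$NODE" "$KEY"=testing-taint-value:NoSchedule
kubectl taint nodes "$NODE" "$KEY"=another-testing-taint-value:PreferNoSchedule
kubectl taint nodes "$NODE" "$KEY"=testing-taint-value-no-execute:NoExecute

# A key-only untaint (trailing '-') removes every taint with that key, regardless of effect.
kubectl taint nodes "$NODE" "$KEY"-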
STEP: adding the taint kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule to a node Jan 11 18:58:16.802: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config taint nodes ip-10-250-27-25.ec2.internal kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule' Jan 11 18:58:17.946: INFO: stderr: "" Jan 11 18:58:17.946: INFO: stdout: "node/ip-10-250-27-25.ec2.internal tainted\n" Jan 11 18:58:17.947: INFO: stdout: "node/ip-10-250-27-25.ec2.internal tainted\n" STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule Jan 11 18:58:17.947: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe node ip-10-250-27-25.ec2.internal' Jan 11 18:58:18.775: INFO: stderr: "" Jan 11 18:58:18.775: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentDockerRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n KernelDeadlock False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 18:58:16 +0000 
Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h2m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h2m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h2m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h2m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" Jan 11 18:58:18.775: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentDockerRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 
NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n KernelDeadlock False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h2m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h2m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h2m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h2m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" STEP: adding another taint kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=another-testing-taint-value:PreferNoSchedule to the node Jan 11 18:58:18.775: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config taint nodes ip-10-250-27-25.ec2.internal 
kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=another-testing-taint-value:PreferNoSchedule' Jan 11 18:58:19.404: INFO: stderr: "" Jan 11 18:58:19.404: INFO: stdout: "node/ip-10-250-27-25.ec2.internal tainted\n" Jan 11 18:58:19.404: INFO: stdout: "node/ip-10-250-27-25.ec2.internal tainted\n" STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=another-testing-taint-value:PreferNoSchedule Jan 11 18:58:19.404: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe node ip-10-250-27-25.ec2.internal' Jan 11 18:58:20.165: INFO: stderr: "" Jan 11 18:58:20.165: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule\n kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=another-testing-taint-value:PreferNoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentDockerRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n KernelDeadlock False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready 
True Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h2m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h2m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h2m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h2m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" Jan 11 18:58:20.165: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule\n kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=another-testing-taint-value:PreferNoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentDockerRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n KernelDeadlock False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 
+0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h2m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h2m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h2m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h2m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" STEP: adding NoExecute taint kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value-no-execute:NoExecute to the node Jan 11 18:58:20.165: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config taint nodes ip-10-250-27-25.ec2.internal kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value-no-execute:NoExecute' Jan 11 18:58:20.828: INFO: stderr: "" Jan 11 18:58:20.828: INFO: stdout: "node/ip-10-250-27-25.ec2.internal tainted\n" Jan 11 
18:58:20.828: INFO: stdout: "node/ip-10-250-27-25.ec2.internal tainted\n" STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value-no-execute:NoExecute Jan 11 18:58:20.829: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe node ip-10-250-27-25.ec2.internal' Jan 11 18:58:21.690: INFO: stderr: "" Jan 11 18:58:21.690: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value-no-execute:NoExecute\n kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule\n kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=another-testing-taint-value:PreferNoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentDockerRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n KernelDeadlock False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n 
InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h2m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h2m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h2m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h2m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" Jan 11 18:58:21.690: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value-no-execute:NoExecute\n kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule\n kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=another-testing-taint-value:PreferNoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentDockerRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n KernelDeadlock False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 
KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h2m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h2m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h2m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h2m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" STEP: removing all taints that have the same key kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96 of the node Jan 11 18:58:21.690: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config taint nodes ip-10-250-27-25.ec2.internal kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96-' Jan 11 18:58:22.303: INFO: stderr: "" Jan 11 18:58:22.303: INFO: stdout: "node/ip-10-250-27-25.ec2.internal untainted\n" Jan 11 18:58:22.303: INFO: stdout: "node/ip-10-250-27-25.ec2.internal 
untainted\n" STEP: verifying the node doesn't have the taints that have the same key kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96 Jan 11 18:58:22.303: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe node ip-10-250-27-25.ec2.internal' Jan 11 18:58:23.033: INFO: stderr: "" Jan 11 18:58:23.033: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentDockerRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n KernelDeadlock False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 18:57:52 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 18:58:16 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n 
memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h2m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h2m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h2m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h2m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n"
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value-no-execute:NoExecute STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=another-testing-taint-value:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96=testing-taint-value:NoSchedule [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 18:58:23.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8214" for this suite.
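The taint removal and verification above boil down to two plain kubectl invocations. A minimal, self-contained sketch of the same remove-and-verify flow, assuming kubectl on PATH and the kubeconfig path shown in the log; this is an illustration, not the framework's own helper:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run shells out to kubectl, much like the commands logged above.
func run(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubeconfig := "--kubeconfig=/tmp/tm/kubeconfig/shoot.config" // path taken from the log
	node := "ip-10-250-27-25.ec2.internal"
	key := "kubernetes.io/e2e-taint-key-002-7bf0a9d6-792a-4f2d-a889-4b31df05ef96"

	// A trailing "-" after the key removes every taint with that key from the node.
	fmt.Print(run(kubeconfig, "taint", "nodes", node, key+"-"))

	// Re-describe the node and check the key no longer appears, as the STEP above does.
	if strings.Contains(run(kubeconfig, "describe", "node", node), key+"=") {
		log.Fatalf("taint key %s still present on %s", key, node)
	}
	fmt.Println("node has no taints with key", key)
}
```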
Jan 11 18:58:29.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 18:58:33.227: INFO: namespace kubectl-8214 deletion completed in 9.566360411s •SSSSSSSSSS ------------------------------ [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:358 [BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 18:58:33.228: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename taint-multiple-pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-multiple-pods-841 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:345 Jan 11 18:58:33.868: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 18:59:34.415: INFO: Waiting for terminating namespaces to be deleted... [It] only evicts pods without tolerations from tainted nodes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:358 Jan 11 18:59:34.504: INFO: Starting informer... STEP: Starting pods... Jan 11 18:59:34.685: INFO: Pod1 is running on ip-10-250-27-25.ec2.internal. Tainting Node Jan 11 18:59:34.865: INFO: Pod2 is running on ip-10-250-27-25.ec2.internal. Tainting Node STEP: Trying to apply a taint on the Nodes STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod1 to be deleted Jan 11 18:59:43.774: INFO: Noticed Pod "taint-eviction-a1" gets evicted. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:00:40.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-multiple-pods-841" for this suite. 
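The eviction above hinges on the NoExecute taint and the absence of a matching toleration on the evicted pod. A small sketch of the taint/toleration pair involved, using the k8s.io/api/core/v1 types; the key and value are copied from the log, the surrounding program is illustrative only:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The taint the test applies to the node running Pod1 and Pod2.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-evict-taint-key",
		Value:  "evictTaintVal",
		Effect: corev1.TaintEffectNoExecute,
	}

	// A pod carrying this toleration keeps running on the tainted node;
	// a pod without it, like taint-eviction-a1 above, is evicted.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoExecute,
	}

	fmt.Printf("taint: %+v\ntoleration: %+v\n", taint, toleration)
}
```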
W0111 19:01:58.891413 7333 reflector.go:299] k8s.io/kubernetes/test/e2e/scheduling/taints.go:146: watch of *v1.Pod ended with: too old resource version: 36609 (36628) W0111 19:01:58.986897 7333 reflector.go:299] k8s.io/kubernetes/test/e2e/scheduling/taints.go:146: watch of *v1.Pod ended with: too old resource version: 36107 (36628) Jan 11 19:02:40.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:02:44.020: INFO: namespace taint-multiple-pods-841 deletion completed in 2m3.516265691s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:96 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:02:44.022: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-priority STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-priority-8440 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:76 Jan 11 19:02:44.661: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 19:03:45.382: INFO: Waiting for terminating namespaces to be deleted... Jan 11 19:03:45.471: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 11 19:03:45.746: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 11 19:03:45.746: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready. [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:96 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Trying to apply a label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-node-topologyKey topologyvalue Jan 11 19:03:48.290: INFO: ComputeCPUMemFraction for node: ip-10-250-27-25.ec2.internal Jan 11 19:03:48.384: INFO: Pod for on the node: calico-node-m8r2d, Cpu: 100, Mem: 104857600 Jan 11 19:03:48.384: INFO: Pod for on the node: kube-proxy-rq4kf, Cpu: 20, Mem: 67108864 Jan 11 19:03:48.384: INFO: Pod for on the node: node-exporter-l6q84, Cpu: 5, Mem: 10485760 Jan 11 19:03:48.384: INFO: Pod for on the node: node-problem-detector-9z5sq, Cpu: 20, Mem: 20971520 Jan 11 19:03:48.384: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jan 11 19:03:48.384: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedCPUResource: 245, cpuAllocatableMil: 1920, cpuFraction: 0.12760416666666666 Jan 11 19:03:48.384: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedMemResource: 308281344, memAllocatableVal: 6577812679, memFraction: 0.04686684754404816 Jan 11 19:03:48.384: INFO: ComputeCPUMemFraction for node: ip-10-250-7-77.ec2.internal Jan 11 19:03:48.477: INFO: Pod for on the node: addons-kubernetes-dashboard-78954cc66b-69k8m, Cpu: 50, Mem: 52428800 Jan 11 19:03:48.477: INFO: Pod for on the node: addons-nginx-ingress-controller-7c75bb76db-cd9r9, Cpu: 100, Mem: 104857600 Jan 11 19:03:48.477: INFO: Pod for on the node: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d, Cpu: 100, Mem: 209715200 Jan 11 19:03:48.477: INFO: Pod for on the node: blackbox-exporter-54bb5f55cc-452fk, Cpu: 5, Mem: 5242880 Jan 11 19:03:48.477: INFO: Pod for on the node: calico-kube-controllers-79bcd784b6-c46r9, Cpu: 100, Mem: 209715200 Jan 11 19:03:48.477: INFO: Pod for on the node: calico-node-dl8nk, Cpu: 100, Mem: 104857600 Jan 11 19:03:48.477: INFO: Pod for on the node: calico-typha-deploy-9f6b455c4-vdrzx, Cpu: 100, Mem: 209715200 Jan 11 19:03:48.477: INFO: Pod for on the node: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp, Cpu: 10, Mem: 209715200 Jan 11 19:03:48.477: INFO: Pod for on the node: calico-typha-vertical-autoscaler-5769b74b58-r8t6r, Cpu: 100, Mem: 209715200 Jan 11 19:03:48.477: INFO: Pod for on the node: coredns-59c969ffb8-57m7v, Cpu: 50, Mem: 15728640 Jan 11 19:03:48.477: INFO: Pod for on the node: coredns-59c969ffb8-fqq79, Cpu: 50, Mem: 15728640 Jan 11 19:03:48.477: INFO: Pod for on the node: kube-proxy-nn5px, Cpu: 20, Mem: 67108864 Jan 11 19:03:48.477: INFO: Pod for on the node: metrics-server-7c797fd994-4x7v9, Cpu: 20, Mem: 104857600 Jan 11 19:03:48.477: INFO: Pod for on the node: node-exporter-gp57h, Cpu: 5, Mem: 10485760 Jan 11 19:03:48.477: INFO: Pod for on the node: node-problem-detector-jx2p4, Cpu: 20, Mem: 20971520 Jan 11 19:03:48.477: INFO: Pod for on the node: vpn-shoot-5d76665b65-6rkww, Cpu: 100, Mem: 104857600 Jan 11 19:03:48.477: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedCPUResource: 630, cpuAllocatableMil: 1920, cpuFraction: 0.328125 Jan 11 19:03:48.477: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedMemResource: 921698304, memAllocatableVal: 6577812679, memFraction: 0.14012230949393992 Jan 11 19:03:48.571: INFO: Waiting for running... Jan 11 19:03:53.762: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jan 11 19:03:58.863: INFO: ComputeCPUMemFraction for node: ip-10-250-27-25.ec2.internal Jan 11 19:03:58.956: INFO: Pod for on the node: calico-node-m8r2d, Cpu: 100, Mem: 104857600 Jan 11 19:03:58.956: INFO: Pod for on the node: kube-proxy-rq4kf, Cpu: 20, Mem: 67108864 Jan 11 19:03:58.956: INFO: Pod for on the node: node-exporter-l6q84, Cpu: 5, Mem: 10485760 Jan 11 19:03:58.956: INFO: Pod for on the node: node-problem-detector-9z5sq, Cpu: 20, Mem: 20971520 Jan 11 19:03:58.956: INFO: Pod for on the node: bed28844-33a3-4776-9290-d8c83e760e0e-0, Cpu: 907, Mem: 3638406263 Jan 11 19:03:58.956: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jan 11 19:03:58.956: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedCPUResource: 1152, cpuAllocatableMil: 1920, cpuFraction: 0.6 Jan 11 19:03:58.956: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedMemResource: 3946687607, memAllocatableVal: 6577812679, memFraction: 0.5999999999391895 STEP: Compute Cpu, Mem Fraction after create balanced pods. Jan 11 19:03:58.956: INFO: ComputeCPUMemFraction for node: ip-10-250-7-77.ec2.internal Jan 11 19:03:59.050: INFO: Pod for on the node: addons-kubernetes-dashboard-78954cc66b-69k8m, Cpu: 50, Mem: 52428800 Jan 11 19:03:59.050: INFO: Pod for on the node: addons-nginx-ingress-controller-7c75bb76db-cd9r9, Cpu: 100, Mem: 104857600 Jan 11 19:03:59.050: INFO: Pod for on the node: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d, Cpu: 100, Mem: 209715200 Jan 11 19:03:59.050: INFO: Pod for on the node: blackbox-exporter-54bb5f55cc-452fk, Cpu: 5, Mem: 5242880 Jan 11 19:03:59.050: INFO: Pod for on the node: calico-kube-controllers-79bcd784b6-c46r9, Cpu: 100, Mem: 209715200 Jan 11 19:03:59.050: INFO: Pod for on the node: calico-node-dl8nk, Cpu: 100, Mem: 104857600 Jan 11 19:03:59.050: INFO: Pod for on the node: calico-typha-deploy-9f6b455c4-vdrzx, Cpu: 100, Mem: 209715200 Jan 11 19:03:59.050: INFO: Pod for on the node: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp, Cpu: 10, Mem: 209715200 Jan 11 19:03:59.050: INFO: Pod for on the node: calico-typha-vertical-autoscaler-5769b74b58-r8t6r, Cpu: 100, Mem: 209715200 Jan 11 19:03:59.050: INFO: Pod for on the node: coredns-59c969ffb8-57m7v, Cpu: 50, Mem: 15728640 Jan 11 19:03:59.050: INFO: Pod for on the node: coredns-59c969ffb8-fqq79, Cpu: 50, Mem: 15728640 Jan 11 19:03:59.050: INFO: Pod for on the node: kube-proxy-nn5px, Cpu: 20, Mem: 67108864 Jan 11 19:03:59.050: INFO: Pod for on the node: metrics-server-7c797fd994-4x7v9, Cpu: 20, Mem: 104857600 Jan 11 19:03:59.050: INFO: Pod for on the node: node-exporter-gp57h, Cpu: 5, Mem: 10485760 Jan 11 19:03:59.050: INFO: Pod for on the node: node-problem-detector-jx2p4, Cpu: 20, Mem: 20971520 Jan 11 19:03:59.050: INFO: Pod for on the node: vpn-shoot-5d76665b65-6rkww, Cpu: 100, Mem: 104857600 Jan 11 19:03:59.050: INFO: Pod for on the node: 1c3e8653-1d10-461f-addd-0f4d61d2d77c-0, Cpu: 522, Mem: 3024989303 Jan 11 19:03:59.050: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedCPUResource: 1152, cpuAllocatableMil: 1920, cpuFraction: 0.6 Jan 11 19:03:59.050: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedMemResource: 3946687607, memAllocatableVal: 6577812679, memFraction: 0.5999999999391895 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. 
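The cpuFraction/memFraction lines above are straight ratios of summed requests to allocatable capacity, and the "balanced" pods are sized to lift each node to the 0.6 target. A worked version of that arithmetic with the numbers logged for ip-10-250-27-25.ec2.internal (illustrative only):

```go
package main

import "fmt"

func main() {
	const (
		cpuAllocatableMil = 1920.0       // allocatable CPU in millicores (from the log)
		memAllocatable    = 6577812679.0 // allocatable memory in bytes (from the log)
		requestedCPU      = 245.0        // summed pod CPU requests before balancing
		requestedMem      = 308281344.0  // summed pod memory requests before balancing
		targetFraction    = 0.6
	)

	fmt.Printf("cpuFraction before: %v\n", requestedCPU/cpuAllocatableMil) // ~0.1276, as logged
	fmt.Printf("memFraction before: %v\n", requestedMem/memAllocatable)    // ~0.0469, as logged

	// The balancing pod requests exactly what is missing to reach the target,
	// which matches the 907 mCPU / ~3.64e9 bytes pod that appears above.
	fmt.Printf("balancing pod CPU: %v mCPU\n", targetFraction*cpuAllocatableMil-requestedCPU)
	fmt.Printf("balancing pod mem: %v bytes\n", targetFraction*memAllocatable-requestedMem)
}
```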
STEP: removing the label kubernetes.io/e2e-node-topologyKey off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label kubernetes.io/e2e-node-topologyKey [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:04:01.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8440" for this suite. Jan 11 19:04:22.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:04:25.216: INFO: namespace sched-priority-8440 deletion completed in 23.438541524s [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:73 •SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:04:25.216: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-pred STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-9915 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 11 19:04:25.853: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 19:04:26.123: INFO: Waiting for terminating namespaces to be deleted... 
Jan 11 19:04:26.212: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test Jan 11 19:04:26.415: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.415: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:04:26.415: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.415: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:04:26.415: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.415: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:04:26.415: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.415: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:04:26.415: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test Jan 11 19:04:26.623: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:04:26.623: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container coredns ready: true, restart count 0 Jan 11 19:04:26.623: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:04:26.623: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:04:26.623: INFO: node-exporter-gp57h from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:04:26.623: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:04:26.623: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:04:26.623: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container coredns ready: true, restart count 0 Jan 11 19:04:26.623: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:04:26.623: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:04:26.623: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container 
autoscaler ready: true, restart count 0 Jan 11 19:04:26.623: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:04:26.623: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:04:26.623: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 19:04:26.623: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:04:26.623: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:04:26.623: INFO: Container kubernetes-dashboard ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4c48bb51-83ab-4541-aa41-8cceaeccffed 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-4c48bb51-83ab-4541-aa41-8cceaeccffed off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label kubernetes.io/e2e-4c48bb51-83ab-4541-aa41-8cceaeccffed [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:06:27.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9915" for this suite. 
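The pod4/pod5 conflict above comes from the scheduler treating hostIP 0.0.0.0 as covering every host IP for a given hostPort and protocol. A toy sketch of the two port specs and the overlap check; this is a simplification for illustration, not the scheduler's actual predicate code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod4Port := corev1.ContainerPort{ContainerPort: 80, HostPort: 54322, HostIP: "0.0.0.0", Protocol: corev1.ProtocolTCP}
	pod5Port := corev1.ContainerPort{ContainerPort: 80, HostPort: 54322, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}

	// 0.0.0.0 overlaps every concrete host IP for the same port/protocol,
	// so pod5 cannot land on the node where pod4 already holds the port.
	conflict := pod4Port.HostPort == pod5Port.HostPort &&
		pod4Port.Protocol == pod5Port.Protocol &&
		(pod4Port.HostIP == "0.0.0.0" || pod5Port.HostIP == "0.0.0.0" || pod4Port.HostIP == pod5Port.HostIP)

	fmt.Println("conflict on the same node:", conflict) // true, so pod5 stays Pending
}
```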
Jan 11 19:06:47.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:06:50.894: INFO: namespace sched-pred-9915 deletion completed in 23.542887769s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:452 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:06:50.898: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-pred STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-18 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 11 19:06:51.549: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 19:06:51.819: INFO: Waiting for terminating namespaces to be deleted... 
Jan 11 19:06:51.908: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test Jan 11 19:06:52.109: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.109: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:06:52.109: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.109: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:06:52.109: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.109: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:06:52.109: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.109: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:06:52.109: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test Jan 11 19:06:52.398: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:06:52.398: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:06:52.398: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:06:52.398: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container coredns ready: true, restart count 0 Jan 11 19:06:52.398: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:06:52.398: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:06:52.398: INFO: node-exporter-gp57h from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:06:52.398: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:06:52.398: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:06:52.398: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container coredns ready: true, restart count 0 Jan 11 19:06:52.398: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 
+0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:06:52.398: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:06:52.398: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:06:52.398: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:06:52.398: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:06:52.398: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:06:52.398: INFO: Container vpn-shoot ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:452 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-65cc9b53-2ea4-43c1-8ac2-a592a1f0c57e 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-65cc9b53-2ea4-43c1-8ac2-a592a1f0c57e off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label kubernetes.io/e2e-65cc9b53-2ea4-43c1-8ac2-a592a1f0c57e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:06:57.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-18" for this suite. 
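The relaunched pod lands on the labeled node because it requests that label through required node affinity. A sketch of what such an affinity stanza looks like, with the random label key and value taken from the log; the program wrapping it is illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	affinity := corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			// "required ... IgnoredDuringExecution": must match at scheduling time only.
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-65cc9b53-2ea4-43c1-8ac2-a592a1f0c57e",
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", affinity)
}
```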
Jan 11 19:07:06.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:07:09.252: INFO: namespace sched-pred-18 deletion completed in 11.492649801s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:07:09.252: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename daemonsets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-4553 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 11 19:07:10.665: INFO: Number of nodes with available pods: 0 Jan 11 19:07:10.665: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:07:11.845: INFO: Number of nodes with available pods: 1 Jan 11 19:07:11.845: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:07:12.845: INFO: Number of nodes with available pods: 2 Jan 11 19:07:12.845: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jan 11 19:07:13.294: INFO: Number of nodes with available pods: 1 Jan 11 19:07:13.294: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:07:14.475: INFO: Number of nodes with available pods: 1 Jan 11 19:07:14.475: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:07:15.474: INFO: Number of nodes with available pods: 2 Jan 11 19:07:15.474: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
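The "Number of running nodes / available pods" checks above compare the DaemonSet's desired and available counters until the revived pod is available again. A rough standalone poll of the same counters via kubectl; the namespace and DaemonSet name are taken from the log, this is not the test's own helper, and it is only meaningful while that namespace still exists:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"time"
)

// dsStatus picks out just the two counters the log lines compare.
type dsStatus struct {
	Status struct {
		DesiredNumberScheduled int32 `json:"desiredNumberScheduled"`
		NumberAvailable        int32 `json:"numberAvailable"`
	} `json:"status"`
}

func main() {
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "-n", "daemonsets-4553",
			"get", "daemonset", "daemon-set", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var ds dsStatus
		if err := json.Unmarshal(out, &ds); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("desired=%d available=%d\n", ds.Status.DesiredNumberScheduled, ds.Status.NumberAvailable)
		if ds.Status.DesiredNumberScheduled > 0 && ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled {
			return // every node is running an available daemon pod again
		}
		time.Sleep(time.Second)
	}
	log.Fatal("daemon pods were not revived in time")
}
```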
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4553, will wait for the garbage collector to delete the pods Jan 11 19:07:15.934: INFO: Deleting DaemonSet.extensions daemon-set took: 90.989874ms Jan 11 19:07:16.034: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.305271ms Jan 11 19:07:18.924: INFO: Number of nodes with available pods: 0 Jan 11 19:07:18.924: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 19:07:19.014: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4553/daemonsets","resourceVersion":"37446"},"items":null} Jan 11 19:07:19.103: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4553/pods","resourceVersion":"37446"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:07:19.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4553" for this suite. Jan 11 19:07:25.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:07:28.950: INFO: namespace daemonsets-4553 deletion completed in 9.487376535s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:520 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:07:28.951: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9854 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] Stress with local volumes [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:445 STEP: Setting up 10 local volumes on node "ip-10-250-27-25.ec2.internal" STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-13c81b79-20ca-42d1-98b8-5adcd46369c3" Jan 11 19:07:32.042: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-13c81b79-20ca-42d1-98b8-5adcd46369c3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-13c81b79-20ca-42d1-98b8-5adcd46369c3" "/tmp/local-volume-test-13c81b79-20ca-42d1-98b8-5adcd46369c3"' Jan 11 19:07:33.363: INFO: stderr: "" Jan 11 19:07:33.363: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-29b8c911-f5c3-4e30-b0c3-2b640bd4f382" Jan 11 19:07:33.363: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-29b8c911-f5c3-4e30-b0c3-2b640bd4f382" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-29b8c911-f5c3-4e30-b0c3-2b640bd4f382" "/tmp/local-volume-test-29b8c911-f5c3-4e30-b0c3-2b640bd4f382"' Jan 11 19:07:34.677: INFO: stderr: "" Jan 11 19:07:34.677: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-5236cd16-d0e5-4746-ae42-17721c552e61" Jan 11 19:07:34.677: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5236cd16-d0e5-4746-ae42-17721c552e61" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5236cd16-d0e5-4746-ae42-17721c552e61" "/tmp/local-volume-test-5236cd16-d0e5-4746-ae42-17721c552e61"' Jan 11 19:07:35.981: INFO: stderr: "" Jan 11 19:07:35.981: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-53fc645b-f503-4f9b-a52a-da875a7d766a" Jan 11 19:07:35.981: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-53fc645b-f503-4f9b-a52a-da875a7d766a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-53fc645b-f503-4f9b-a52a-da875a7d766a" "/tmp/local-volume-test-53fc645b-f503-4f9b-a52a-da875a7d766a"' Jan 11 19:07:37.258: INFO: stderr: "" Jan 11 19:07:37.258: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-16eeff24-6e8e-4b32-aa35-639438b63e8d" Jan 11 19:07:37.258: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-16eeff24-6e8e-4b32-aa35-639438b63e8d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-16eeff24-6e8e-4b32-aa35-639438b63e8d" "/tmp/local-volume-test-16eeff24-6e8e-4b32-aa35-639438b63e8d"' Jan 11 19:07:38.577: INFO: stderr: "" Jan 11 19:07:38.577: INFO: stdout: "" STEP: Creating tmpfs mount 
point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-89c52ba4-fdd4-4a9e-9ca9-083bed356165" Jan 11 19:07:38.578: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-89c52ba4-fdd4-4a9e-9ca9-083bed356165" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-89c52ba4-fdd4-4a9e-9ca9-083bed356165" "/tmp/local-volume-test-89c52ba4-fdd4-4a9e-9ca9-083bed356165"' Jan 11 19:07:39.893: INFO: stderr: "" Jan 11 19:07:39.893: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-f14e2045-e72a-4d67-9641-90e7eb8756a5" Jan 11 19:07:39.893: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f14e2045-e72a-4d67-9641-90e7eb8756a5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f14e2045-e72a-4d67-9641-90e7eb8756a5" "/tmp/local-volume-test-f14e2045-e72a-4d67-9641-90e7eb8756a5"' Jan 11 19:07:41.195: INFO: stderr: "" Jan 11 19:07:41.195: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-ae21742b-cabb-4bba-99e5-b50f1590084e" Jan 11 19:07:41.195: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ae21742b-cabb-4bba-99e5-b50f1590084e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ae21742b-cabb-4bba-99e5-b50f1590084e" "/tmp/local-volume-test-ae21742b-cabb-4bba-99e5-b50f1590084e"' Jan 11 19:07:42.465: INFO: stderr: "" Jan 11 19:07:42.466: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-ca84dc39-8e61-4b11-a588-360770c1f2e6" Jan 11 19:07:42.466: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ca84dc39-8e61-4b11-a588-360770c1f2e6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ca84dc39-8e61-4b11-a588-360770c1f2e6" "/tmp/local-volume-test-ca84dc39-8e61-4b11-a588-360770c1f2e6"' Jan 11 19:07:43.846: INFO: stderr: "" Jan 11 19:07:43.846: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-5de04a66-ecf3-4b26-a1fb-ed7e5dffdb6f" Jan 11 19:07:43.847: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 
hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5de04a66-ecf3-4b26-a1fb-ed7e5dffdb6f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5de04a66-ecf3-4b26-a1fb-ed7e5dffdb6f" "/tmp/local-volume-test-5de04a66-ecf3-4b26-a1fb-ed7e5dffdb6f"' Jan 11 19:07:45.120: INFO: stderr: "" Jan 11 19:07:45.120: INFO: stdout: "" STEP: Setting up 10 local volumes on node "ip-10-250-7-77.ec2.internal" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-b4ce3796-a431-4b65-a0ee-2edb1a6002b7" Jan 11 19:07:47.389: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b4ce3796-a431-4b65-a0ee-2edb1a6002b7" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b4ce3796-a431-4b65-a0ee-2edb1a6002b7" "/tmp/local-volume-test-b4ce3796-a431-4b65-a0ee-2edb1a6002b7"' Jan 11 19:07:48.712: INFO: stderr: "" Jan 11 19:07:48.713: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-2b8fa15c-8e85-4cdd-8e95-4ca13dbeb67e" Jan 11 19:07:48.713: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2b8fa15c-8e85-4cdd-8e95-4ca13dbeb67e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2b8fa15c-8e85-4cdd-8e95-4ca13dbeb67e" "/tmp/local-volume-test-2b8fa15c-8e85-4cdd-8e95-4ca13dbeb67e"' Jan 11 19:07:49.995: INFO: stderr: "" Jan 11 19:07:49.995: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-9484c4f2-2492-4484-946b-f749249e16c5" Jan 11 19:07:49.995: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9484c4f2-2492-4484-946b-f749249e16c5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9484c4f2-2492-4484-946b-f749249e16c5" "/tmp/local-volume-test-9484c4f2-2492-4484-946b-f749249e16c5"' Jan 11 19:07:51.288: INFO: stderr: "" Jan 11 19:07:51.288: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-da99e2c6-d4bc-45eb-8587-1ec50eb6b702" Jan 11 19:07:51.288: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-da99e2c6-d4bc-45eb-8587-1ec50eb6b702" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-da99e2c6-d4bc-45eb-8587-1ec50eb6b702" 
"/tmp/local-volume-test-da99e2c6-d4bc-45eb-8587-1ec50eb6b702"' Jan 11 19:07:52.570: INFO: stderr: "" Jan 11 19:07:52.570: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-a3e12eb9-1501-41fe-bb34-02d7c3a0b593" Jan 11 19:07:52.570: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a3e12eb9-1501-41fe-bb34-02d7c3a0b593" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a3e12eb9-1501-41fe-bb34-02d7c3a0b593" "/tmp/local-volume-test-a3e12eb9-1501-41fe-bb34-02d7c3a0b593"' Jan 11 19:07:53.862: INFO: stderr: "" Jan 11 19:07:53.862: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-1d2d80b5-5ee5-4f6a-8183-53db60909d26" Jan 11 19:07:53.862: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1d2d80b5-5ee5-4f6a-8183-53db60909d26" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1d2d80b5-5ee5-4f6a-8183-53db60909d26" "/tmp/local-volume-test-1d2d80b5-5ee5-4f6a-8183-53db60909d26"' Jan 11 19:07:55.138: INFO: stderr: "" Jan 11 19:07:55.138: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-05e5450b-0920-4bf8-90ee-087a7519452f" Jan 11 19:07:55.138: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-05e5450b-0920-4bf8-90ee-087a7519452f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-05e5450b-0920-4bf8-90ee-087a7519452f" "/tmp/local-volume-test-05e5450b-0920-4bf8-90ee-087a7519452f"' Jan 11 19:07:56.411: INFO: stderr: "" Jan 11 19:07:56.411: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-42451ede-e262-495b-9a9c-abde801feacf" Jan 11 19:07:56.411: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-42451ede-e262-495b-9a9c-abde801feacf" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-42451ede-e262-495b-9a9c-abde801feacf" "/tmp/local-volume-test-42451ede-e262-495b-9a9c-abde801feacf"' Jan 11 19:07:57.695: INFO: stderr: "" Jan 11 19:07:57.695: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-20f2ea20-4168-440d-a0b8-d184e3466fed" Jan 11 19:07:57.695: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-20f2ea20-4168-440d-a0b8-d184e3466fed" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-20f2ea20-4168-440d-a0b8-d184e3466fed" "/tmp/local-volume-test-20f2ea20-4168-440d-a0b8-d184e3466fed"' Jan 11 19:07:59.041: INFO: stderr: "" Jan 11 19:07:59.041: INFO: stdout: "" STEP: Creating tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-228cb7b6-466e-4d79-9dcd-aafcf4a9a5ce" Jan 11 19:07:59.041: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-228cb7b6-466e-4d79-9dcd-aafcf4a9a5ce" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-228cb7b6-466e-4d79-9dcd-aafcf4a9a5ce" "/tmp/local-volume-test-228cb7b6-466e-4d79-9dcd-aafcf4a9a5ce"' Jan 11 19:08:00.378: INFO: stderr: "" Jan 11 19:08:00.378: INFO: stdout: "" STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:520 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully Jan 11 19:08:06.793: INFO: Deleting pod security-context-b44b4e7b-dfae-4df7-9edd-dbed7d2d8f5f Jan 11 19:08:06.887: INFO: Deleting PersistentVolumeClaim "pvc-ndw4f" Jan 11 19:08:06.977: INFO: Deleting PersistentVolumeClaim "pvc-srttm" Jan 11 19:08:07.067: INFO: Deleting PersistentVolumeClaim "pvc-7b8ws" STEP: Delete "local-pvg7sdq" and create a new PV for same local volume storage Jan 11 19:08:07.158: INFO: 1/28 pods finished Jan 11 19:08:07.158: INFO: Deleting pod security-context-faf280b7-123f-4530-963e-3c3250236f5f Jan 11 19:08:07.250: INFO: Deleting PersistentVolumeClaim "pvc-vfd47" Jan 11 19:08:07.340: INFO: Deleting PersistentVolumeClaim "pvc-5wbtm" STEP: Delete "local-pv7nh55" and create a new PV for same local volume storage Jan 11 19:08:07.431: INFO: Deleting PersistentVolumeClaim "pvc-l5hcd" Jan 11 19:08:07.521: INFO: 2/28 pods finished STEP: Delete "local-pv2dhss" and create a new PV for same local volume storage Jan 11 19:08:07.793: INFO: Deleting pod security-context-6e801a67-3c2d-4712-93ce-b60767fa277f Jan 11 19:08:07.885: INFO: Deleting PersistentVolumeClaim "pvc-vjhqf" STEP: Delete "local-pv596pr" and create a new PV for same local volume storage Jan 11 19:08:07.975: INFO: Deleting PersistentVolumeClaim "pvc-spfrq" Jan 11 19:08:08.064: INFO: Deleting PersistentVolumeClaim "pvc-sh8vl" Jan 11 19:08:08.155: INFO: 3/28 pods finished STEP: Delete "local-pv4ch6x" and create a new PV for same local volume storage STEP: Delete "local-pvpdzn9" and create a new PV for same local volume storage Jan 11 19:08:08.792: INFO: Deleting pod security-context-b4f6c2dd-8fcf-4523-83b6-5985889e29d2 Jan 11 19:08:08.885: INFO: Deleting PersistentVolumeClaim "pvc-zjfll" STEP: Delete "local-pvgvbt4" and create a new PV for same local volume storage Jan 11 19:08:08.975: INFO: Deleting 
PersistentVolumeClaim "pvc-jvn9g" Jan 11 19:08:09.066: INFO: Deleting PersistentVolumeClaim "pvc-bg77m" Jan 11 19:08:09.155: INFO: 4/28 pods finished Jan 11 19:08:09.156: INFO: Deleting pod security-context-c45d3f59-91d6-4324-8010-1daa8afe6dfe Jan 11 19:08:09.249: INFO: Deleting PersistentVolumeClaim "pvc-6l55c" STEP: Delete "local-pvq4ht2" and create a new PV for same local volume storage Jan 11 19:08:09.339: INFO: Deleting PersistentVolumeClaim "pvc-w7mh8" Jan 11 19:08:09.429: INFO: Deleting PersistentVolumeClaim "pvc-k9wl8" Jan 11 19:08:09.519: INFO: 5/28 pods finished Jan 11 19:08:09.519: INFO: Deleting pod security-context-cef3aac2-897c-41f8-a8e7-43ac5f861390 STEP: Delete "local-pvl8x2r" and create a new PV for same local volume storage Jan 11 19:08:09.612: INFO: Deleting PersistentVolumeClaim "pvc-w8pxt" Jan 11 19:08:09.702: INFO: Deleting PersistentVolumeClaim "pvc-drdkb" Jan 11 19:08:09.792: INFO: Deleting PersistentVolumeClaim "pvc-mcjwv" Jan 11 19:08:09.881: INFO: 6/28 pods finished STEP: Delete "local-pvjzph4" and create a new PV for same local volume storage STEP: Delete "local-pvxslmt" and create a new PV for same local volume storage STEP: Delete "local-pvl4qv6" and create a new PV for same local volume storage STEP: Delete "local-pv8lkjt" and create a new PV for same local volume storage STEP: Delete "local-pvd6v89" and create a new PV for same local volume storage STEP: Delete "local-pvl8ws6" and create a new PV for same local volume storage STEP: Delete "local-pvd7rz7" and create a new PV for same local volume storage STEP: Delete "local-pvnkv6c" and create a new PV for same local volume storage STEP: Delete "local-pv59crx" and create a new PV for same local volume storage Jan 11 19:08:13.793: INFO: Deleting pod security-context-d9d3b460-687e-466b-8fbe-54309e12ce35 Jan 11 19:08:13.884: INFO: Deleting PersistentVolumeClaim "pvc-xg252" Jan 11 19:08:13.974: INFO: Deleting PersistentVolumeClaim "pvc-7q5m8" Jan 11 19:08:14.064: INFO: Deleting PersistentVolumeClaim "pvc-t29k9" STEP: Delete "local-pvz5g8q" and create a new PV for same local volume storage Jan 11 19:08:14.155: INFO: 7/28 pods finished STEP: Delete "local-pvlzrdf" and create a new PV for same local volume storage STEP: Delete "local-pvlw9dg" and create a new PV for same local volume storage Jan 11 19:08:14.794: INFO: Deleting pod security-context-b76a7513-d261-4f54-bea6-1ea1398e916b Jan 11 19:08:14.886: INFO: Deleting PersistentVolumeClaim "pvc-dlf59" Jan 11 19:08:14.976: INFO: Deleting PersistentVolumeClaim "pvc-x4dfr" Jan 11 19:08:15.067: INFO: Deleting PersistentVolumeClaim "pvc-cwqfx" STEP: Delete "local-pvwm4fl" and create a new PV for same local volume storage Jan 11 19:08:15.157: INFO: 8/28 pods finished STEP: Delete "local-pvjbtwv" and create a new PV for same local volume storage STEP: Delete "local-pvtltdm" and create a new PV for same local volume storage Jan 11 19:08:15.793: INFO: Deleting pod security-context-1493e1a0-8050-48bb-92b7-3f04f6e4e83a Jan 11 19:08:15.886: INFO: Deleting PersistentVolumeClaim "pvc-f4htt" Jan 11 19:08:15.976: INFO: Deleting PersistentVolumeClaim "pvc-npcbr" Jan 11 19:08:16.066: INFO: Deleting PersistentVolumeClaim "pvc-fs4vs" Jan 11 19:08:16.157: INFO: 9/28 pods finished Jan 11 19:08:16.157: INFO: Deleting pod security-context-22f35d2f-8d80-4ba5-9e7e-bc9608401f90 STEP: Delete "local-pv2mpms" and create a new PV for same local volume storage Jan 11 19:08:16.249: INFO: Deleting PersistentVolumeClaim "pvc-47fvz" Jan 11 19:08:16.339: INFO: Deleting PersistentVolumeClaim 
"pvc-gjtmf" Jan 11 19:08:16.430: INFO: Deleting PersistentVolumeClaim "pvc-kwln9" STEP: Delete "local-pv6jws7" and create a new PV for same local volume storage Jan 11 19:08:16.520: INFO: 10/28 pods finished Jan 11 19:08:16.520: INFO: Deleting pod security-context-a409546c-eb2e-4a96-b3ee-75b93d847b4d Jan 11 19:08:16.612: INFO: Deleting PersistentVolumeClaim "pvc-k9f7q" Jan 11 19:08:16.703: INFO: Deleting PersistentVolumeClaim "pvc-q2rh5" STEP: Delete "local-pvphxxl" and create a new PV for same local volume storage Jan 11 19:08:16.793: INFO: Deleting PersistentVolumeClaim "pvc-vz8h8" Jan 11 19:08:16.883: INFO: 11/28 pods finished STEP: Delete "local-pvhwz8c" and create a new PV for same local volume storage STEP: Delete "local-pvqnxnj" and create a new PV for same local volume storage STEP: Delete "local-pvcgqlb" and create a new PV for same local volume storage STEP: Delete "local-pvfg7b2" and create a new PV for same local volume storage STEP: Delete "local-pv8qdnr" and create a new PV for same local volume storage STEP: Delete "local-pvprs5m" and create a new PV for same local volume storage Jan 11 19:08:19.729: INFO: Deleting pod security-context-7ec581d7-c78d-4956-95b2-d6726b64a7a3 Jan 11 19:08:19.822: INFO: Deleting PersistentVolumeClaim "pvc-msx76" Jan 11 19:08:19.912: INFO: Deleting PersistentVolumeClaim "pvc-mxh7r" Jan 11 19:08:20.003: INFO: Deleting PersistentVolumeClaim "pvc-g9h86" STEP: Delete "local-pvd6nps" and create a new PV for same local volume storage Jan 11 19:08:20.093: INFO: 12/28 pods finished STEP: Delete "local-pvvv88f" and create a new PV for same local volume storage STEP: Delete "local-pvchmgd" and create a new PV for same local volume storage Jan 11 19:08:20.793: INFO: Deleting pod security-context-8378d2a3-6b8e-4352-894d-03d1d81b2d26 Jan 11 19:08:20.884: INFO: Deleting PersistentVolumeClaim "pvc-4b7gg" Jan 11 19:08:20.975: INFO: Deleting PersistentVolumeClaim "pvc-gjmpz" Jan 11 19:08:21.065: INFO: Deleting PersistentVolumeClaim "pvc-88lzp" STEP: Delete "local-pvm2f5j" and create a new PV for same local volume storage Jan 11 19:08:21.155: INFO: 13/28 pods finished Jan 11 19:08:21.155: INFO: Deleting pod security-context-f13752b1-b1ca-43d2-9250-71af8b549bdf Jan 11 19:08:21.248: INFO: Deleting PersistentVolumeClaim "pvc-86k4x" Jan 11 19:08:21.339: INFO: Deleting PersistentVolumeClaim "pvc-gf95v" STEP: Delete "local-pvj2rl6" and create a new PV for same local volume storage Jan 11 19:08:21.429: INFO: Deleting PersistentVolumeClaim "pvc-k6gkq" Jan 11 19:08:21.521: INFO: 14/28 pods finished STEP: Delete "local-pv5t96j" and create a new PV for same local volume storage Jan 11 19:08:21.793: INFO: Deleting pod security-context-25a05601-24ea-4d47-93bc-e9649c562eec Jan 11 19:08:21.886: INFO: Deleting PersistentVolumeClaim "pvc-sjrp8" Jan 11 19:08:21.976: INFO: Deleting PersistentVolumeClaim "pvc-vns4r" STEP: Delete "local-pvnlqxw" and create a new PV for same local volume storage Jan 11 19:08:22.067: INFO: Deleting PersistentVolumeClaim "pvc-qf7xg" Jan 11 19:08:22.157: INFO: 15/28 pods finished STEP: Delete "local-pvlc9rm" and create a new PV for same local volume storage STEP: Delete "local-pvwfgnj" and create a new PV for same local volume storage Jan 11 19:08:22.793: INFO: Deleting pod security-context-1660f368-f3ca-4cd5-a5fe-29e557be849e Jan 11 19:08:22.885: INFO: Deleting PersistentVolumeClaim "pvc-djb2b" Jan 11 19:08:22.975: INFO: Deleting PersistentVolumeClaim "pvc-q6k64" STEP: Delete "local-pv47kcv" and create a new PV for same local volume storage Jan 11 
19:08:23.066: INFO: Deleting PersistentVolumeClaim "pvc-s7vhc" Jan 11 19:08:23.156: INFO: 16/28 pods finished Jan 11 19:08:23.156: INFO: Deleting pod security-context-77a92f47-b2de-483f-9221-22da1228917a Jan 11 19:08:23.249: INFO: Deleting PersistentVolumeClaim "pvc-jvngd" STEP: Delete "local-pvhqr97" and create a new PV for same local volume storage Jan 11 19:08:23.339: INFO: Deleting PersistentVolumeClaim "pvc-vcxh2" Jan 11 19:08:23.430: INFO: Deleting PersistentVolumeClaim "pvc-j8lk2" Jan 11 19:08:23.521: INFO: 17/28 pods finished STEP: Delete "local-pvtqpvf" and create a new PV for same local volume storage STEP: Delete "local-pv8ftg6" and create a new PV for same local volume storage STEP: Delete "local-pvx78gp" and create a new PV for same local volume storage STEP: Delete "local-pvhnbvw" and create a new PV for same local volume storage STEP: Delete "local-pvn8nl5" and create a new PV for same local volume storage STEP: Delete "local-pv6w69m" and create a new PV for same local volume storage STEP: Delete "local-pvh7hkt" and create a new PV for same local volume storage Jan 11 19:08:25.792: INFO: Deleting pod security-context-d69c17b5-c608-492a-b996-b6a57c5631cf Jan 11 19:08:25.885: INFO: Deleting PersistentVolumeClaim "pvc-qt2nj" Jan 11 19:08:25.975: INFO: Deleting PersistentVolumeClaim "pvc-rvmkl" Jan 11 19:08:26.066: INFO: Deleting PersistentVolumeClaim "pvc-j4tk9" Jan 11 19:08:26.156: INFO: 18/28 pods finished STEP: Delete "local-pvn4twn" and create a new PV for same local volume storage STEP: Delete "local-pvhczqr" and create a new PV for same local volume storage STEP: Delete "local-pvsld48" and create a new PV for same local volume storage Jan 11 19:08:30.247: INFO: Deleting pod security-context-211c0c32-2ddc-4577-b046-5a125ec1ce80 Jan 11 19:08:30.340: INFO: Deleting PersistentVolumeClaim "pvc-ppgpj" Jan 11 19:08:30.431: INFO: Deleting PersistentVolumeClaim "pvc-pfk8q" Jan 11 19:08:30.521: INFO: Deleting PersistentVolumeClaim "pvc-99hxx" STEP: Delete "local-pvhp8zl" and create a new PV for same local volume storage Jan 11 19:08:30.611: INFO: 19/28 pods finished Jan 11 19:08:30.611: INFO: Deleting pod security-context-b5cca1c1-f82a-44aa-a9b8-9b182ec0392c Jan 11 19:08:30.704: INFO: Deleting PersistentVolumeClaim "pvc-44xhc" Jan 11 19:08:30.795: INFO: Deleting PersistentVolumeClaim "pvc-l6crk" STEP: Delete "local-pvlcfgc" and create a new PV for same local volume storage Jan 11 19:08:30.886: INFO: Deleting PersistentVolumeClaim "pvc-xgjzh" Jan 11 19:08:30.977: INFO: 20/28 pods finished STEP: Delete "local-pvwn4q8" and create a new PV for same local volume storage STEP: Delete "local-pv7gwcq" and create a new PV for same local volume storage STEP: Delete "local-pv4wfcd" and create a new PV for same local volume storage Jan 11 19:08:31.793: INFO: Deleting pod security-context-5d276367-152e-4e51-a7fb-c01209abbcea Jan 11 19:08:31.885: INFO: Deleting PersistentVolumeClaim "pvc-7d8tv" Jan 11 19:08:31.975: INFO: Deleting PersistentVolumeClaim "pvc-59nhn" STEP: Delete "local-pvxnf9t" and create a new PV for same local volume storage Jan 11 19:08:32.065: INFO: Deleting PersistentVolumeClaim "pvc-4s7l9" Jan 11 19:08:32.155: INFO: 21/28 pods finished STEP: Delete "local-pvl4fl4" and create a new PV for same local volume storage Jan 11 19:08:32.793: INFO: Deleting pod security-context-2f5c8a66-b72d-4e3d-bc49-07e75e5ba7e1 STEP: Delete "local-pvnj6td" and create a new PV for same local volume storage Jan 11 19:08:32.886: INFO: Deleting PersistentVolumeClaim "pvc-zdc2x" Jan 11 19:08:32.977: 
INFO: Deleting PersistentVolumeClaim "pvc-57thj" Jan 11 19:08:33.067: INFO: Deleting PersistentVolumeClaim "pvc-d48tg" Jan 11 19:08:33.158: INFO: 22/28 pods finished Jan 11 19:08:33.158: INFO: Deleting pod security-context-89e5e119-c535-462e-a09b-9e721b9782a7 STEP: Delete "local-pv9vwgd" and create a new PV for same local volume storage Jan 11 19:08:33.250: INFO: Deleting PersistentVolumeClaim "pvc-m6rzr" Jan 11 19:08:33.341: INFO: Deleting PersistentVolumeClaim "pvc-f8jhh" Jan 11 19:08:33.431: INFO: Deleting PersistentVolumeClaim "pvc-mvnk9" Jan 11 19:08:33.522: INFO: 23/28 pods finished Jan 11 19:08:33.522: INFO: Deleting pod security-context-f8b2131b-bbe7-4a4d-8715-7ebcb3bbf80d Jan 11 19:08:33.614: INFO: Deleting PersistentVolumeClaim "pvc-8x7ql" STEP: Delete "local-pv5mn7v" and create a new PV for same local volume storage Jan 11 19:08:33.705: INFO: Deleting PersistentVolumeClaim "pvc-vsd28" Jan 11 19:08:33.795: INFO: Deleting PersistentVolumeClaim "pvc-djqzb" Jan 11 19:08:33.884: INFO: 24/28 pods finished STEP: Delete "local-pvp2qgm" and create a new PV for same local volume storage STEP: Delete "local-pvndmld" and create a new PV for same local volume storage STEP: Delete "local-pv9j68p" and create a new PV for same local volume storage STEP: Delete "local-pvtpvdr" and create a new PV for same local volume storage STEP: Delete "local-pvwn5vg" and create a new PV for same local volume storage STEP: Delete "local-pv8s299" and create a new PV for same local volume storage STEP: Delete "local-pv2zvgh" and create a new PV for same local volume storage Jan 11 19:08:35.793: INFO: Deleting pod security-context-5056de08-697c-4cd2-9e61-e3d7143b6953 Jan 11 19:08:35.885: INFO: Deleting PersistentVolumeClaim "pvc-smfqc" Jan 11 19:08:35.975: INFO: Deleting PersistentVolumeClaim "pvc-9wbbb" STEP: Delete "local-pvsfgq5" and create a new PV for same local volume storage Jan 11 19:08:36.065: INFO: Deleting PersistentVolumeClaim "pvc-ltdvt" Jan 11 19:08:36.155: INFO: 25/28 pods finished Jan 11 19:08:36.792: INFO: Deleting pod security-context-7520e341-61a1-47be-ba90-f035c447b0bf Jan 11 19:08:36.886: INFO: Deleting PersistentVolumeClaim "pvc-x7z6b" STEP: Delete "local-pvzxl9m" and create a new PV for same local volume storage Jan 11 19:08:36.976: INFO: Deleting PersistentVolumeClaim "pvc-jv4hw" Jan 11 19:08:37.067: INFO: Deleting PersistentVolumeClaim "pvc-gc9mt" Jan 11 19:08:37.157: INFO: 26/28 pods finished STEP: Delete "local-pv9zg45" and create a new PV for same local volume storage STEP: Delete "local-pvqp4pz" and create a new PV for same local volume storage Jan 11 19:08:37.792: INFO: Deleting pod security-context-0f783595-7a6e-465c-8e88-2aa4985fba4d STEP: Delete "local-pv2wf5q" and create a new PV for same local volume storage Jan 11 19:08:37.886: INFO: Deleting PersistentVolumeClaim "pvc-z5gqs" Jan 11 19:08:37.976: INFO: Deleting PersistentVolumeClaim "pvc-4nrt7" Jan 11 19:08:38.067: INFO: Deleting PersistentVolumeClaim "pvc-4wxm4" Jan 11 19:08:38.157: INFO: 27/28 pods finished Jan 11 19:08:38.157: INFO: Deleting pod security-context-94ccfa8f-9226-4aab-8f9f-969b55daf101 STEP: Delete "local-pvtkl6c" and create a new PV for same local volume storage Jan 11 19:08:38.251: INFO: Deleting PersistentVolumeClaim "pvc-t7rwp" Jan 11 19:08:38.342: INFO: Deleting PersistentVolumeClaim "pvc-nxhzt" Jan 11 19:08:38.432: INFO: Deleting PersistentVolumeClaim "pvc-hjjr9" STEP: Delete "local-pv645px" and create a new PV for same local volume storage Jan 11 19:08:38.522: INFO: 28/28 pods finished [AfterEach] Stress 
with local volumes [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:508 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "ip-10-250-27-25.ec2.internal" STEP: Cleaning up PVC and PV Jan 11 19:08:38.708: INFO: pvc is nil Jan 11 19:08:38.708: INFO: Deleting PersistentVolume "local-pv6r5fh" STEP: Cleaning up PVC and PV Jan 11 19:08:38.799: INFO: pvc is nil Jan 11 19:08:38.799: INFO: Deleting PersistentVolume "local-pvx5jjt" STEP: Cleaning up PVC and PV Jan 11 19:08:38.889: INFO: pvc is nil Jan 11 19:08:38.889: INFO: Deleting PersistentVolume "local-pvkb9w4" STEP: Cleaning up PVC and PV Jan 11 19:08:38.979: INFO: pvc is nil Jan 11 19:08:38.979: INFO: Deleting PersistentVolume "local-pvdg7kc" STEP: Cleaning up PVC and PV Jan 11 19:08:39.069: INFO: pvc is nil Jan 11 19:08:39.069: INFO: Deleting PersistentVolume "local-pv2wtk6" STEP: Cleaning up PVC and PV Jan 11 19:08:39.160: INFO: pvc is nil Jan 11 19:08:39.160: INFO: Deleting PersistentVolume "local-pvbxdd4" STEP: Cleaning up PVC and PV Jan 11 19:08:39.250: INFO: pvc is nil Jan 11 19:08:39.250: INFO: Deleting PersistentVolume "local-pvxvg2j" STEP: Cleaning up PVC and PV Jan 11 19:08:39.340: INFO: pvc is nil Jan 11 19:08:39.340: INFO: Deleting PersistentVolume "local-pv9rv5l" STEP: Cleaning up PVC and PV Jan 11 19:08:39.430: INFO: pvc is nil Jan 11 19:08:39.430: INFO: Deleting PersistentVolume "local-pvq7dpx" STEP: Cleaning up PVC and PV Jan 11 19:08:39.521: INFO: pvc is nil Jan 11 19:08:39.521: INFO: Deleting PersistentVolume "local-pv49nv5" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-13c81b79-20ca-42d1-98b8-5adcd46369c3" Jan 11 19:08:39.612: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-13c81b79-20ca-42d1-98b8-5adcd46369c3"' Jan 11 19:08:41.391: INFO: stderr: "" Jan 11 19:08:41.392: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:08:41.392: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-13c81b79-20ca-42d1-98b8-5adcd46369c3' Jan 11 19:08:42.662: INFO: stderr: "" Jan 11 19:08:42.662: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-29b8c911-f5c3-4e30-b0c3-2b640bd4f382" Jan 11 19:08:42.662: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-29b8c911-f5c3-4e30-b0c3-2b640bd4f382"' Jan 11 19:08:43.944: INFO: stderr: "" Jan 11 19:08:43.944: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:08:43.944: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-29b8c911-f5c3-4e30-b0c3-2b640bd4f382' Jan 11 19:08:45.219: INFO: stderr: "" Jan 11 19:08:45.219: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-5236cd16-d0e5-4746-ae42-17721c552e61" Jan 11 19:08:45.219: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5236cd16-d0e5-4746-ae42-17721c552e61"' Jan 11 19:08:46.488: INFO: stderr: "" Jan 11 19:08:46.488: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:08:46.488: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5236cd16-d0e5-4746-ae42-17721c552e61' Jan 11 19:08:47.766: INFO: stderr: "" Jan 11 19:08:47.766: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-53fc645b-f503-4f9b-a52a-da875a7d766a" Jan 11 19:08:47.766: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-53fc645b-f503-4f9b-a52a-da875a7d766a"' Jan 11 19:08:49.033: INFO: stderr: "" Jan 11 19:08:49.033: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:08:49.033: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-53fc645b-f503-4f9b-a52a-da875a7d766a' Jan 11 19:08:50.297: INFO: stderr: "" Jan 11 19:08:50.297: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-16eeff24-6e8e-4b32-aa35-639438b63e8d" Jan 11 19:08:50.297: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-16eeff24-6e8e-4b32-aa35-639438b63e8d"' Jan 11 19:08:51.567: INFO: stderr: "" Jan 11 19:08:51.567: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:08:51.567: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-16eeff24-6e8e-4b32-aa35-639438b63e8d' Jan 11 19:08:52.878: INFO: stderr: "" Jan 11 19:08:52.878: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-89c52ba4-fdd4-4a9e-9ca9-083bed356165" Jan 11 19:08:52.878: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-89c52ba4-fdd4-4a9e-9ca9-083bed356165"' Jan 11 19:08:54.152: INFO: stderr: "" Jan 11 19:08:54.152: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:08:54.152: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-89c52ba4-fdd4-4a9e-9ca9-083bed356165' Jan 11 19:08:55.426: INFO: stderr: "" Jan 11 19:08:55.426: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-f14e2045-e72a-4d67-9641-90e7eb8756a5" Jan 11 19:08:55.426: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f14e2045-e72a-4d67-9641-90e7eb8756a5"' Jan 11 19:10:00.915: INFO: stderr: "" Jan 11 19:10:00.915: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:00.915: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f14e2045-e72a-4d67-9641-90e7eb8756a5' Jan 11 19:10:02.205: INFO: stderr: "" Jan 11 19:10:02.205: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-ae21742b-cabb-4bba-99e5-b50f1590084e" Jan 11 19:10:02.205: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ae21742b-cabb-4bba-99e5-b50f1590084e"' Jan 11 19:10:03.477: INFO: stderr: "" Jan 11 19:10:03.477: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:03.477: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ae21742b-cabb-4bba-99e5-b50f1590084e' Jan 11 19:10:04.746: INFO: stderr: "" Jan 11 19:10:04.746: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-ca84dc39-8e61-4b11-a588-360770c1f2e6" Jan 11 19:10:04.746: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ca84dc39-8e61-4b11-a588-360770c1f2e6"' Jan 11 19:10:06.025: INFO: stderr: "" Jan 11 19:10:06.025: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:06.025: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ca84dc39-8e61-4b11-a588-360770c1f2e6' Jan 11 19:10:07.331: INFO: stderr: "" Jan 11 19:10:07.331: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-5de04a66-ecf3-4b26-a1fb-ed7e5dffdb6f" Jan 11 19:10:07.331: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5de04a66-ecf3-4b26-a1fb-ed7e5dffdb6f"' Jan 11 19:10:08.635: INFO: stderr: "" Jan 11 19:10:08.635: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:08.635: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5de04a66-ecf3-4b26-a1fb-ed7e5dffdb6f' Jan 11 19:10:09.949: INFO: stderr: "" Jan 11 19:10:09.949: INFO: stdout: "" STEP: Cleaning up 10 local volumes on node "ip-10-250-7-77.ec2.internal" STEP: Cleaning up PVC and PV Jan 11 19:10:09.950: INFO: pvc is nil Jan 11 19:10:09.950: INFO: Deleting PersistentVolume "local-pvr2c7t" STEP: Cleaning up PVC and PV Jan 11 19:10:10.040: INFO: pvc is nil Jan 11 19:10:10.040: INFO: Deleting PersistentVolume "local-pvrntv7" STEP: Cleaning up PVC and PV Jan 11 19:10:10.130: INFO: pvc is nil Jan 11 19:10:10.130: INFO: Deleting PersistentVolume "local-pvfg628" STEP: Cleaning up PVC and PV Jan 11 19:10:10.220: INFO: pvc is nil Jan 11 19:10:10.220: INFO: Deleting PersistentVolume "local-pvgqrp9" STEP: Cleaning up PVC and PV Jan 11 19:10:10.310: INFO: pvc is nil Jan 11 19:10:10.310: INFO: Deleting PersistentVolume "local-pv27bnq" STEP: Cleaning up PVC and PV Jan 11 19:10:10.400: INFO: pvc is nil Jan 11 19:10:10.400: INFO: Deleting PersistentVolume "local-pvbj6kl" STEP: Cleaning up PVC and PV 
Jan 11 19:10:10.490: INFO: pvc is nil Jan 11 19:10:10.490: INFO: Deleting PersistentVolume "local-pvtgtgk" STEP: Cleaning up PVC and PV Jan 11 19:10:10.580: INFO: pvc is nil Jan 11 19:10:10.580: INFO: Deleting PersistentVolume "local-pvvldhp" STEP: Cleaning up PVC and PV Jan 11 19:10:10.670: INFO: pvc is nil Jan 11 19:10:10.670: INFO: Deleting PersistentVolume "local-pv9s6ds" STEP: Cleaning up PVC and PV Jan 11 19:10:10.760: INFO: pvc is nil Jan 11 19:10:10.760: INFO: Deleting PersistentVolume "local-pv94n44" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-b4ce3796-a431-4b65-a0ee-2edb1a6002b7" Jan 11 19:10:10.850: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b4ce3796-a431-4b65-a0ee-2edb1a6002b7"' Jan 11 19:10:12.149: INFO: stderr: "" Jan 11 19:10:12.149: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:12.149: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b4ce3796-a431-4b65-a0ee-2edb1a6002b7' Jan 11 19:10:13.413: INFO: stderr: "" Jan 11 19:10:13.413: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-2b8fa15c-8e85-4cdd-8e95-4ca13dbeb67e" Jan 11 19:10:13.413: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2b8fa15c-8e85-4cdd-8e95-4ca13dbeb67e"' Jan 11 19:10:14.688: INFO: stderr: "" Jan 11 19:10:14.689: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:14.689: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2b8fa15c-8e85-4cdd-8e95-4ca13dbeb67e' Jan 11 19:10:15.966: INFO: stderr: "" Jan 11 19:10:15.966: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-9484c4f2-2492-4484-946b-f749249e16c5" Jan 11 19:10:15.966: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9484c4f2-2492-4484-946b-f749249e16c5"' Jan 11 19:10:17.253: INFO: stderr: "" Jan 11 19:10:17.253: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:17.253: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9484c4f2-2492-4484-946b-f749249e16c5' Jan 11 19:10:18.577: INFO: stderr: "" Jan 11 19:10:18.577: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-da99e2c6-d4bc-45eb-8587-1ec50eb6b702" Jan 11 19:10:18.577: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-da99e2c6-d4bc-45eb-8587-1ec50eb6b702"' Jan 11 19:10:19.854: INFO: stderr: "" Jan 11 19:10:19.854: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:19.854: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-da99e2c6-d4bc-45eb-8587-1ec50eb6b702' Jan 11 19:10:21.178: INFO: stderr: "" Jan 11 19:10:21.178: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-a3e12eb9-1501-41fe-bb34-02d7c3a0b593" Jan 11 19:10:21.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a3e12eb9-1501-41fe-bb34-02d7c3a0b593"' Jan 11 19:10:22.522: INFO: stderr: "" Jan 11 19:10:22.522: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:22.522: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a3e12eb9-1501-41fe-bb34-02d7c3a0b593' Jan 11 19:10:23.849: INFO: stderr: "" Jan 11 19:10:23.849: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-1d2d80b5-5ee5-4f6a-8183-53db60909d26" Jan 11 19:10:23.850: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1d2d80b5-5ee5-4f6a-8183-53db60909d26"' Jan 11 19:10:25.150: INFO: stderr: "" Jan 11 19:10:25.150: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:25.151: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1d2d80b5-5ee5-4f6a-8183-53db60909d26' Jan 11 19:10:26.461: INFO: stderr: "" Jan 11 19:10:26.462: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-05e5450b-0920-4bf8-90ee-087a7519452f" Jan 11 19:10:26.462: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-05e5450b-0920-4bf8-90ee-087a7519452f"' Jan 11 19:10:27.839: INFO: stderr: "" Jan 11 19:10:27.839: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:27.839: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-05e5450b-0920-4bf8-90ee-087a7519452f' Jan 11 19:10:29.148: INFO: stderr: "" Jan 11 19:10:29.148: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-42451ede-e262-495b-9a9c-abde801feacf" Jan 11 19:10:29.148: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-42451ede-e262-495b-9a9c-abde801feacf"' Jan 11 19:10:30.476: INFO: stderr: "" Jan 11 19:10:30.477: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:30.477: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-42451ede-e262-495b-9a9c-abde801feacf' Jan 11 19:10:31.790: INFO: stderr: "" Jan 11 19:10:31.790: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-20f2ea20-4168-440d-a0b8-d184e3466fed" Jan 11 19:10:31.791: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-20f2ea20-4168-440d-a0b8-d184e3466fed"' Jan 11 19:10:33.086: INFO: stderr: "" Jan 11 19:10:33.086: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:33.086: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-20f2ea20-4168-440d-a0b8-d184e3466fed' Jan 11 19:10:34.389: INFO: stderr: "" Jan 11 19:10:34.389: INFO: stdout: "" STEP: Unmount tmpfs mount point on node "ip-10-250-7-77.ec2.internal" at path "/tmp/local-volume-test-228cb7b6-466e-4d79-9dcd-aafcf4a9a5ce" Jan 11 19:10:34.389: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-228cb7b6-466e-4d79-9dcd-aafcf4a9a5ce"' Jan 11 19:10:35.710: INFO: stderr: "" Jan 11 19:10:35.710: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:10:35.710: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9854 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-228cb7b6-466e-4d79-9dcd-aafcf4a9a5ce' Jan 11 19:10:37.061: INFO: stderr: "" Jan 11 19:10:37.061: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:10:37.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9854" for this suite. 
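The setup and teardown steps logged above are all driven through the hostexec pod on each node: the suite runs kubectl exec against it, enters the host's mount namespace with nsenter, and creates (and later removes) a 10 MiB tmpfs mount per local volume. The following is a condensed sketch of that command sequence, not the framework's own code; the namespace, node name, and directory are placeholders standing in for the generated values in the log, and the --server flag is omitted for brevity.

NS=persistent-local-volumes-test-9854
NODE=ip-10-250-7-77.ec2.internal
DIR=/tmp/local-volume-test-example   # placeholder; the suite generates a UUID-suffixed path

# Setup: create the directory on the host and mount a 10 MiB tmpfs over it.
# The tmpfs source name is arbitrary; the suite labels it "tmpfs-<path>".
kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace="$NS" "hostexec-$NODE" -- \
  nsenter --mount=/rootfs/proc/1/ns/mnt -- \
  sh -c "mkdir -p \"$DIR\" && mount -t tmpfs -o size=10m tmpfs-\"$DIR\" \"$DIR\""

# Teardown: unmount the tmpfs and remove the test directory again.
kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace="$NS" "hostexec-$NODE" -- \
  nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c "umount \"$DIR\""
kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace="$NS" "hostexec-$NODE" -- \
  nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c "rm -r \"$DIR\""

The "Cleaning up PVC and PV" steps in the log only delete the PersistentVolume objects (the PVCs are already gone, hence "pvc is nil"); the tmpfs mounts themselves are removed with the umount/rm pair sketched above.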
Jan 11 19:10:43.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:10:46.734: INFO: namespace persistent-local-volumes-test-9854 deletion completed in 9.490571692s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:10:46.735: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-6238 STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 11 19:10:52.142: INFO: Pod name wrapped-volume-race-f706eeb1-593c-4061-97e4-dbcd7556b8da: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f706eeb1-593c-4061-97e4-dbcd7556b8da in namespace emptydir-wrapper-6238, will wait for the garbage collector to delete the pods Jan 11 19:10:56.972: INFO: Deleting ReplicationController wrapped-volume-race-f706eeb1-593c-4061-97e4-dbcd7556b8da took: 91.74038ms Jan 11 19:10:57.472: INFO: Terminating ReplicationController wrapped-volume-race-f706eeb1-593c-4061-97e4-dbcd7556b8da pods took: 500.303011ms STEP: Creating RC which spawns configmap-volume pods Jan 11 19:11:34.249: INFO: Pod name wrapped-volume-race-11efadd1-4d0b-4f26-a129-47c44e55335c: Found 4 pods out of 5 Jan 11 19:11:39.342: INFO: Pod name wrapped-volume-race-11efadd1-4d0b-4f26-a129-47c44e55335c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-11efadd1-4d0b-4f26-a129-47c44e55335c in namespace emptydir-wrapper-6238, will wait for the garbage collector to delete the pods Jan 11 19:11:40.074: INFO: Deleting ReplicationController wrapped-volume-race-11efadd1-4d0b-4f26-a129-47c44e55335c took: 91.574429ms Jan 11 19:11:40.574: INFO: Terminating ReplicationController wrapped-volume-race-11efadd1-4d0b-4f26-a129-47c44e55335c pods took: 500.317774ms STEP: Creating RC which spawns configmap-volume pods Jan 11 19:12:24.152: INFO: Pod name wrapped-volume-race-65861a70-addd-46e1-aa4c-3aed42fe69e1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-65861a70-addd-46e1-aa4c-3aed42fe69e1 in namespace emptydir-wrapper-6238, will wait for the garbage collector to delete the pods Jan 11 19:12:29.066: INFO: Deleting ReplicationController wrapped-volume-race-65861a70-addd-46e1-aa4c-3aed42fe69e1 took: 91.65094ms Jan 11 19:12:29.566: INFO: Terminating ReplicationController 
wrapped-volume-race-65861a70-addd-46e1-aa4c-3aed42fe69e1 pods took: 500.280045ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:13:18.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6238" for this suite. Jan 11 19:13:24.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:13:28.014: INFO: namespace emptydir-wrapper-6238 deletion completed in 9.498362922s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:101 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:13:28.015: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-preemption STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-7146 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:76 Jan 11 19:13:28.923: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 19:14:29.560: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:101 STEP: Create pods that use 60% of node resources. Jan 11 19:14:29.742: INFO: Created pod: pod0-sched-preemption-low-priority Jan 11 19:14:29.833: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that use 60% of a node resources. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:14:46.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7146" for this suite. 
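The preemption spec above follows a simple pattern: it creates one low- and one medium-priority pod that each request roughly 60% of a node's allocatable resources, waits for them to schedule, then submits a high-priority pod with the same footprint and expects the scheduler to evict a lower-priority victim to make room. The sketch below reproduces that shape with plain kubectl; the PriorityClass names, priority values, image, and resource requests are illustrative placeholders and not the values the e2e framework uses.

# All names and numbers below are placeholders chosen for illustration.
kubectl create priorityclass demo-low  --value=100     --description="low priority (placeholder)"
kubectl create priorityclass demo-high --value=1000000 --description="high priority (placeholder)"

# A low-priority pod sized to ~60% of a hypothetical 2-CPU node.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod0-demo-low-priority
spec:
  priorityClassName: demo-low
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1200m"     # ~60% of 2 CPUs (placeholder)
        memory: "1Gi"    # placeholder
EOF

# Submitting a second pod with priorityClassName: demo-high and the same resource
# requests should cause the scheduler to preempt the low-priority pod once the node is full.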
Jan 11 19:14:55.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:14:58.228: INFO: namespace sched-preemption-7146 deletion completed in 11.493619384s [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:70 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:243 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:14:58.501: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename namespaces STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-1267 STEP: Waiting for a default service account to be provisioned in namespace [It] should delete fast enough (90 percent of 100 namespaces in 150 seconds) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:243 STEP: Creating testing namespaces STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-48-6490 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-22-7930 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-50-8125 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-49-9111 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-23-1261 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-39-8843 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-0-7127 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-73-2389 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-52-186 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-38-1292 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-42-3891 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-53-3915 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-40-7202 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-43-7835 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-37-8514 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-1-3303 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-51-1913 STEP: Binding the e2e-test-privileged-psp 
PodSecurityPolicy to the default service account in nslifetest-41-6637 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-36-2113 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-62-9235 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-65-4126 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-64-181 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-28-4553 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-44-9180 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-61-3695 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-99-5529 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-55-4244 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-29-9446 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-68-9001 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-63-6874 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-67-2196 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-54-2974 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-46-5758 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-24-7607 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-71-8685 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-45-8878 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-56-7692 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-25-7468 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-69-6527 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-32-9270 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-59-2732 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-27-6023 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-66-3969 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-2-2727 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-58-5834 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-60-1310 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-72-7516 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-47-2561 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in 
nslifetest-26-5750 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-31-1894 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-70-6539 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-30-6900 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-12-4061 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-57-5292 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-33-9778 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-3-4461 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-6-5243 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-4-4307 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-5-8510 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-8-3731 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-7-9591 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-86-3288 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-11-316 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-10-7274 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-9-2463 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-74-6053 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-75-1646 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-76-4797 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-34-5299 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-79-6098 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-77-7522 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-78-2957 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-80-2989 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-81-1529 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-83-5890 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-84-3249 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-85-1503 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-17-7042 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-82-2100 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-13-3153 STEP: Binding the 
e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-18-8468 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-19-7492 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-14-6490 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-20-305 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-16-2952 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-15-8077 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-87-1036 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-92-742 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-90-4470 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-88-7769 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-89-4722 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-91-3724 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-21-1082 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-95-4415 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-97-9823 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-94-979 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-93-6473 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-96-8756 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-35-2398 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nslifetest-98-1189 STEP: Waiting 10 seconds STEP: Deleting namespaces Jan 11 19:15:29.443: INFO: namespace : nslifetest-1-3303 api call to delete is complete Jan 11 19:15:29.444: INFO: namespace : nslifetest-12-4061 api call to delete is complete Jan 11 19:15:29.444: INFO: namespace : nslifetest-8-3731 api call to delete is complete Jan 11 19:15:29.445: INFO: namespace : nslifetest-99-5529 api call to delete is complete Jan 11 19:15:29.445: INFO: namespace : nslifetest-43-7835 api call to delete is complete Jan 11 19:15:29.445: INFO: namespace : nslifetest-59-2732 api call to delete is complete Jan 11 19:15:29.445: INFO: namespace : nslifetest-73-2389 api call to delete is complete Jan 11 19:15:29.533: INFO: namespace : nslifetest-16-2952 api call to delete is complete Jan 11 19:15:29.535: INFO: namespace : nslifetest-51-1913 api call to delete is complete Jan 11 19:15:29.535: INFO: namespace : nslifetest-48-6490 api call to delete is complete Jan 11 19:15:29.535: INFO: namespace : nslifetest-97-9823 api call to delete is complete Jan 11 19:15:29.535: INFO: namespace : nslifetest-2-2727 api call to delete is complete Jan 11 19:15:29.535: INFO: namespace : nslifetest-0-7127 api call to delete is complete Jan 11 19:15:29.535: INFO: namespace : nslifetest-94-979 api call to delete is complete Jan 11 19:15:29.535: 
INFO: namespace : nslifetest-79-6098 api call to delete is complete Jan 11 19:15:29.535: INFO: namespace : nslifetest-87-1036 api call to delete is complete Jan 11 19:15:29.535: INFO: namespace : nslifetest-15-8077 api call to delete is complete Jan 11 19:15:29.535: INFO: namespace : nslifetest-47-2561 api call to delete is complete Jan 11 19:15:29.536: INFO: namespace : nslifetest-70-6539 api call to delete is complete Jan 11 19:15:29.536: INFO: namespace : nslifetest-17-7042 api call to delete is complete Jan 11 19:15:29.536: INFO: namespace : nslifetest-11-316 api call to delete is complete Jan 11 19:15:29.536: INFO: namespace : nslifetest-88-7769 api call to delete is complete Jan 11 19:15:29.536: INFO: namespace : nslifetest-52-186 api call to delete is complete Jan 11 19:15:29.536: INFO: namespace : nslifetest-90-4470 api call to delete is complete Jan 11 19:15:29.536: INFO: namespace : nslifetest-71-8685 api call to delete is complete Jan 11 19:15:29.536: INFO: namespace : nslifetest-93-6473 api call to delete is complete Jan 11 19:15:29.536: INFO: namespace : nslifetest-45-8878 api call to delete is complete Jan 11 19:15:29.537: INFO: namespace : nslifetest-86-3288 api call to delete is complete Jan 11 19:15:29.537: INFO: namespace : nslifetest-19-7492 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-96-8756 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-60-1310 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-61-3695 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-80-2989 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-82-2100 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-14-6490 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-18-8468 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-81-1529 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-58-5834 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-89-4722 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-85-1503 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-83-5890 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-57-5292 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-95-4415 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-6-5243 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-44-9180 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-9-2463 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-13-3153 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-46-5758 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-92-742 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-72-7516 api call to delete is complete Jan 11 19:15:29.538: INFO: namespace : nslifetest-10-7274 api call to delete is complete Jan 11 19:15:29.542: INFO: namespace : nslifetest-20-305 api call to delete is complete Jan 11 19:15:29.593: INFO: namespace : nslifetest-53-3915 api call to delete is complete Jan 11 19:15:29.643: INFO: namespace : nslifetest-49-9111 api call to delete is complete Jan 11 19:15:29.693: 
INFO: namespace : nslifetest-62-9235 api call to delete is complete Jan 11 19:15:29.743: INFO: namespace : nslifetest-98-1189 api call to delete is complete Jan 11 19:15:29.793: INFO: namespace : nslifetest-91-3724 api call to delete is complete Jan 11 19:15:29.843: INFO: namespace : nslifetest-21-1082 api call to delete is complete Jan 11 19:15:29.894: INFO: namespace : nslifetest-54-2974 api call to delete is complete Jan 11 19:15:29.943: INFO: namespace : nslifetest-63-6874 api call to delete is complete Jan 11 19:15:29.993: INFO: namespace : nslifetest-5-8510 api call to delete is complete Jan 11 19:15:30.043: INFO: namespace : nslifetest-50-8125 api call to delete is complete Jan 11 19:15:30.093: INFO: namespace : nslifetest-55-4244 api call to delete is complete Jan 11 19:15:30.143: INFO: namespace : nslifetest-22-7930 api call to delete is complete Jan 11 19:15:30.193: INFO: namespace : nslifetest-64-181 api call to delete is complete Jan 11 19:15:30.244: INFO: namespace : nslifetest-56-7692 api call to delete is complete Jan 11 19:15:30.293: INFO: namespace : nslifetest-23-1261 api call to delete is complete Jan 11 19:15:30.343: INFO: namespace : nslifetest-65-4126 api call to delete is complete Jan 11 19:15:30.394: INFO: namespace : nslifetest-24-7607 api call to delete is complete Jan 11 19:15:30.443: INFO: namespace : nslifetest-66-3969 api call to delete is complete Jan 11 19:15:30.494: INFO: namespace : nslifetest-76-4797 api call to delete is complete Jan 11 19:15:30.544: INFO: namespace : nslifetest-84-3249 api call to delete is complete Jan 11 19:15:30.593: INFO: namespace : nslifetest-25-7468 api call to delete is complete Jan 11 19:15:30.643: INFO: namespace : nslifetest-67-2196 api call to delete is complete Jan 11 19:15:30.693: INFO: namespace : nslifetest-74-6053 api call to delete is complete Jan 11 19:15:30.744: INFO: namespace : nslifetest-35-2398 api call to delete is complete Jan 11 19:15:30.793: INFO: namespace : nslifetest-36-2113 api call to delete is complete Jan 11 19:15:30.843: INFO: namespace : nslifetest-75-1646 api call to delete is complete Jan 11 19:15:30.893: INFO: namespace : nslifetest-68-9001 api call to delete is complete Jan 11 19:15:30.943: INFO: namespace : nslifetest-77-7522 api call to delete is complete Jan 11 19:15:30.993: INFO: namespace : nslifetest-26-5750 api call to delete is complete Jan 11 19:15:31.043: INFO: namespace : nslifetest-37-8514 api call to delete is complete Jan 11 19:15:31.093: INFO: namespace : nslifetest-41-6637 api call to delete is complete Jan 11 19:15:31.143: INFO: namespace : nslifetest-69-6527 api call to delete is complete Jan 11 19:15:31.193: INFO: namespace : nslifetest-7-9591 api call to delete is complete Jan 11 19:15:31.243: INFO: namespace : nslifetest-27-6023 api call to delete is complete Jan 11 19:15:31.293: INFO: namespace : nslifetest-42-3891 api call to delete is complete Jan 11 19:15:31.343: INFO: namespace : nslifetest-38-1292 api call to delete is complete Jan 11 19:15:31.393: INFO: namespace : nslifetest-30-6900 api call to delete is complete Jan 11 19:15:31.443: INFO: namespace : nslifetest-31-1894 api call to delete is complete Jan 11 19:15:31.493: INFO: namespace : nslifetest-29-9446 api call to delete is complete Jan 11 19:15:31.543: INFO: namespace : nslifetest-78-2957 api call to delete is complete Jan 11 19:15:31.593: INFO: namespace : nslifetest-39-8843 api call to delete is complete Jan 11 19:15:31.643: INFO: namespace : nslifetest-3-4461 api call to delete is complete Jan 11 
19:15:31.693: INFO: namespace : nslifetest-4-4307 api call to delete is complete Jan 11 19:15:31.743: INFO: namespace : nslifetest-32-9270 api call to delete is complete Jan 11 19:15:31.793: INFO: namespace : nslifetest-28-4553 api call to delete is complete Jan 11 19:15:31.843: INFO: namespace : nslifetest-40-7202 api call to delete is complete Jan 11 19:15:31.893: INFO: namespace : nslifetest-33-9778 api call to delete is complete Jan 11 19:15:31.943: INFO: namespace : nslifetest-34-5299 api call to delete is complete STEP: Waiting for namespaces to vanish Jan 11 19:15:34.035: INFO: Remaining namespaces : 100 Jan 11 19:15:36.034: INFO: Remaining namespaces : 100 Jan 11 19:15:38.035: INFO: Remaining namespaces : 100 Jan 11 19:15:40.035: INFO: Remaining namespaces : 90 Jan 11 19:15:42.035: INFO: Remaining namespaces : 90 Jan 11 19:15:44.035: INFO: Remaining namespaces : 84 Jan 11 19:15:46.034: INFO: Remaining namespaces : 80 Jan 11 19:15:48.034: INFO: Remaining namespaces : 80 Jan 11 19:15:50.034: INFO: Remaining namespaces : 78 Jan 11 19:15:52.034: INFO: Remaining namespaces : 71 Jan 11 19:15:54.035: INFO: Remaining namespaces : 70 Jan 11 19:15:56.034: INFO: Remaining namespaces : 64 Jan 11 19:15:58.034: INFO: Remaining namespaces : 60 Jan 11 19:16:00.034: INFO: Remaining namespaces : 53 Jan 11 19:16:02.035: INFO: Remaining namespaces : 50 Jan 11 19:16:04.034: INFO: Remaining namespaces : 50 Jan 11 19:16:06.036: INFO: Remaining namespaces : 41 Jan 11 19:16:08.033: INFO: Remaining namespaces : 40 Jan 11 19:16:10.033: INFO: Remaining namespaces : 33 Jan 11 19:16:12.033: INFO: Remaining namespaces : 30 Jan 11 19:16:14.034: INFO: Remaining namespaces : 27 Jan 11 19:16:16.033: INFO: Remaining namespaces : 21 Jan 11 19:16:18.033: INFO: Remaining namespaces : 20 Jan 11 19:16:20.033: INFO: Remaining namespaces : 17 Jan 11 19:16:22.033: INFO: Remaining namespaces : 11 [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:16:24.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1267" for this suite. Jan 11 19:16:30.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:16:33.618: INFO: namespace namespaces-1267 deletion completed in 9.494622554s STEP: Destroying namespace "nslifetest-48-6490" for this suite. Jan 11 19:16:33.707: INFO: Namespace nslifetest-48-6490 was already deleted STEP: Destroying namespace "nslifetest-22-7930" for this suite. Jan 11 19:16:33.797: INFO: Namespace nslifetest-22-7930 was already deleted STEP: Destroying namespace "nslifetest-50-8125" for this suite. Jan 11 19:16:33.886: INFO: Namespace nslifetest-50-8125 was already deleted STEP: Destroying namespace "nslifetest-49-9111" for this suite. Jan 11 19:16:33.976: INFO: Namespace nslifetest-49-9111 was already deleted STEP: Destroying namespace "nslifetest-23-1261" for this suite. Jan 11 19:16:34.066: INFO: Namespace nslifetest-23-1261 was already deleted STEP: Destroying namespace "nslifetest-39-8843" for this suite. Jan 11 19:16:34.155: INFO: Namespace nslifetest-39-8843 was already deleted STEP: Destroying namespace "nslifetest-0-7127" for this suite. Jan 11 19:16:34.245: INFO: Namespace nslifetest-0-7127 was already deleted STEP: Destroying namespace "nslifetest-73-2389" for this suite. 
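The pattern this spec exercises — fire off the namespace delete calls, then poll until the objects actually disappear — can be reproduced outside the e2e framework with plain client-go. A minimal sketch, assuming client-go v0.18+ call signatures and an illustrative two-namespace list (the suite itself drives 100 nslifetest-* namespaces through its own helpers):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Illustrative names only; the spec above creates 100 such namespaces.
	namespaces := []string{"nslifetest-0-7127", "nslifetest-1-3303"}

	// "api call to delete is complete" only means the request was accepted;
	// each namespace lingers in Terminating until its contents are finalized.
	for _, ns := range namespaces {
		if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
			fmt.Printf("delete %s: %v\n", ns, err)
		}
	}

	// Mirror the "Remaining namespaces : N" countdown: poll until they vanish.
	_ = wait.PollImmediate(2*time.Second, 150*time.Second, func() (bool, error) {
		remaining := 0
		for _, ns := range namespaces {
			// Treat a successful Get as "still there"; a NotFound (or, in this
			// simplified sketch, any error) counts as gone.
			if _, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{}); err == nil {
				remaining++
			}
		}
		fmt.Printf("Remaining namespaces : %d\n", remaining)
		return remaining == 0, nil
	})
}
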
Jan 11 19:16:34.335: INFO: Namespace nslifetest-73-2389 was already deleted STEP: Destroying namespace "nslifetest-52-186" for this suite. Jan 11 19:16:34.424: INFO: Namespace nslifetest-52-186 was already deleted STEP: Destroying namespace "nslifetest-38-1292" for this suite. Jan 11 19:16:34.514: INFO: Namespace nslifetest-38-1292 was already deleted STEP: Destroying namespace "nslifetest-42-3891" for this suite. Jan 11 19:16:34.609: INFO: Namespace nslifetest-42-3891 was already deleted STEP: Destroying namespace "nslifetest-53-3915" for this suite. Jan 11 19:16:34.698: INFO: Namespace nslifetest-53-3915 was already deleted STEP: Destroying namespace "nslifetest-40-7202" for this suite. Jan 11 19:16:34.788: INFO: Namespace nslifetest-40-7202 was already deleted STEP: Destroying namespace "nslifetest-43-7835" for this suite. Jan 11 19:16:34.878: INFO: Namespace nslifetest-43-7835 was already deleted STEP: Destroying namespace "nslifetest-37-8514" for this suite. Jan 11 19:16:34.967: INFO: Namespace nslifetest-37-8514 was already deleted STEP: Destroying namespace "nslifetest-1-3303" for this suite. Jan 11 19:16:35.057: INFO: Namespace nslifetest-1-3303 was already deleted STEP: Destroying namespace "nslifetest-51-1913" for this suite. Jan 11 19:16:35.147: INFO: Namespace nslifetest-51-1913 was already deleted STEP: Destroying namespace "nslifetest-41-6637" for this suite. Jan 11 19:16:35.237: INFO: Namespace nslifetest-41-6637 was already deleted STEP: Destroying namespace "nslifetest-36-2113" for this suite. Jan 11 19:16:35.326: INFO: Namespace nslifetest-36-2113 was already deleted STEP: Destroying namespace "nslifetest-62-9235" for this suite. Jan 11 19:16:35.416: INFO: Namespace nslifetest-62-9235 was already deleted STEP: Destroying namespace "nslifetest-65-4126" for this suite. Jan 11 19:16:35.506: INFO: Namespace nslifetest-65-4126 was already deleted STEP: Destroying namespace "nslifetest-64-181" for this suite. Jan 11 19:16:35.596: INFO: Namespace nslifetest-64-181 was already deleted STEP: Destroying namespace "nslifetest-28-4553" for this suite. Jan 11 19:16:35.685: INFO: Namespace nslifetest-28-4553 was already deleted STEP: Destroying namespace "nslifetest-44-9180" for this suite. Jan 11 19:16:35.775: INFO: Namespace nslifetest-44-9180 was already deleted STEP: Destroying namespace "nslifetest-61-3695" for this suite. Jan 11 19:16:35.864: INFO: Namespace nslifetest-61-3695 was already deleted STEP: Destroying namespace "nslifetest-99-5529" for this suite. Jan 11 19:16:35.954: INFO: Namespace nslifetest-99-5529 was already deleted STEP: Destroying namespace "nslifetest-55-4244" for this suite. Jan 11 19:16:36.043: INFO: Namespace nslifetest-55-4244 was already deleted STEP: Destroying namespace "nslifetest-29-9446" for this suite. Jan 11 19:16:36.135: INFO: Namespace nslifetest-29-9446 was already deleted STEP: Destroying namespace "nslifetest-68-9001" for this suite. Jan 11 19:16:36.225: INFO: Namespace nslifetest-68-9001 was already deleted STEP: Destroying namespace "nslifetest-63-6874" for this suite. Jan 11 19:16:36.314: INFO: Namespace nslifetest-63-6874 was already deleted STEP: Destroying namespace "nslifetest-67-2196" for this suite. Jan 11 19:16:36.404: INFO: Namespace nslifetest-67-2196 was already deleted STEP: Destroying namespace "nslifetest-54-2974" for this suite. Jan 11 19:16:36.493: INFO: Namespace nslifetest-54-2974 was already deleted STEP: Destroying namespace "nslifetest-46-5758" for this suite. 
Jan 11 19:16:36.583: INFO: Namespace nslifetest-46-5758 was already deleted STEP: Destroying namespace "nslifetest-24-7607" for this suite. Jan 11 19:16:36.672: INFO: Namespace nslifetest-24-7607 was already deleted STEP: Destroying namespace "nslifetest-71-8685" for this suite. Jan 11 19:16:36.762: INFO: Namespace nslifetest-71-8685 was already deleted STEP: Destroying namespace "nslifetest-45-8878" for this suite. Jan 11 19:16:36.852: INFO: Namespace nslifetest-45-8878 was already deleted STEP: Destroying namespace "nslifetest-56-7692" for this suite. Jan 11 19:16:36.942: INFO: Namespace nslifetest-56-7692 was already deleted STEP: Destroying namespace "nslifetest-25-7468" for this suite. Jan 11 19:16:37.031: INFO: Namespace nslifetest-25-7468 was already deleted STEP: Destroying namespace "nslifetest-69-6527" for this suite. Jan 11 19:16:37.121: INFO: Namespace nslifetest-69-6527 was already deleted STEP: Destroying namespace "nslifetest-32-9270" for this suite. Jan 11 19:16:37.210: INFO: Namespace nslifetest-32-9270 was already deleted STEP: Destroying namespace "nslifetest-59-2732" for this suite. Jan 11 19:16:37.300: INFO: Namespace nslifetest-59-2732 was already deleted STEP: Destroying namespace "nslifetest-27-6023" for this suite. Jan 11 19:16:37.390: INFO: Namespace nslifetest-27-6023 was already deleted STEP: Destroying namespace "nslifetest-66-3969" for this suite. Jan 11 19:16:37.480: INFO: Namespace nslifetest-66-3969 was already deleted STEP: Destroying namespace "nslifetest-2-2727" for this suite. Jan 11 19:16:37.570: INFO: Namespace nslifetest-2-2727 was already deleted STEP: Destroying namespace "nslifetest-58-5834" for this suite. Jan 11 19:16:37.660: INFO: Namespace nslifetest-58-5834 was already deleted STEP: Destroying namespace "nslifetest-60-1310" for this suite. Jan 11 19:16:37.749: INFO: Namespace nslifetest-60-1310 was already deleted STEP: Destroying namespace "nslifetest-72-7516" for this suite. Jan 11 19:16:37.839: INFO: Namespace nslifetest-72-7516 was already deleted STEP: Destroying namespace "nslifetest-47-2561" for this suite. Jan 11 19:16:37.929: INFO: Namespace nslifetest-47-2561 was already deleted STEP: Destroying namespace "nslifetest-26-5750" for this suite. Jan 11 19:16:38.018: INFO: Namespace nslifetest-26-5750 was already deleted STEP: Destroying namespace "nslifetest-31-1894" for this suite. Jan 11 19:16:38.108: INFO: Namespace nslifetest-31-1894 was already deleted STEP: Destroying namespace "nslifetest-70-6539" for this suite. Jan 11 19:16:38.198: INFO: Namespace nslifetest-70-6539 was already deleted STEP: Destroying namespace "nslifetest-30-6900" for this suite. Jan 11 19:16:38.287: INFO: Namespace nslifetest-30-6900 was already deleted STEP: Destroying namespace "nslifetest-12-4061" for this suite. Jan 11 19:16:38.377: INFO: Namespace nslifetest-12-4061 was already deleted STEP: Destroying namespace "nslifetest-57-5292" for this suite. Jan 11 19:16:38.466: INFO: Namespace nslifetest-57-5292 was already deleted STEP: Destroying namespace "nslifetest-33-9778" for this suite. Jan 11 19:16:38.556: INFO: Namespace nslifetest-33-9778 was already deleted STEP: Destroying namespace "nslifetest-3-4461" for this suite. Jan 11 19:16:38.646: INFO: Namespace nslifetest-3-4461 was already deleted STEP: Destroying namespace "nslifetest-6-5243" for this suite. Jan 11 19:16:38.735: INFO: Namespace nslifetest-6-5243 was already deleted STEP: Destroying namespace "nslifetest-4-4307" for this suite. 
Jan 11 19:16:38.825: INFO: Namespace nslifetest-4-4307 was already deleted STEP: Destroying namespace "nslifetest-5-8510" for this suite. Jan 11 19:16:38.914: INFO: Namespace nslifetest-5-8510 was already deleted STEP: Destroying namespace "nslifetest-8-3731" for this suite. Jan 11 19:16:39.004: INFO: Namespace nslifetest-8-3731 was already deleted STEP: Destroying namespace "nslifetest-7-9591" for this suite. Jan 11 19:16:39.093: INFO: Namespace nslifetest-7-9591 was already deleted STEP: Destroying namespace "nslifetest-86-3288" for this suite. Jan 11 19:16:39.183: INFO: Namespace nslifetest-86-3288 was already deleted STEP: Destroying namespace "nslifetest-11-316" for this suite. Jan 11 19:16:39.272: INFO: Namespace nslifetest-11-316 was already deleted STEP: Destroying namespace "nslifetest-10-7274" for this suite. Jan 11 19:16:39.362: INFO: Namespace nslifetest-10-7274 was already deleted STEP: Destroying namespace "nslifetest-9-2463" for this suite. Jan 11 19:16:39.451: INFO: Namespace nslifetest-9-2463 was already deleted STEP: Destroying namespace "nslifetest-74-6053" for this suite. Jan 11 19:16:39.541: INFO: Namespace nslifetest-74-6053 was already deleted STEP: Destroying namespace "nslifetest-75-1646" for this suite. Jan 11 19:16:39.630: INFO: Namespace nslifetest-75-1646 was already deleted STEP: Destroying namespace "nslifetest-76-4797" for this suite. Jan 11 19:16:39.720: INFO: Namespace nslifetest-76-4797 was already deleted STEP: Destroying namespace "nslifetest-34-5299" for this suite. Jan 11 19:16:39.809: INFO: Namespace nslifetest-34-5299 was already deleted STEP: Destroying namespace "nslifetest-79-6098" for this suite. Jan 11 19:16:39.898: INFO: Namespace nslifetest-79-6098 was already deleted STEP: Destroying namespace "nslifetest-77-7522" for this suite. Jan 11 19:16:39.987: INFO: Namespace nslifetest-77-7522 was already deleted STEP: Destroying namespace "nslifetest-78-2957" for this suite. Jan 11 19:16:40.077: INFO: Namespace nslifetest-78-2957 was already deleted STEP: Destroying namespace "nslifetest-80-2989" for this suite. Jan 11 19:16:40.167: INFO: Namespace nslifetest-80-2989 was already deleted STEP: Destroying namespace "nslifetest-81-1529" for this suite. Jan 11 19:16:40.257: INFO: Namespace nslifetest-81-1529 was already deleted STEP: Destroying namespace "nslifetest-83-5890" for this suite. Jan 11 19:16:40.346: INFO: Namespace nslifetest-83-5890 was already deleted STEP: Destroying namespace "nslifetest-84-3249" for this suite. Jan 11 19:16:40.436: INFO: Namespace nslifetest-84-3249 was already deleted STEP: Destroying namespace "nslifetest-85-1503" for this suite. Jan 11 19:16:40.526: INFO: Namespace nslifetest-85-1503 was already deleted STEP: Destroying namespace "nslifetest-17-7042" for this suite. Jan 11 19:16:40.615: INFO: Namespace nslifetest-17-7042 was already deleted STEP: Destroying namespace "nslifetest-82-2100" for this suite. Jan 11 19:16:40.705: INFO: Namespace nslifetest-82-2100 was already deleted STEP: Destroying namespace "nslifetest-13-3153" for this suite. Jan 11 19:16:40.795: INFO: Namespace nslifetest-13-3153 was already deleted STEP: Destroying namespace "nslifetest-18-8468" for this suite. Jan 11 19:16:40.885: INFO: Namespace nslifetest-18-8468 was already deleted STEP: Destroying namespace "nslifetest-19-7492" for this suite. Jan 11 19:16:40.975: INFO: Namespace nslifetest-19-7492 was already deleted STEP: Destroying namespace "nslifetest-14-6490" for this suite. 
Jan 11 19:16:41.064: INFO: Namespace nslifetest-14-6490 was already deleted STEP: Destroying namespace "nslifetest-20-305" for this suite. Jan 11 19:16:41.154: INFO: Namespace nslifetest-20-305 was already deleted STEP: Destroying namespace "nslifetest-16-2952" for this suite. Jan 11 19:16:41.243: INFO: Namespace nslifetest-16-2952 was already deleted STEP: Destroying namespace "nslifetest-15-8077" for this suite. Jan 11 19:16:41.333: INFO: Namespace nslifetest-15-8077 was already deleted STEP: Destroying namespace "nslifetest-87-1036" for this suite. Jan 11 19:16:41.422: INFO: Namespace nslifetest-87-1036 was already deleted STEP: Destroying namespace "nslifetest-92-742" for this suite. Jan 11 19:16:41.512: INFO: Namespace nslifetest-92-742 was already deleted STEP: Destroying namespace "nslifetest-90-4470" for this suite. Jan 11 19:16:41.602: INFO: Namespace nslifetest-90-4470 was already deleted STEP: Destroying namespace "nslifetest-88-7769" for this suite. Jan 11 19:16:41.691: INFO: Namespace nslifetest-88-7769 was already deleted STEP: Destroying namespace "nslifetest-89-4722" for this suite. Jan 11 19:16:41.781: INFO: Namespace nslifetest-89-4722 was already deleted STEP: Destroying namespace "nslifetest-91-3724" for this suite. Jan 11 19:16:41.870: INFO: Namespace nslifetest-91-3724 was already deleted STEP: Destroying namespace "nslifetest-21-1082" for this suite. Jan 11 19:16:41.960: INFO: Namespace nslifetest-21-1082 was already deleted STEP: Destroying namespace "nslifetest-95-4415" for this suite. Jan 11 19:16:42.050: INFO: Namespace nslifetest-95-4415 was already deleted STEP: Destroying namespace "nslifetest-97-9823" for this suite. Jan 11 19:16:42.139: INFO: Namespace nslifetest-97-9823 was already deleted STEP: Destroying namespace "nslifetest-94-979" for this suite. Jan 11 19:16:42.229: INFO: Namespace nslifetest-94-979 was already deleted STEP: Destroying namespace "nslifetest-93-6473" for this suite. Jan 11 19:16:42.319: INFO: Namespace nslifetest-93-6473 was already deleted STEP: Destroying namespace "nslifetest-96-8756" for this suite. Jan 11 19:16:42.408: INFO: Namespace nslifetest-96-8756 was already deleted STEP: Destroying namespace "nslifetest-35-2398" for this suite. Jan 11 19:16:42.497: INFO: Namespace nslifetest-35-2398 was already deleted STEP: Destroying namespace "nslifetest-98-1189" for this suite. 
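For context on the long run of "Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ..." steps at the top of this spec: with PodSecurityPolicy admission enabled, a pod is only admitted if its service account is RBAC-authorized to "use" some PSP, so the framework grants each test namespace's default service account access to the privileged test policy. A rough sketch of the objects involved — names and exact shape are assumptions, not the framework's literal code:

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ClusterRole allowing the "use" verb on the privileged test PSP.
	role := rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-privileged-psp"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups:     []string{"policy"},
			Resources:     []string{"podsecuritypolicies"},
			ResourceNames: []string{"e2e-test-privileged-psp"},
			Verbs:         []string{"use"},
		}},
	}

	// Per-namespace RoleBinding from the default service account to that role;
	// something like this is what each "Binding the ... PodSecurityPolicy" step creates.
	binding := rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "e2e-test-privileged-psp",
			Namespace: "nslifetest-0-7127", // illustrative namespace
		},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     role.Name,
		},
		Subjects: []rbacv1.Subject{{
			Kind:      rbacv1.ServiceAccountKind,
			Name:      "default",
			Namespace: "nslifetest-0-7127",
		}},
	}
	fmt.Println(role.Name, binding.Name)
}
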
Jan 11 19:16:42.587: INFO: Namespace nslifetest-98-1189 was already deleted •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:209 [BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:16:42.589: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename taint-single-pod STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-2451 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:164 Jan 11 19:16:43.250: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 19:17:43.795: INFO: Waiting for terminating namespaces to be deleted... [It] doesn't evict pod with tolerations from tainted nodes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:209 Jan 11 19:17:43.885: INFO: Starting informer... STEP: Starting pod... Jan 11 19:17:44.067: INFO: Pod is running on ip-10-250-27-25.ec2.internal. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod to be deleted Jan 11 19:18:49.340: INFO: Pod wasn't evicted. Test successful STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:18:49.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-single-pod-2451" for this suite. 
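The reason the pod above survives the NoExecute taint is a matching toleration in its spec. A minimal sketch of such a pod using the taint key/value from the log; the pod name and image are illustrative, not the test's own:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "taint-toleration-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // any long-running image would do
			}},
			Tolerations: []corev1.Toleration{{
				Key:      "kubernetes.io/e2e-evict-taint-key",
				Operator: corev1.TolerationOpEqual,
				Value:    "evictTaintVal",
				Effect:   corev1.TaintEffectNoExecute,
				// No TolerationSeconds set, so the pod tolerates the taint
				// indefinitely and the NoExecute taint manager never evicts it.
			}},
		},
	}
	fmt.Println(pod.Spec.Tolerations[0].Key)
}
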
Jan 11 19:19:17.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:19:21.204: INFO: namespace taint-single-pod-2451 deletion completed in 31.50009772s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:19:21.205: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7926 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should update the taint on a node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: adding the taint kubernetes.io/e2e-taint-key-001-2ba72e3e-b668-45c8-a63d-6bd5da36ef26=testing-taint-value:NoSchedule to a node Jan 11 19:19:24.306: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config taint nodes ip-10-250-27-25.ec2.internal kubernetes.io/e2e-taint-key-001-2ba72e3e-b668-45c8-a63d-6bd5da36ef26=testing-taint-value:NoSchedule' Jan 11 19:19:25.435: INFO: stderr: "" Jan 11 19:19:25.436: INFO: stdout: "node/ip-10-250-27-25.ec2.internal tainted\n" Jan 11 19:19:25.436: INFO: stdout: "node/ip-10-250-27-25.ec2.internal tainted\n" STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-001-2ba72e3e-b668-45c8-a63d-6bd5da36ef26=testing-taint-value:NoSchedule Jan 11 19:19:25.436: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe node ip-10-250-27-25.ec2.internal' Jan 11 19:19:26.158: INFO: stderr: "" Jan 11 19:19:26.158: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: 
kubernetes.io/e2e-taint-key-001-2ba72e3e-b668-45c8-a63d-6bd5da36ef26=testing-taint-value:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n KernelDeadlock False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n FrequentDockerRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h23m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h23m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h23m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h23m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n 
Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" Jan 11 19:19:26.159: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: kubernetes.io/e2e-taint-key-001-2ba72e3e-b668-45c8-a63d-6bd5da36ef26=testing-taint-value:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n KernelDeadlock False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n FrequentDockerRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot 
ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h23m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h23m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h23m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h23m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" STEP: removing the taint kubernetes.io/e2e-taint-key-001-2ba72e3e-b668-45c8-a63d-6bd5da36ef26=testing-taint-value:NoSchedule of a node Jan 11 19:19:26.159: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config taint nodes ip-10-250-27-25.ec2.internal kubernetes.io/e2e-taint-key-001-2ba72e3e-b668-45c8-a63d-6bd5da36ef26:NoSchedule-' Jan 11 19:19:26.798: INFO: stderr: "" Jan 11 19:19:26.798: INFO: stdout: "node/ip-10-250-27-25.ec2.internal untainted\n" Jan 11 19:19:26.798: INFO: stdout: "node/ip-10-250-27-25.ec2.internal untainted\n" STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-001-2ba72e3e-b668-45c8-a63d-6bd5da36ef26 Jan 11 19:19:26.798: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe node ip-10-250-27-25.ec2.internal' Jan 11 19:19:27.608: INFO: stderr: "" Jan 11 19:19:27.608: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n KernelDeadlock False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 
15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n FrequentDockerRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h23m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h23m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h23m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h23m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" Jan 11 19:19:27.608: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n 
worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n KernelDeadlock False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n FrequentDockerRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n FrequentContainerdRestart False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 19:19:07 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 19:19:21 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (4 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 3h23m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 3h23m\n 
kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 3h23m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 3h23m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-001-2ba72e3e-b668-45c8-a63d-6bd5da36ef26=testing-taint-value:NoSchedule [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:19:27.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7926" for this suite. Jan 11 19:19:34.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:19:37.377: INFO: namespace kubectl-7926 deletion completed in 9.49736551s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:500 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:19:37.378: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-pred STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-7636 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 11 19:19:38.019: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 19:19:38.291: INFO: Waiting for terminating namespaces to be deleted... 
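The add/verify/remove cycle that the kubectl taint spec above walked through can also be driven directly against the API. A minimal client-go sketch, assuming v0.18+ signatures; a production version would patch or retry on update conflicts, and the taint key here drops the random UUID suffix the test appends:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	nodeName := "ip-10-250-27-25.ec2.internal"
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-001", // illustrative key, without the test's UUID suffix
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// Add the taint (equivalent of `kubectl taint nodes <node> key=value:NoSchedule`).
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node.Spec.Taints = append(node.Spec.Taints, taint)
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Remove it again (equivalent of `kubectl taint nodes <node> key:NoSchedule-`).
	node, err = cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	var kept []corev1.Taint
	for _, t := range node.Spec.Taints {
		if !(t.Key == taint.Key && t.Effect == taint.Effect) {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("taint added and removed on", nodeName)
}
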
Jan 11 19:19:38.380: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test Jan 11 19:19:38.581: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.581: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:19:38.581: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.581: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:19:38.581: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.581: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:19:38.581: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.581: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:19:38.581: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test Jan 11 19:19:38.697: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:19:38.697: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:19:38.697: INFO: node-exporter-gp57h from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:19:38.697: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:19:38.697: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:19:38.697: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container coredns ready: true, restart count 0 Jan 11 19:19:38.697: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:19:38.697: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:19:38.697: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:19:38.697: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:19:38.697: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 
11 19:19:38.697: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:19:38.697: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 19:19:38.697: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:19:38.697: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:19:38.697: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:19:38.697: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:19:38.697: INFO: Container coredns ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:500 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-16ea621d-6be2-453e-933a-a081037198ae=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-709fdd09-9ba6-413e-b69f-04c9051078ff testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-709fdd09-9ba6-413e-b69f-04c9051078ff off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-709fdd09-9ba6-413e-b69f-04c9051078ff STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-16ea621d-6be2-453e-933a-a081037198ae=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:19:44.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7636" for this suite. 
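The relaunch step in this predicate spec succeeds because the second pod both targets the freshly labelled node and tolerates the freshly applied NoSchedule taint. A sketch of such a pod spec using the label and taint values from the log; the pod name and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"}, // illustrative name
		Spec: corev1.PodSpec{
			// Pin the pod to the node that just received the random label.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-label-key-709fdd09-9ba6-413e-b69f-04c9051078ff": "testing-label-value",
			},
			// Tolerate the random NoSchedule taint so the scheduler admits the pod there.
			Tolerations: []corev1.Toleration{{
				Key:      "kubernetes.io/e2e-taint-key-16ea621d-6be2-453e-933a-a081037198ae",
				Operator: corev1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	fmt.Println(pod.Spec.NodeSelector)
}
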
Jan 11 19:19:54.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:19:58.107: INFO: namespace sched-pred-7636 deletion completed in 13.498003445s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/logging/generic_soak.go:54 [BeforeEach] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:19:58.109: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename logging-soak STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in logging-soak-1391 STEP: Waiting for a default service account to be provisioned in namespace [It] should survive logging 1KB every 1s seconds, for a duration of 2m0s /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/logging/generic_soak.go:54 STEP: scaling up to 1 pods per node Jan 11 19:19:58.752: INFO: Starting logging soak, wave = wave0 Jan 11 19:19:58.933: INFO: 0/2 : Creating container with label app=logging-soakwave0-pod Jan 11 19:19:59.027: INFO: 1/2 : Creating container with label app=logging-soakwave0-pod Jan 11 19:20:00.208: INFO: Selector matched 2 pods for map[app:logging-soakwave0-pod] Jan 11 19:20:00.208: INFO: Found 0 / 2 Jan 11 19:20:01.208: INFO: Selector matched 2 pods for map[app:logging-soakwave0-pod] Jan 11 19:20:01.208: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs logging-soakwave0-pod-0 logging-soak --namespace=logging-soak-1391' Jan 11 19:20:02.099: INFO: stderr: "" Jan 11 19:20:02.099: INFO: stdout: 
"logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123\nlogs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123\nlogs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123\n" Jan 11 19:20:02.099: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs logging-soakwave0-pod-1 logging-soak --namespace=logging-soak-1391' Jan 11 19:20:02.628: INFO: stderr: "" Jan 11 19:20:02.628: INFO: stdout: 
"logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123\nlogs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123\nlogs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123logs-123\n" Jan 11 19:20:02.628: INFO: Found 2 / 2 Jan 11 19:20:02.628: INFO: WaitFor completed with timeout 2m0s. 
Pods found = 2 out of 2 Jan 11 19:20:02.628: INFO: Completed logging soak, wave 0 Jan 11 19:20:03.752: INFO: Waiting on all 1 logging soak waves to complete [AfterEach] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:20:03.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "logging-soak-1391" for this suite. Jan 11 19:20:50.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:20:53.348: INFO: namespace logging-soak-1391 deletion completed in 49.505338305s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:20:53.350: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename namespaces STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-3049 STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a test namespace STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-4290 STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-7088 STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:21:08.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3049" for this suite. Jan 11 19:21:15.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:21:18.297: INFO: namespace namespaces-3049 deletion completed in 9.491965697s STEP: Destroying namespace "nsdeletetest-4290" for this suite. Jan 11 19:21:18.386: INFO: Namespace nsdeletetest-4290 was already deleted STEP: Destroying namespace "nsdeletetest-7088" for this suite. 
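The Namespaces test above relies on namespace deletion cascading to every pod inside it, then waits for the namespace to disappear. A library-style sketch of that pattern with client-go (package and function names are hypothetical; callers supply an existing clientset):

package nsutil

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// DeleteNamespaceAndWait deletes ns and polls until the API server reports it
// (and therefore every pod in it) as gone, mirroring the test's
// "Waiting for the namespace to be removed" step.
func DeleteNamespaceAndWait(ctx context.Context, cs kubernetes.Interface, ns string) error {
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace and its pods are fully removed
		}
		return false, nil // still terminating (or transient error); keep polling
	})
}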
Jan 11 19:21:24.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:21:27.880: INFO: namespace nsdeletetest-7088 deletion completed in 9.493790193s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:220 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:21:27.882: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-priority STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-priority-741 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:76 Jan 11 19:21:28.521: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 19:22:29.157: INFO: Waiting for terminating namespaces to be deleted... Jan 11 19:22:29.247: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 11 19:22:29.521: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 11 19:22:29.521: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready. 
[It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:220 Jan 11 19:22:29.521: INFO: ComputeCPUMemFraction for node: ip-10-250-27-25.ec2.internal Jan 11 19:22:29.614: INFO: Pod for on the node: calico-node-m8r2d, Cpu: 100, Mem: 104857600 Jan 11 19:22:29.614: INFO: Pod for on the node: kube-proxy-rq4kf, Cpu: 20, Mem: 67108864 Jan 11 19:22:29.614: INFO: Pod for on the node: node-exporter-l6q84, Cpu: 5, Mem: 10485760 Jan 11 19:22:29.614: INFO: Pod for on the node: node-problem-detector-9z5sq, Cpu: 20, Mem: 20971520 Jan 11 19:22:29.614: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedCPUResource: 245, cpuAllocatableMil: 1920, cpuFraction: 0.12760416666666666 Jan 11 19:22:29.614: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedMemResource: 308281344, memAllocatableVal: 6577812679, memFraction: 0.04686684754404816 Jan 11 19:22:29.614: INFO: ComputeCPUMemFraction for node: ip-10-250-7-77.ec2.internal Jan 11 19:22:29.707: INFO: Pod for on the node: addons-kubernetes-dashboard-78954cc66b-69k8m, Cpu: 50, Mem: 52428800 Jan 11 19:22:29.707: INFO: Pod for on the node: addons-nginx-ingress-controller-7c75bb76db-cd9r9, Cpu: 100, Mem: 104857600 Jan 11 19:22:29.707: INFO: Pod for on the node: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d, Cpu: 100, Mem: 209715200 Jan 11 19:22:29.707: INFO: Pod for on the node: blackbox-exporter-54bb5f55cc-452fk, Cpu: 5, Mem: 5242880 Jan 11 19:22:29.707: INFO: Pod for on the node: calico-kube-controllers-79bcd784b6-c46r9, Cpu: 100, Mem: 209715200 Jan 11 19:22:29.707: INFO: Pod for on the node: calico-node-dl8nk, Cpu: 100, Mem: 104857600 Jan 11 19:22:29.707: INFO: Pod for on the node: calico-typha-deploy-9f6b455c4-vdrzx, Cpu: 100, Mem: 209715200 Jan 11 19:22:29.707: INFO: Pod for on the node: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp, Cpu: 10, Mem: 209715200 Jan 11 19:22:29.707: INFO: Pod for on the node: calico-typha-vertical-autoscaler-5769b74b58-r8t6r, Cpu: 100, Mem: 209715200 Jan 11 19:22:29.707: INFO: Pod for on the node: coredns-59c969ffb8-57m7v, Cpu: 50, Mem: 15728640 Jan 11 19:22:29.707: INFO: Pod for on the node: coredns-59c969ffb8-fqq79, Cpu: 50, Mem: 15728640 Jan 11 19:22:29.708: INFO: Pod for on the node: kube-proxy-nn5px, Cpu: 20, Mem: 67108864 Jan 11 19:22:29.708: INFO: Pod for on the node: metrics-server-7c797fd994-4x7v9, Cpu: 20, Mem: 104857600 Jan 11 19:22:29.708: INFO: Pod for on the node: node-exporter-gp57h, Cpu: 5, Mem: 10485760 Jan 11 19:22:29.708: INFO: Pod for on the node: node-problem-detector-jx2p4, Cpu: 20, Mem: 20971520 Jan 11 19:22:29.708: INFO: Pod for on the node: vpn-shoot-5d76665b65-6rkww, Cpu: 100, Mem: 104857600 Jan 11 19:22:29.708: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedCPUResource: 630, cpuAllocatableMil: 1920, cpuFraction: 0.328125 Jan 11 19:22:29.708: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedMemResource: 921698304, memAllocatableVal: 6577812679, memFraction: 0.14012230949393992 Jan 11 19:22:29.800: INFO: Waiting for running... Jan 11 19:22:34.992: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jan 11 19:22:40.093: INFO: ComputeCPUMemFraction for node: ip-10-250-27-25.ec2.internal Jan 11 19:22:40.187: INFO: Pod for on the node: calico-node-m8r2d, Cpu: 100, Mem: 104857600 Jan 11 19:22:40.187: INFO: Pod for on the node: kube-proxy-rq4kf, Cpu: 20, Mem: 67108864 Jan 11 19:22:40.187: INFO: Pod for on the node: node-exporter-l6q84, Cpu: 5, Mem: 10485760 Jan 11 19:22:40.187: INFO: Pod for on the node: node-problem-detector-9z5sq, Cpu: 20, Mem: 20971520 Jan 11 19:22:40.187: INFO: Pod for on the node: 754cea51-cd7b-4b59-9d44-4b48d72d114b-0, Cpu: 715, Mem: 2980624995 Jan 11 19:22:40.187: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedCPUResource: 960, cpuAllocatableMil: 1920, cpuFraction: 0.5 Jan 11 19:22:40.187: INFO: Node: ip-10-250-27-25.ec2.internal, totalRequestedMemResource: 3288906339, memAllocatableVal: 6577812679, memFraction: 0.4999999999239869 STEP: Compute Cpu, Mem Fraction after create balanced pods. Jan 11 19:22:40.187: INFO: ComputeCPUMemFraction for node: ip-10-250-7-77.ec2.internal Jan 11 19:22:40.280: INFO: Pod for on the node: addons-kubernetes-dashboard-78954cc66b-69k8m, Cpu: 50, Mem: 52428800 Jan 11 19:22:40.280: INFO: Pod for on the node: addons-nginx-ingress-controller-7c75bb76db-cd9r9, Cpu: 100, Mem: 104857600 Jan 11 19:22:40.280: INFO: Pod for on the node: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d, Cpu: 100, Mem: 209715200 Jan 11 19:22:40.280: INFO: Pod for on the node: blackbox-exporter-54bb5f55cc-452fk, Cpu: 5, Mem: 5242880 Jan 11 19:22:40.280: INFO: Pod for on the node: calico-kube-controllers-79bcd784b6-c46r9, Cpu: 100, Mem: 209715200 Jan 11 19:22:40.280: INFO: Pod for on the node: calico-node-dl8nk, Cpu: 100, Mem: 104857600 Jan 11 19:22:40.280: INFO: Pod for on the node: calico-typha-deploy-9f6b455c4-vdrzx, Cpu: 100, Mem: 209715200 Jan 11 19:22:40.280: INFO: Pod for on the node: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp, Cpu: 10, Mem: 209715200 Jan 11 19:22:40.280: INFO: Pod for on the node: calico-typha-vertical-autoscaler-5769b74b58-r8t6r, Cpu: 100, Mem: 209715200 Jan 11 19:22:40.280: INFO: Pod for on the node: coredns-59c969ffb8-57m7v, Cpu: 50, Mem: 15728640 Jan 11 19:22:40.280: INFO: Pod for on the node: coredns-59c969ffb8-fqq79, Cpu: 50, Mem: 15728640 Jan 11 19:22:40.280: INFO: Pod for on the node: kube-proxy-nn5px, Cpu: 20, Mem: 67108864 Jan 11 19:22:40.280: INFO: Pod for on the node: metrics-server-7c797fd994-4x7v9, Cpu: 20, Mem: 104857600 Jan 11 19:22:40.280: INFO: Pod for on the node: node-exporter-gp57h, Cpu: 5, Mem: 10485760 Jan 11 19:22:40.280: INFO: Pod for on the node: node-problem-detector-jx2p4, Cpu: 20, Mem: 20971520 Jan 11 19:22:40.280: INFO: Pod for on the node: vpn-shoot-5d76665b65-6rkww, Cpu: 100, Mem: 104857600 Jan 11 19:22:40.280: INFO: Pod for on the node: 22cbc05c-3bfb-4768-9eef-b2f836cae1b8-0, Cpu: 330, Mem: 2367208035 Jan 11 19:22:40.280: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedCPUResource: 960, cpuAllocatableMil: 1920, cpuFraction: 0.5 Jan 11 19:22:40.280: INFO: Node: ip-10-250-7-77.ec2.internal, totalRequestedMemResource: 3288906339, memAllocatableVal: 6577812679, memFraction: 0.4999999999239869 STEP: Trying to apply 10 (tolerable) taints on the first node. 
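The balanced-pod sizes logged above (715 mCPU / 2980624995 bytes on ip-10-250-27-25 and 330 mCPU / 2367208035 bytes on ip-10-250-7-77) follow directly from the fraction computation: each node is topped up until requested/allocatable reaches the 0.5 target. A small arithmetic sketch with the allocatable and requested values copied from the log:

package main

import "fmt"

// topUp returns the extra request needed to bring requested/allocatable up to ratio.
func topUp(allocatable, requested int64, ratio float64) int64 {
	return int64(ratio*float64(allocatable)) - requested
}

func main() {
	const (
		cpuAllocatableMilli = 1920       // per node, from the log
		memAllocatable      = 6577812679 // bytes per node, from the log
	)

	// ip-10-250-27-25: 245 mCPU / 308281344 B already requested by kube-system pods.
	fmt.Println(topUp(cpuAllocatableMilli, 245, 0.5))  // 715 mCPU
	fmt.Println(topUp(memAllocatable, 308281344, 0.5)) // 2980624995 B

	// ip-10-250-7-77: 630 mCPU / 921698304 B already requested.
	fmt.Println(topUp(cpuAllocatableMilli, 630, 0.5))  // 330 mCPU
	fmt.Println(topUp(memAllocatable, 921698304, 0.5)) // 2367208035 B

	// Resulting cpuFraction on either node: 960/1920 = 0.5, matching the log.
	fmt.Println(float64(245+715) / float64(cpuAllocatableMilli)) // 0.5
}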
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-abc04f7b-bdb9-4d11-835a-2de2ad72acf7=testing-taint-value-49112b3a-bee7-4d2f-a524-95aa35f0058e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-eb5c547e-b791-4550-9769-fcb616019166=testing-taint-value-af891fbc-0b5d-4e4b-8d38-984304ae9544:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-92d35d9c-2027-4d7a-9e45-a7984e3f9600=testing-taint-value-d569a112-055a-4020-888e-f042664d2f00:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-4f0008e0-99b2-45a2-b60b-a8528a47fc48=testing-taint-value-c92e5620-2442-490e-b805-5ee10b97435c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-047b065d-4c5e-4846-b4ee-e7f8e409fb5d=testing-taint-value-f694fb40-d326-42b4-9f11-febb590dc81e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-89358202-a9cf-4fed-beda-bbc957b6f26d=testing-taint-value-37204dcb-09ab-4d71-8269-5b8c6d0123b6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f18ae806-8005-4a2b-8355-7d58dceddd56=testing-taint-value-921a5717-3e9f-40c1-84d2-0d59004bb316:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-00dcaf63-b58e-4410-b85f-6fa964bf128a=testing-taint-value-8965bb40-53b8-47cc-9350-ecf350311f00:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-42079c3b-348c-4b40-a9ef-173ad3ad07d9=testing-taint-value-725861a3-145b-4feb-8177-ab9f8d836422:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-251dd388-6eba-460d-aa93-dc5f12ff9a2a=testing-taint-value-703e8496-669f-4c83-ad58-1cb4adfba855:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e7dde069-3581-4dc4-92c1-7ec28e15eab9=testing-taint-value-0dc30bed-2946-4d01-ac0e-48fb6dbe698b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f6f32bfb-f5a7-4310-adf5-02c55ded9363=testing-taint-value-992784ff-31bc-400b-b1f1-e18c2b41555b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-87be651a-7649-4b53-89f9-97d21d2b433e=testing-taint-value-44864ca2-ce9f-4e0f-b9a5-e479e01801d2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-fce2ccc0-cf82-43e8-a9bf-4a5711cc5217=testing-taint-value-0b85d32e-c456-4c1b-b144-94b375d4ee71:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c9b71790-5210-406e-a134-8beadf9c8888=testing-taint-value-54c6ca23-e094-47d8-81f3-d65074c6d9eb:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-171bc6f2-f01c-4177-9283-654b59185b62=testing-taint-value-39f692be-a523-4564-ae05-9074cf9cc7e0:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-889fdd49-57b1-4524-8e35-594ece95b41d=testing-taint-value-e9209046-90eb-47df-b822-98556d7ee20b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-45952fe1-572a-4133-a0bb-fe3cb70683bc=testing-taint-value-044c1d42-97ee-4e7f-a560-b966df2c2955:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-05a9332a-80a9-470d-aa62-461c4fb7cf7d=testing-taint-value-bbae8cb2-51b3-4fa7-98c5-a0d73a19c267:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-taint-key-8de199ef-2845-481f-b1a7-40d307c2b5d3=testing-taint-value-ac7be345-15a5-4dec-8fc5-c0f1ea9c39f9:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8de199ef-2845-481f-b1a7-40d307c2b5d3=testing-taint-value-ac7be345-15a5-4dec-8fc5-c0f1ea9c39f9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-05a9332a-80a9-470d-aa62-461c4fb7cf7d=testing-taint-value-bbae8cb2-51b3-4fa7-98c5-a0d73a19c267:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-45952fe1-572a-4133-a0bb-fe3cb70683bc=testing-taint-value-044c1d42-97ee-4e7f-a560-b966df2c2955:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-889fdd49-57b1-4524-8e35-594ece95b41d=testing-taint-value-e9209046-90eb-47df-b822-98556d7ee20b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-171bc6f2-f01c-4177-9283-654b59185b62=testing-taint-value-39f692be-a523-4564-ae05-9074cf9cc7e0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c9b71790-5210-406e-a134-8beadf9c8888=testing-taint-value-54c6ca23-e094-47d8-81f3-d65074c6d9eb:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-fce2ccc0-cf82-43e8-a9bf-4a5711cc5217=testing-taint-value-0b85d32e-c456-4c1b-b144-94b375d4ee71:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-87be651a-7649-4b53-89f9-97d21d2b433e=testing-taint-value-44864ca2-ce9f-4e0f-b9a5-e479e01801d2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f6f32bfb-f5a7-4310-adf5-02c55ded9363=testing-taint-value-992784ff-31bc-400b-b1f1-e18c2b41555b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e7dde069-3581-4dc4-92c1-7ec28e15eab9=testing-taint-value-0dc30bed-2946-4d01-ac0e-48fb6dbe698b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-251dd388-6eba-460d-aa93-dc5f12ff9a2a=testing-taint-value-703e8496-669f-4c83-ad58-1cb4adfba855:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-42079c3b-348c-4b40-a9ef-173ad3ad07d9=testing-taint-value-725861a3-145b-4feb-8177-ab9f8d836422:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-00dcaf63-b58e-4410-b85f-6fa964bf128a=testing-taint-value-8965bb40-53b8-47cc-9350-ecf350311f00:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f18ae806-8005-4a2b-8355-7d58dceddd56=testing-taint-value-921a5717-3e9f-40c1-84d2-0d59004bb316:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-89358202-a9cf-4fed-beda-bbc957b6f26d=testing-taint-value-37204dcb-09ab-4d71-8269-5b8c6d0123b6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-047b065d-4c5e-4846-b4ee-e7f8e409fb5d=testing-taint-value-f694fb40-d326-42b4-9f11-febb590dc81e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-4f0008e0-99b2-45a2-b60b-a8528a47fc48=testing-taint-value-c92e5620-2442-490e-b805-5ee10b97435c:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-taint-key-92d35d9c-2027-4d7a-9e45-a7984e3f9600=testing-taint-value-d569a112-055a-4020-888e-f042664d2f00:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-eb5c547e-b791-4550-9769-fcb616019166=testing-taint-value-af891fbc-0b5d-4e4b-8d38-984304ae9544:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-abc04f7b-bdb9-4d11-835a-2de2ad72acf7=testing-taint-value-49112b3a-bee7-4d2f-a524-95aa35f0058e:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:22:53.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-741" for this suite. Jan 11 19:23:05.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:23:09.146: INFO: namespace sched-priority-741 deletion completed in 15.490447526s [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:73 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:23:09.147: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-pred STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-3561 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 11 19:23:09.786: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 19:23:10.056: INFO: Waiting for terminating namespaces to be deleted... 
Jan 11 19:23:10.145: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test Jan 11 19:23:10.339: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.339: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:23:10.339: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.339: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:23:10.339: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.339: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:23:10.339: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.339: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:23:10.339: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test Jan 11 19:23:10.552: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.552: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:23:10.552: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.552: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:23:10.552: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.552: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:23:10.552: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.552: INFO: Container coredns ready: true, restart count 0 Jan 11 19:23:10.552: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.552: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:23:10.552: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.552: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:23:10.553: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:23:10.553: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:23:10.553: INFO: node-exporter-gp57h from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:23:10.553: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container coredns ready: true, restart count 0 Jan 11 19:23:10.553: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 
+0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:23:10.553: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:23:10.553: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:23:10.553: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:23:10.553: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:23:10.553: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:23:10.553: INFO: Container vpn-shoot ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-cd08f3ce-bafa-4664-94fa-286843df2f11 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-cd08f3ce-bafa-4664-94fa-286843df2f11 off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label kubernetes.io/e2e-cd08f3ce-bafa-4664-94fa-286843df2f11 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:23:15.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3561" for this suite. 
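The NodeSelector predicate above pins the relaunched pod by copying the freshly applied node label into the pod's nodeSelector. A minimal sketch of that pod object; the label key and pod name here are placeholders for the random kubernetes.io/e2e-<uuid> label in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Label applied to the chosen node; the e2e run uses a random key with the value "42".
	nodeLabelKey, nodeLabelValue := "example.com/e2e-node-label", "42"

	// Relaunched pod: its nodeSelector repeats the label, so the scheduler
	// may only place it on the freshly labelled node.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{nodeLabelKey: nodeLabelValue},
			Containers: []corev1.Container{
				{Name: "with-labels", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.NodeSelector)
}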
Jan 11 19:23:36.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:23:39.413: INFO: namespace sched-pred-3561 deletion completed in 23.5002501s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:321 [BeforeEach] [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:23:39.414: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3439 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Downward API tests for local ephemeral storage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:289 [It] should provide default limits.ephemeral-storage from node allocatable /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:321 STEP: Creating a pod to test downward api env vars Jan 11 19:23:40.147: INFO: Waiting up to 5m0s for pod "downward-api-f10171fe-f4e5-45b8-bd1c-b7f336e388b5" in namespace "downward-api-3439" to be "success or failure" Jan 11 19:23:40.237: INFO: Pod "downward-api-f10171fe-f4e5-45b8-bd1c-b7f336e388b5": Phase="Pending", Reason="", readiness=false. Elapsed: 89.710571ms Jan 11 19:23:42.327: INFO: Pod "downward-api-f10171fe-f4e5-45b8-bd1c-b7f336e388b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180179399s STEP: Saw pod success Jan 11 19:23:42.327: INFO: Pod "downward-api-f10171fe-f4e5-45b8-bd1c-b7f336e388b5" satisfied condition "success or failure" Jan 11 19:23:42.417: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downward-api-f10171fe-f4e5-45b8-bd1c-b7f336e388b5 container dapi-container: STEP: delete the pod Jan 11 19:23:42.606: INFO: Waiting for pod downward-api-f10171fe-f4e5-45b8-bd1c-b7f336e388b5 to disappear Jan 11 19:23:42.696: INFO: Pod downward-api-f10171fe-f4e5-45b8-bd1c-b7f336e388b5 no longer exists [AfterEach] [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:23:42.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3439" for this suite. 
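The Downward API test above reads the node-allocatable default for limits.ephemeral-storage through a resourceFieldRef environment variable. A minimal sketch of such a container definition; the env var name and image are assumed, not taken from the test source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// When the container sets no ephemeral-storage limit of its own, the
	// resolved value falls back to the node's allocatable ephemeral storage.
	container := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env: []corev1.EnvVar{
			{
				Name: "EPHEMERAL_STORAGE_LIMIT",
				ValueFrom: &corev1.EnvVarSource{
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						Resource: "limits.ephemeral-storage",
					},
				},
			},
		},
	}
	fmt.Printf("%+v\n", container.Env[0])
}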
Jan 11 19:23:49.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:23:52.281: INFO: namespace downward-api-3439 deletion completed in 9.494156234s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:23:52.282: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename taint-single-pod STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-6171 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:164 Jan 11 19:23:52.922: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 19:24:53.468: INFO: Waiting for terminating namespaces to be deleted... [It] removing taint cancels eviction [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:24:53.558: INFO: Starting informer... STEP: Starting pod... Jan 11 19:24:53.740: INFO: Pod is running on ip-10-250-27-25.ec2.internal. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting short time to make sure Pod is queued for deletion Jan 11 19:24:54.013: INFO: Pod wasn't evicted. Proceeding Jan 11 19:24:54.013: INFO: Removing taint from Node STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting some time to make sure that toleration time passed. Jan 11 19:26:09.286: INFO: Pod wasn't evicted. Test successful [AfterEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:26:09.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-single-pod-6171" for this suite. 
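Eviction is cancelled in the test above because the pod tolerates the NoExecute taint for a bounded time and the taint is removed before that window elapses. A minimal sketch of such a toleration; the 300-second window is an assumed value, not read from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	tolerationSeconds := int64(300) // assumed window; the taint is removed well before it elapses

	toleration := corev1.Toleration{
		Key:               "kubernetes.io/e2e-evict-taint-key",
		Operator:          corev1.TolerationOpEqual,
		Value:             "evictTaintVal",
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &tolerationSeconds,
	}

	// With the NoExecute taint in place, the taint manager queues the pod for
	// deletion only after TolerationSeconds; removing the taint first cancels it.
	fmt.Printf("%+v\n", toleration)
}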
Jan 11 19:26:21.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:26:24.871: INFO: namespace taint-single-pod-6171 deletion completed in 15.494649443s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:26:24.872: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename daemonsets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-4342 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:26:25.965: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 11 19:26:26.146: INFO: Number of nodes with available pods: 0 Jan 11 19:26:26.146: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 11 19:26:26.508: INFO: Number of nodes with available pods: 0 Jan 11 19:26:26.508: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:26:27.599: INFO: Number of nodes with available pods: 0 Jan 11 19:26:27.599: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:26:28.598: INFO: Number of nodes with available pods: 1 Jan 11 19:26:28.599: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 11 19:26:28.960: INFO: Number of nodes with available pods: 0 Jan 11 19:26:28.960: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 11 19:26:29.142: INFO: Number of nodes with available pods: 0 Jan 11 19:26:29.142: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:26:30.233: INFO: Number of nodes with available pods: 0 Jan 11 19:26:30.233: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:26:31.233: INFO: Number of nodes with available pods: 0 Jan 11 19:26:31.233: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:26:32.233: INFO: Number of nodes with available pods: 0 Jan 11 19:26:32.233: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:26:33.233: INFO: Number of nodes with available pods: 1 Jan 11 19:26:33.233: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4342, will wait for the garbage collector to delete the pods Jan 11 19:26:33.694: INFO: Deleting DaemonSet.extensions daemon-set took: 91.565529ms Jan 11 19:26:33.795: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.282486ms Jan 11 19:26:43.884: INFO: Number of nodes with available pods: 0 Jan 11 19:26:43.884: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 19:26:43.974: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4342/daemonsets","resourceVersion":"43013"},"items":null} Jan 11 19:26:44.064: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4342/pods","resourceVersion":"43014"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:26:44.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4342" for this suite. 
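The complex-daemon test above drives scheduling purely through the DaemonSet's pod node selector (the blue/green node label) and switches the update strategy to RollingUpdate mid-test. A minimal sketch of such a DaemonSet object; the label key, image, and names are placeholders:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	podLabels := map[string]string{"daemonset-name": "daemon-set"}

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: podLabels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: podLabels},
				Spec: corev1.PodSpec{
					// Only nodes carrying the "color" label ("blue", later "green")
					// are eligible; relabelling nodes moves the daemon pods around.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{
						{Name: "app", Image: "docker.io/library/httpd:2.4.38-alpine"},
					},
				},
			},
		},
	}
	fmt.Printf("%s selects nodes %v\n", ds.Name, ds.Spec.Template.Spec.NodeSelector)
}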
Jan 11 19:26:50.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:26:54.020: INFO: namespace daemonsets-4342 deletion completed in 9.503088897s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:543 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:26:54.021: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-pred STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-2498 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 11 19:26:54.661: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 19:26:54.936: INFO: Waiting for terminating namespaces to be deleted... Jan 11 19:26:55.025: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test Jan 11 19:26:55.222: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.222: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:26:55.222: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.222: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:26:55.222: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.222: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:26:55.222: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.222: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:26:55.222: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test Jan 11 19:26:55.432: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 19:26:55.433: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:26:55.433: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:26:55.433: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 
19:26:55.433: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:26:55.433: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container coredns ready: true, restart count 0 Jan 11 19:26:55.433: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:26:55.433: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:26:55.433: INFO: node-exporter-gp57h from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:26:55.433: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:26:55.433: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:26:55.433: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container coredns ready: true, restart count 0 Jan 11 19:26:55.433: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:26:55.433: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:26:55.433: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:26:55.433: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:26:55.433: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:26:55.433: INFO: Container nginx-ingress-controller ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:543 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f65b1e3e-ae5c-4e9a-8130-a9471b76e918=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-label-key-80ec987a-a275-48f7-950b-220c8e28a01c testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb126841e23], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2498/without-toleration to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb14b57a7c1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb14db3983e], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb153b1d1cb], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb1b36ab269], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb1cc17772c], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.15e8ebb1de88c091], Reason = [FailedScheduling], Message = [0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) had taints that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [without-toleration.15e8ebb1e7970d5c], Reason = [FailedCreatePodSandBox], Message = [Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "eb86a4274b181b43fb1606f18b2dc41425100337f9aa88a67893b6690a2ea18d" network for pod "without-toleration": networkPlugin cni failed to set up pod "without-toleration_sched-pred-2498" network: pods "without-toleration" not found] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.15e8ebb1de88c091], Reason = [FailedScheduling], Message = [0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) had taints that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb126841e23], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2498/without-toleration to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb14b57a7c1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb14db3983e], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb153b1d1cb], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb1b36ab269], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.15e8ebb1cc17772c], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] 
STEP: Considering event: Type = [Warning], Name = [without-toleration.15e8ebb1e7970d5c], Reason = [FailedCreatePodSandBox], Message = [Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "eb86a4274b181b43fb1606f18b2dc41425100337f9aa88a67893b6690a2ea18d" network for pod "without-toleration": networkPlugin cni failed to set up pod "without-toleration_sched-pred-2498" network: pods "without-toleration" not found] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f65b1e3e-ae5c-4e9a-8130-a9471b76e918=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.15e8ebb2361b3091], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2498/still-no-tolerations to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.15e8ebb25b8cf3a4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.15e8ebb25f61e52f], Reason = [Created], Message = [Created container still-no-tolerations] STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.15e8ebb266c60a3b], Reason = [Started], Message = [Started container still-no-tolerations] STEP: removing the label kubernetes.io/e2e-label-key-80ec987a-a275-48f7-950b-220c8e28a01c off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-80ec987a-a275-48f7-950b-220c8e28a01c STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f65b1e3e-ae5c-4e9a-8130-a9471b76e918=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:27:01.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2498" for this suite. 
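The spec above taints a node with effect NoSchedule and then tries to place a pod that has no matching toleration; the Warning event "1 node(s) had taints that the pod didn't tolerate" is the expected outcome. A minimal sketch of the two objects involved, using the k8s.io/api Go types — key names and values here are illustrative, not the randomly generated ones in the log, and this is not the actual e2e implementation:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Taint placed on the chosen node: pods without a matching toleration
	// are not scheduled there (effect NoSchedule).
	taint := corev1.Taint{
		Key:    "example.com/e2e-taint-key", // the test generates a random key
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// Pod that lacks any toleration for the taint above; with a nodeSelector
	// pinning it to the tainted node it stays Pending with FailedScheduling
	// until the taint is removed.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "still-no-tolerations"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"example.com/e2e-label-key": "testing-label-value",
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
			// Tolerations is deliberately empty; adding the entry below
			// would let the pod schedule onto the tainted node.
			// Tolerations: []corev1.Toleration{{
			// 	Key:      taint.Key,
			// 	Operator: corev1.TolerationOpEqual,
			// 	Value:    taint.Value,
			// 	Effect:   corev1.TaintEffectNoSchedule,
			// }},
		},
	}

	fmt.Printf("taint: %+v\npod: %s\n", taint, pod.Name)
}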
Jan 11 19:27:16.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:27:19.298: INFO: namespace sched-pred-2498 deletion completed in 17.504819271s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:27:19.301: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename taint-multiple-pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-multiple-pods-2335 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:345 Jan 11 19:27:19.951: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 19:28:20.498: INFO: Waiting for terminating namespaces to be deleted... [It] evicts pods with minTolerationSeconds [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:28:20.593: INFO: Starting informer... STEP: Starting pods... Jan 11 19:28:20.865: INFO: Pod1 is running on ip-10-250-27-25.ec2.internal. Tainting Node Jan 11 19:28:23.314: INFO: Pod2 is running on ip-10-250-27-25.ec2.internal. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod1 and Pod2 to be deleted Jan 11 19:28:29.917: INFO: Noticed Pod "taint-eviction-b1" gets evicted. Jan 11 19:28:50.014: INFO: Noticed Pod "taint-eviction-b2" gets evicted. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:28:50.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-multiple-pods-2335" for this suite. 
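The minTolerationSeconds spec taints the node with effect NoExecute, which evicts already-running pods; a tolerationSeconds value on a matching toleration delays (rather than prevents) that eviction, which is why taint-eviction-b1 and taint-eviction-b2 disappear about 20 seconds apart. A sketch of such a toleration, assuming the k8s.io/api types (the pod name and image are illustrative; the taint key/value are the ones shown in the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	tolerationSeconds := int64(30) // pod may remain 30s after the taint appears

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "taint-eviction-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
			Tolerations: []corev1.Toleration{{
				Key:      "kubernetes.io/e2e-evict-taint-key",
				Operator: corev1.TolerationOpEqual,
				Value:    "evictTaintVal",
				Effect:   corev1.TaintEffectNoExecute,
				// Without TolerationSeconds the pod tolerates the NoExecute
				// taint indefinitely; with it, the taint manager evicts the
				// pod once the window elapses.
				TolerationSeconds: &tolerationSeconds,
			}},
		},
	}

	fmt.Println(pod.Name, "tolerates NoExecute for", *pod.Spec.Tolerations[0].TolerationSeconds, "seconds")
}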
Jan 11 19:28:56.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:28:59.876: INFO: namespace taint-multiple-pods-2335 deletion completed in 9.498367226s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:277 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:28:59.881: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename daemonsets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-4710 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should not update pod when spec was updated and update strategy is OnDelete /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:277 Jan 11 19:29:00.981: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 11 19:29:01.251: INFO: Number of nodes with available pods: 0 Jan 11 19:29:01.251: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:29:02.433: INFO: Number of nodes with available pods: 1 Jan 11 19:29:02.434: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:29:03.432: INFO: Number of nodes with available pods: 2 Jan 11 19:29:03.432: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images aren't updated. STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 11 19:29:04.245: INFO: Number of nodes with available pods: 2 Jan 11 19:29:04.246: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4710, will wait for the garbage collector to delete the pods Jan 11 19:29:04.977: INFO: Deleting DaemonSet.extensions daemon-set took: 91.311806ms Jan 11 19:29:05.077: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.257521ms Jan 11 19:29:13.867: INFO: Number of nodes with available pods: 0 Jan 11 19:29:13.867: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 19:29:13.957: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4710/daemonsets","resourceVersion":"43433"},"items":null} Jan 11 19:29:14.047: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4710/pods","resourceVersion":"43433"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:29:14.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4710" for this suite. Jan 11 19:29:20.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:29:23.905: INFO: namespace daemonsets-4710 deletion completed in 9.497347361s •S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:29:23.905: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename namespaces STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-7307 STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a test namespace STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-1069 STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-5860 STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:29:32.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7307" for this suite. Jan 11 19:29:38.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:29:41.700: INFO: namespace namespaces-7307 deletion completed in 9.497779258s STEP: Destroying namespace "nsdeletetest-1069" for this suite. Jan 11 19:29:41.789: INFO: Namespace nsdeletetest-1069 was already deleted STEP: Destroying namespace "nsdeletetest-5860" for this suite. Jan 11 19:29:48.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:29:51.296: INFO: namespace nsdeletetest-5860 deletion completed in 9.506270236s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] PodPriorityResolution [Serial] validates critical system priorities are created and resolved /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:239 [BeforeEach] [sig-scheduling] PodPriorityResolution [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:29:51.296: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-pod-priority STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pod-priority-3050 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] PodPriorityResolution [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:230 Jan 11 19:29:51.951: INFO: Waiting for terminating namespaces to be deleted... [It] validates critical system priorities are created and resolved /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:239 STEP: Create pods that use critical system priorities. Jan 11 19:29:52.134: INFO: Created pod: pod0-system-node-critical Jan 11 19:29:52.403: INFO: Created pod: pod1-system-cluster-critical [AfterEach] [sig-scheduling] PodPriorityResolution [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:29:52.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pod-priority-3050" for this suite. 
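The PodPriorityResolution spec creates one pod per built-in critical PriorityClass and checks that they are admitted and their priority values resolved. A pod opts into a class via spec.priorityClassName; a minimal sketch with the k8s.io/api types follows. Note that admission rules in some releases restrict the system-* classes to particular namespaces (such as kube-system), so the namespace shown is an assumption for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	for _, class := range []string{"system-node-critical", "system-cluster-critical"} {
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "critical-" + class,
				Namespace: "kube-system", // system classes may be namespace-restricted
			},
			Spec: corev1.PodSpec{
				PriorityClassName: class,
				Containers: []corev1.Container{{
					Name:  "pause",
					Image: "k8s.gcr.io/pause:3.1",
				}},
			},
		}
		// On admission the API server resolves PriorityClassName into
		// pod.Spec.Priority (an *int32) using the class's integer value.
		fmt.Printf("%s -> priorityClassName=%s\n", pod.Name, pod.Spec.PriorityClassName)
	}
}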
Jan 11 19:29:59.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:30:02.281: INFO: namespace sched-pod-priority-3050 deletion completed in 9.517306907s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:30:02.282: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-pred STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-3561 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 11 19:30:02.950: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 19:30:03.270: INFO: Waiting for terminating namespaces to be deleted... Jan 11 19:30:03.363: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test Jan 11 19:30:03.594: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.594: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:30:03.594: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.594: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:30:03.594: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.594: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:30:03.594: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.594: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:30:03.594: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test Jan 11 19:30:03.775: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:30:03.775: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 19:30:03.775: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:30:03.775: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 
19:30:03.775: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:30:03.775: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:30:03.775: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container coredns ready: true, restart count 0 Jan 11 19:30:03.775: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:30:03.775: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:30:03.775: INFO: node-exporter-gp57h from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:30:03.775: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:30:03.775: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:30:03.775: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container coredns ready: true, restart count 0 Jan 11 19:30:03.775: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:30:03.775: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:30:03.775: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:30:03.775: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:30:03.775: INFO: Container autoscaler ready: true, restart count 5 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e8ebdd1126e8f8], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] 
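This spec asks the scheduler to place a pod whose nodeSelector matches no node, so the pod stays Pending and keeps producing the FailedScheduling event "2 node(s) didn't match node selector". A sketch of such a pod (the label key/value are illustrative, not the ones the test uses):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node in the cluster carries this label, so the scheduler
			// can never place the pod and keeps emitting FailedScheduling.
			NodeSelector: map[string]string{"example.com/no-such-label": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	fmt.Println(pod.Name, "nodeSelector:", pod.Spec.NodeSelector)
}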
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:30:05.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3561" for this suite. Jan 11 19:30:11.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:30:14.828: INFO: namespace sched-pred-3561 deletion completed in 9.498602364s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:177 [BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:30:14.828: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename taint-single-pod STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in taint-single-pod-5244 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:164 Jan 11 19:30:15.470: INFO: Waiting up to 1m0s for all nodes to be ready Jan 11 19:31:16.015: INFO: Waiting for terminating namespaces to be deleted... [It] evicts pods from tainted nodes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:177 Jan 11 19:31:16.105: INFO: Starting informer... STEP: Starting pod... Jan 11 19:31:16.286: INFO: Pod is running on ip-10-250-27-25.ec2.internal. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod to be deleted Jan 11 19:31:18.756: INFO: Noticed Pod eviction. Test successful STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:31:19.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-single-pod-5244" for this suite. 
Jan 11 19:31:25.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:31:28.612: INFO: namespace taint-single-pod-5244 deletion completed in 9.493159467s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:638 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:31:28.613: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9333 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:615 [It] all pods should be running /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:638 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629 STEP: Clean PV local-pv8pzn4 [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:31:53.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9333" for this suite. 
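The local-volume spec binds one PVC to a pre-provisioned local PV and then starts 50 pods that all mount the same claim; because a local volume is tied to a single node, every consumer lands on that node and all pods can run at the same time. A sketch of two pods sharing one claim, assuming the backing local PV and the PVC already exist (claim, volume and pod names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podUsingClaim returns a pod that mounts the given PVC; many such pods can
// share one local PV as long as they are scheduled to the PV's node.
func podUsingClaim(name, claimName string) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "shared-local",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: claimName,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "app",
				Image:        "k8s.gcr.io/pause:3.1",
				VolumeMounts: []corev1.VolumeMount{{Name: "shared-local", MountPath: "/mnt/volume1"}},
			}},
		},
	}
}

func main() {
	for i := 1; i <= 2; i++ {
		p := podUsingClaim(fmt.Sprintf("shared-pv-pod-%d", i), "local-pvc-example")
		fmt.Println(p.Name, "->", p.Spec.Volumes[0].PersistentVolumeClaim.ClaimName)
	}
}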
Jan 11 19:32:39.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:32:42.930: INFO: namespace persistent-local-volumes-test-9333 deletion completed in 49.497266683s •SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:32:42.931: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-pred STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-6831 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 11 19:32:43.571: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 11 19:32:43.842: INFO: Waiting for terminating namespaces to be deleted... Jan 11 19:32:43.932: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal before test Jan 11 19:32:44.130: INFO: calico-node-m8r2d from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.130: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:32:44.130: INFO: kube-proxy-rq4kf from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.130: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:32:44.130: INFO: node-problem-detector-9z5sq from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.130: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:32:44.130: INFO: node-exporter-l6q84 from kube-system started at 2020-01-11 15:56:04 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.130: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:32:44.130: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal before test Jan 11 19:32:44.239: INFO: blackbox-exporter-54bb5f55cc-452fk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.239: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:32:44.239: INFO: coredns-59c969ffb8-fqq79 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.239: INFO: Container coredns ready: true, restart count 0 Jan 11 19:32:44.239: INFO: calico-node-dl8nk from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.239: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:32:44.239: INFO: node-problem-detector-jx2p4 from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.239: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:32:44.240: INFO: node-exporter-gp57h from kube-system started at 2020-01-11 
15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:32:44.240: INFO: calico-kube-controllers-79bcd784b6-c46r9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:32:44.240: INFO: metrics-server-7c797fd994-4x7v9 from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:32:44.240: INFO: coredns-59c969ffb8-57m7v from kube-system started at 2020-01-11 15:56:11 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container coredns ready: true, restart count 0 Jan 11 19:32:44.240: INFO: calico-typha-deploy-9f6b455c4-vdrzx from kube-system started at 2020-01-11 16:21:07 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:32:44.240: INFO: kube-proxy-nn5px from kube-system started at 2020-01-11 15:55:58 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:32:44.240: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:32:44.240: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:32:44.240: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:32:44.240: INFO: vpn-shoot-5d76665b65-6rkww from kube-system started at 2020-01-11 15:56:13 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 19:32:44.240: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:32:44.240: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m from kube-system started at 2020-01-11 15:56:08 +0000 UTC (1 container statuses recorded) Jan 11 19:32:44.240: INFO: Container kubernetes-dashboard ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: verifying the node has the label node ip-10-250-27-25.ec2.internal STEP: verifying the node has the label node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod addons-kubernetes-dashboard-78954cc66b-69k8m requesting resource cpu=50m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod addons-nginx-ingress-controller-7c75bb76db-cd9r9 requesting resource cpu=100m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d requesting resource cpu=0m on 
Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod blackbox-exporter-54bb5f55cc-452fk requesting resource cpu=5m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod calico-kube-controllers-79bcd784b6-c46r9 requesting resource cpu=0m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod calico-node-dl8nk requesting resource cpu=100m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod calico-node-m8r2d requesting resource cpu=100m on Node ip-10-250-27-25.ec2.internal Jan 11 19:32:44.796: INFO: Pod calico-typha-deploy-9f6b455c4-vdrzx requesting resource cpu=0m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod calico-typha-horizontal-autoscaler-85c99966bb-6j6rp requesting resource cpu=10m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod calico-typha-vertical-autoscaler-5769b74b58-r8t6r requesting resource cpu=0m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod coredns-59c969ffb8-57m7v requesting resource cpu=50m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod coredns-59c969ffb8-fqq79 requesting resource cpu=50m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod kube-proxy-nn5px requesting resource cpu=20m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod kube-proxy-rq4kf requesting resource cpu=20m on Node ip-10-250-27-25.ec2.internal Jan 11 19:32:44.796: INFO: Pod metrics-server-7c797fd994-4x7v9 requesting resource cpu=20m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod node-exporter-gp57h requesting resource cpu=5m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod node-exporter-l6q84 requesting resource cpu=5m on Node ip-10-250-27-25.ec2.internal Jan 11 19:32:44.796: INFO: Pod node-problem-detector-9z5sq requesting resource cpu=20m on Node ip-10-250-27-25.ec2.internal Jan 11 19:32:44.796: INFO: Pod node-problem-detector-jx2p4 requesting resource cpu=20m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.796: INFO: Pod vpn-shoot-5d76665b65-6rkww requesting resource cpu=100m on Node ip-10-250-7-77.ec2.internal STEP: Starting Pods to consume most of the cluster CPU. Jan 11 19:32:44.796: INFO: Creating a pod which consumes cpu=973m on Node ip-10-250-7-77.ec2.internal Jan 11 19:32:44.889: INFO: Creating a pod which consumes cpu=1242m on Node ip-10-250-27-25.ec2.internal STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-929c2015-3bbd-4225-bf56-7b0601fba7e8.15e8ec0283b6c4f9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6831/filler-pod-929c2015-3bbd-4225-bf56-7b0601fba7e8 to ip-10-250-27-25.ec2.internal] STEP: Considering event: Type = [Normal], Name = [filler-pod-929c2015-3bbd-4225-bf56-7b0601fba7e8.15e8ec02a8abc15b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-929c2015-3bbd-4225-bf56-7b0601fba7e8.15e8ec02ab8055aa], Reason = [Created], Message = [Created container filler-pod-929c2015-3bbd-4225-bf56-7b0601fba7e8] STEP: Considering event: Type = [Normal], Name = [filler-pod-929c2015-3bbd-4225-bf56-7b0601fba7e8.15e8ec02b427f547], Reason = [Started], Message = [Started container filler-pod-929c2015-3bbd-4225-bf56-7b0601fba7e8] STEP: Considering event: Type = [Normal], Name = [filler-pod-be9644c7-da9e-4eb0-8c37-1feda19f12bb.15e8ec027e4ae2a1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6831/filler-pod-be9644c7-da9e-4eb0-8c37-1feda19f12bb to ip-10-250-7-77.ec2.internal] STEP: Considering event: Type = [Normal], Name = [filler-pod-be9644c7-da9e-4eb0-8c37-1feda19f12bb.15e8ec02a6491876], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-be9644c7-da9e-4eb0-8c37-1feda19f12bb.15e8ec02a9087aef], Reason = [Created], Message = [Created container filler-pod-be9644c7-da9e-4eb0-8c37-1feda19f12bb] STEP: Considering event: Type = [Normal], Name = [filler-pod-be9644c7-da9e-4eb0-8c37-1feda19f12bb.15e8ec02ae6c02ce], Reason = [Started], Message = [Started container filler-pod-be9644c7-da9e-4eb0-8c37-1feda19f12bb] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e8ec031af6540b], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e8ec031b338041], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label node STEP: removing the label node off the node ip-10-250-7-77.ec2.internal STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:32:49.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6831" for this suite. 
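The resource-limits spec sums the CPU requests already present on each node, creates "filler" pods sized to consume most of what remains (973m and 1242m in this run), and then shows that one more CPU-requesting pod cannot fit anywhere, producing "0/2 nodes are available: 2 Insufficient cpu." The scheduler works from spec.resources.requests, not from observed usage; a sketch of a pod with an explicit CPU request (the value is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// The scheduler only looks at requests when deciding
					// whether a node still has room for the pod.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("973m"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("973m"),
					},
				},
			}},
		},
	}

	cpu := pod.Spec.Containers[0].Resources.Requests[corev1.ResourceCPU]
	fmt.Println(pod.Name, "requests cpu:", cpu.String())
}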
Jan 11 19:32:55.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:32:58.738: INFO: namespace sched-pred-6831 deletion completed in 9.49224161s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:32:58.739: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename daemonsets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-1904 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:32:59.829: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 11 19:33:00.099: INFO: Number of nodes with available pods: 0 Jan 11 19:33:00.100: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:33:01.281: INFO: Number of nodes with available pods: 1 Jan 11 19:33:01.281: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:02.282: INFO: Number of nodes with available pods: 2 Jan 11 19:33:02.282: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 11 19:33:02.914: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:02.914: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:04.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:04.095: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:04.095: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:05.096: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:05.096: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 11 19:33:05.096: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:06.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:06.096: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:06.096: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:07.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:07.095: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:07.095: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:08.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:08.095: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:08.095: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:09.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:09.095: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:09.095: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:10.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:10.095: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:10.095: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:11.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:11.095: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:11.095: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:12.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:12.095: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:12.095: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:13.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:13.095: INFO: Wrong image for pod: daemon-set-q2kzv. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:13.095: INFO: Pod daemon-set-q2kzv is not available Jan 11 19:33:14.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:14.095: INFO: Pod daemon-set-n6dq5 is not available Jan 11 19:33:15.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 11 19:33:16.095: INFO: Wrong image for pod: daemon-set-ldndp. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. Jan 11 19:33:16.095: INFO: Pod daemon-set-ldndp is not available Jan 11 19:33:17.095: INFO: Pod daemon-set-nhhlp is not available STEP: Check that daemon pods are still running on every node of the cluster. Jan 11 19:33:17.366: INFO: Number of nodes with available pods: 1 Jan 11 19:33:17.366: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:18.547: INFO: Number of nodes with available pods: 2 Jan 11 19:33:18.547: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1904, will wait for the garbage collector to delete the pods Jan 11 19:33:19.278: INFO: Deleting DaemonSet.extensions daemon-set took: 91.210403ms Jan 11 19:33:19.378: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.338346ms Jan 11 19:33:28.168: INFO: Number of nodes with available pods: 0 Jan 11 19:33:28.168: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 19:33:28.258: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1904/daemonsets","resourceVersion":"44661"},"items":null} Jan 11 19:33:28.347: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1904/pods","resourceVersion":"44661"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:33:28.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1904" for this suite. 
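Both DaemonSet update specs in this part of the run hinge on spec.updateStrategy: with OnDelete (the earlier spec) the controller leaves running pods on the old image until someone deletes them, while with RollingUpdate (this spec) it replaces pods node by node, which is why the "Wrong image for pod ..." entries above shrink away until every pod runs the new image. A sketch of a minimal DaemonSet built with either strategy, mirroring the httpd-to-redis image switch from the log (labels and the helper are illustrative, not the e2e's own code):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// daemonSet builds a one-container DaemonSet with the given update strategy.
func daemonSet(name, image string, strategy appsv1.DaemonSetUpdateStrategyType) appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": name}
	return appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate replaces pods automatically after a template
			// change; OnDelete waits until each old pod is deleted by hand.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: strategy},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: image, // e.g. docker.io/library/httpd:2.4.38-alpine, later redis:5.0.5-alpine
					}},
				},
			},
		},
	}
}

func main() {
	rolling := daemonSet("daemon-set", "docker.io/library/httpd:2.4.38-alpine", appsv1.RollingUpdateDaemonSetStrategyType)
	onDelete := daemonSet("daemon-set-ondelete", "docker.io/library/httpd:2.4.38-alpine", appsv1.OnDeleteDaemonSetStrategyType)
	fmt.Println(rolling.Name, rolling.Spec.UpdateStrategy.Type)
	fmt.Println(onDelete.Name, onDelete.Spec.UpdateStrategy.Type)
}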
Jan 11 19:33:34.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:33:38.209: INFO: namespace daemonsets-1904 deletion completed in 9.495403594s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:33:38.209: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename daemonsets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-5878 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 11 19:33:39.574: INFO: Number of nodes with available pods: 0 Jan 11 19:33:39.574: INFO: Node ip-10-250-27-25.ec2.internal is running more than one daemon pod Jan 11 19:33:40.755: INFO: Number of nodes with available pods: 1 Jan 11 19:33:40.755: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:41.755: INFO: Number of nodes with available pods: 2 Jan 11 19:33:41.755: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 11 19:33:42.206: INFO: Number of nodes with available pods: 1 Jan 11 19:33:42.206: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:43.387: INFO: Number of nodes with available pods: 1 Jan 11 19:33:43.387: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:44.388: INFO: Number of nodes with available pods: 1 Jan 11 19:33:44.388: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:45.388: INFO: Number of nodes with available pods: 1 Jan 11 19:33:45.388: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:46.388: INFO: Number of nodes with available pods: 1 Jan 11 19:33:46.388: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:47.388: INFO: Number of nodes with available pods: 1 Jan 11 19:33:47.388: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:48.388: INFO: Number of nodes with available pods: 1 Jan 11 19:33:48.388: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:49.387: INFO: Number of nodes with available pods: 1 Jan 11 19:33:49.387: INFO: Node ip-10-250-7-77.ec2.internal is running more than one daemon pod Jan 11 19:33:50.388: INFO: Number of nodes with available pods: 2 Jan 11 19:33:50.388: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5878, will wait for the garbage collector to delete the pods Jan 11 19:33:50.759: INFO: Deleting DaemonSet.extensions daemon-set took: 91.527415ms Jan 11 19:33:51.260: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.284483ms Jan 11 19:34:03.950: INFO: Number of nodes with available pods: 0 Jan 11 19:34:03.950: INFO: Number of running nodes: 0, number of available pods: 0 Jan 11 19:34:04.039: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5878/daemonsets","resourceVersion":"44785"},"items":null} Jan 11 19:34:04.129: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5878/pods","resourceVersion":"44786"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:34:04.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5878" for this suite. 
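The "run and stop simple daemon" spec deletes one pod owned by the DaemonSet and waits for the controller to recreate it, which is the available-pod count dropping to 1 and then returning to 2 above. A rough sketch of that delete-and-wait loop using client-go; this assumes a recent client-go (context-taking call signatures), and the kubeconfig path, namespace and label selector are illustrative placeholders rather than values that still exist after the run:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	ns, selector := "daemonsets-5878", "daemonset-name=daemon-set" // illustrative

	// Delete one pod owned by the DaemonSet ...
	pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		panic("no daemon pods found")
	}
	victim := pods.Items[0].Name
	if err := client.CoreV1().Pods(ns).Delete(ctx, victim, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// ... then poll until the controller has brought the pod count back up
	// (one daemon pod per schedulable node; this cluster has two).
	for {
		pods, err = client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) >= 2 {
			fmt.Println("daemon pod revived; replacement for", victim, "exists")
			return
		}
		time.Sleep(time.Second)
	}
}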
Jan 11 19:34:10.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:34:13.982: INFO: namespace daemonsets-5878 deletion completed in 9.489574602s •SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSJan 11 19:34:13.985: INFO: Running AfterSuite actions on all nodes Jan 11 19:34:13.985: INFO: Running AfterSuite actions on node 1 Jan 11 19:34:13.985: INFO: Skipping dumping logs from cluster Ran 41 of 4731 Specs in 2889.102 seconds SUCCESS! -- 41 Passed | 0 Failed | 0 Flaked | 0 Pending | 4690 Skipped PASS Ginkgo ran 1 suite in 48m10.723711292s Test Suite Passed Conformance test: not doing test setup. Running Suite: Kubernetes e2e suite =================================== Random Seed: 1578771255 - Will randomize all specs Will run 4731 specs Running in parallel across 8 nodes Jan 11 19:34:28.100: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Deleting namespaces I0111 19:34:28.567759 8609 suites.go:70] Waiting for deletion of the following namespaces: [] STEP: Waiting for namespaces to vanish Jan 11 19:34:30.657: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jan 11 19:34:30.927: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 11 19:34:31.310: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 11 19:34:31.310: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready. Jan 11 19:34:31.310: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 11 19:34:31.407: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) Jan 11 19:34:31.407: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 11 19:34:31.407: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-exporter' (0 seconds elapsed) Jan 11 19:34:31.407: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-problem-detector' (0 seconds elapsed) Jan 11 19:34:31.407: INFO: e2e test version: v1.16.4 Jan 11 19:34:31.495: INFO: kube-apiserver version: v1.16.4 Jan 11 19:34:31.496: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:34:31.588: INFO: Cluster IP family: ipv4 SSSSSSSSS ------------------------------ Jan 11 19:34:31.510: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:34:31.875: INFO: Cluster IP family: ipv4 Jan 11 19:34:31.510: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:34:31.875: INFO: Cluster IP family: ipv4 Jan 11 19:34:31.510: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:34:31.876: INFO: Cluster IP family: ipv4 Jan 11 19:34:31.510: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:34:31.877: INFO: Cluster IP family: ipv4 SS ------------------------------ Jan 11 19:34:31.510: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:34:31.877: INFO: Cluster IP family: ipv4 SSS ------------------------------ Jan 11 19:34:31.511: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:34:31.880: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Jan 11 19:34:31.541: INFO: >>> 
kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:34:31.906: INFO: Cluster IP family: ipv4 SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:31.602: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir Jan 11 19:34:31.972: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Jan 11 19:34:32.154: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3178 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 11 19:34:32.615: INFO: Waiting up to 5m0s for pod "pod-edaf3345-fe7b-47bd-8b9e-3e1b8d32d0e8" in namespace "emptydir-3178" to be "success or failure" Jan 11 19:34:32.705: INFO: Pod "pod-edaf3345-fe7b-47bd-8b9e-3e1b8d32d0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 89.668617ms Jan 11 19:34:34.795: INFO: Pod "pod-edaf3345-fe7b-47bd-8b9e-3e1b8d32d0e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179777512s STEP: Saw pod success Jan 11 19:34:34.795: INFO: Pod "pod-edaf3345-fe7b-47bd-8b9e-3e1b8d32d0e8" satisfied condition "success or failure" Jan 11 19:34:34.885: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-edaf3345-fe7b-47bd-8b9e-3e1b8d32d0e8 container test-container: STEP: delete the pod Jan 11 19:34:35.213: INFO: Waiting for pod pod-edaf3345-fe7b-47bd-8b9e-3e1b8d32d0e8 to disappear Jan 11 19:34:35.303: INFO: Pod pod-edaf3345-fe7b-47bd-8b9e-3e1b8d32d0e8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:34:35.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3178" for this suite. 
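The emptyDir case above only needs a short-lived pod that runs to Succeeded and whose log shows the expected behaviour of the tmpfs mount. A rough stand-in, assuming a memory-backed emptyDir and a non-root busybox container; the pod name, image, and UID are illustrative, not what the suite generates.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root, as in the (non-root,0777,tmpfs) variant
  containers:
  - name: test-container
    image: busybox:1.31
    command:
    - sh
    - -c
    # Show the filesystem type and mode of the mount, then prove it is writable.
    - grep /mnt/tmpfs /proc/mounts; stat -c 'mode=%a uid=%u gid=%g' /mnt/tmpfs; echo hello > /mnt/tmpfs/f && echo write-ok
    volumeMounts:
    - name: scratch
      mountPath: /mnt/tmpfs
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory         # tmpfs-backed
EOF

# The suite does the same in spirit: wait for phase Succeeded, then read the log.
kubectl logs emptydir-tmpfs-demo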
Jan 11 19:34:41.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:34:44.887: INFO: namespace emptydir-3178 deletion completed in 9.493925027s • [SLOW TEST:13.285 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:31.908: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime Jan 11 19:34:33.074: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Jan 11 19:34:33.344: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5620 STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 19:34:38.343: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:34:38.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5620" for this suite. 
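The termination-message case above boils down to: point terminationMessagePath at a non-default file, have a non-root container write its message there, and read it back from the container status. A hedged sketch of that shape; the pod name, image, UID, and the exact path are placeholders, and the non-root write works because the kubelet bind-mounts the message file into the container.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox:1.31
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
    securityContext:
      runAsUser: 1000
EOF

# After the container terminates, the message surfaces in the status, which is
# what the "Expected: &{DONE} to match ..." assertion above is checking.
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}{"\n"}'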
Jan 11 19:34:44.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:34:48.201: INFO: namespace container-runtime-5620 deletion completed in 9.584651099s • [SLOW TEST:16.293 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 on terminated container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:31.889: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services Jan 11 19:34:33.068: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Jan 11 19:34:33.422: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8498 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8498 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8498 I0111 19:34:34.153781 8614 runners.go:184] Created replication controller with name: externalname-service, namespace: services-8498, replica count: 2 I0111 19:34:37.254414 8614 runners.go:184] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 19:34:40.254672 8614 runners.go:184] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 19:34:40.254: INFO: Creating new exec pod Jan 11 19:34:43.526: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-8498 execpod82kct -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 11 19:34:44.918: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jan 11 19:34:44.918: INFO: stdout: "" Jan 11 
19:34:44.919: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-8498 execpod82kct -- /bin/sh -x -c nc -zv -t -w 2 100.111.232.102 80' Jan 11 19:34:46.291: INFO: stderr: "+ nc -zv -t -w 2 100.111.232.102 80\nConnection to 100.111.232.102 80 port [tcp/http] succeeded!\n" Jan 11 19:34:46.291: INFO: stdout: "" Jan 11 19:34:46.291: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:34:46.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8498" for this suite. Jan 11 19:34:52.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:34:56.180: INFO: namespace services-8498 deletion completed in 9.70385507s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:24.291 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:31.878: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename watch Jan 11 19:34:32.667: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Jan 11 19:34:33.023: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-4896 STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:34:47.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4896" for this suite. 
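The Services spec above creates an ExternalName service, flips it to ClusterIP backed by a replication controller, and then proves connectivity with nc from an exec pod. A hand-driven approximation with kubectl; the service name matches the run above, while the external host, selector, and client pod are placeholders, and the client image is assumed to ship nc as the suite's exec pod does.

# Start as ExternalName (a CNAME-style service with no cluster IP).
kubectl create service externalname externalname-service --external-name=example.com

# Convert it to ClusterIP: clear externalName and give it ports and a selector
# that matches the backing pods (assumed here to carry app=externalname-service).
kubectl patch service externalname-service --type=merge -p '{
  "spec": {
    "type": "ClusterIP",
    "externalName": null,
    "selector": {"app": "externalname-service"},
    "ports": [{"port": 80, "targetPort": 80}]
  }
}'

# Same connectivity probe the suite runs, by service name and by cluster IP.
kubectl exec <client-pod> -- nc -zv -t -w 2 externalname-service 80
kubectl exec <client-pod> -- nc -zv -t -w 2 "$(kubectl get service externalname-service -o jsonpath='{.spec.clusterIP}')" 80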
Jan 11 19:34:53.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:34:56.746: INFO: namespace watch-4896 deletion completed in 9.569148434s • [SLOW TEST:24.867 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:44.892: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2649 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:34:48.009: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2649 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-97f9eb32-93be-436c-b141-99de0420c31e && mount --bind /tmp/local-volume-test-97f9eb32-93be-436c-b141-99de0420c31e /tmp/local-volume-test-97f9eb32-93be-436c-b141-99de0420c31e' Jan 11 19:34:49.329: INFO: stderr: "" Jan 11 19:34:49.329: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:34:49.329: INFO: Creating a PV followed by a PVC Jan 11 19:34:49.509: INFO: Waiting for PV local-pv4v5z5 to bind to PVC pvc-4zd6c Jan 11 19:34:49.509: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-4zd6c] to have phase Bound Jan 11 19:34:49.599: INFO: PersistentVolumeClaim pvc-4zd6c found and phase=Bound (89.468912ms) Jan 11 19:34:49.599: INFO: Waiting up to 3m0s for PersistentVolume local-pv4v5z5 to have phase Bound Jan 11 19:34:49.689: INFO: PersistentVolume local-pv4v5z5 found and phase=Bound (89.67628ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:34:52.319: INFO: pod "security-context-2085c12e-8fd4-42e3-ba04-ccfdfc163732" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:34:52.319: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2649 security-context-2085c12e-8fd4-42e3-ba04-ccfdfc163732 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:34:53.637: INFO: stderr: "" Jan 11 19:34:53.637: INFO: stdout: "" Jan 11 19:34:53.637: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jan 11 19:34:53.638: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2649 security-context-2085c12e-8fd4-42e3-ba04-ccfdfc163732 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:34:55.021: INFO: stderr: "" Jan 11 19:34:55.021: INFO: stdout: "test-file-content\n" Jan 11 19:34:55.021: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod1 Jan 11 19:34:55.021: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2649 security-context-2085c12e-8fd4-42e3-ba04-ccfdfc163732 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-97f9eb32-93be-436c-b141-99de0420c31e > /mnt/volume1/test-file' Jan 11 19:34:56.374: INFO: stderr: "" Jan 11 19:34:56.374: INFO: stdout: "" Jan 11 19:34:56.374: INFO: podRWCmdExec out: "" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-2085c12e-8fd4-42e3-ba04-ccfdfc163732 in namespace persistent-local-volumes-test-2649 [AfterEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:34:56.466: INFO: Deleting PersistentVolumeClaim "pvc-4zd6c" Jan 11 19:34:56.557: INFO: Deleting PersistentVolume "local-pv4v5z5" STEP: Removing the test directory Jan 11 19:34:56.648: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2649 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-97f9eb32-93be-436c-b141-99de0420c31e && rm -r /tmp/local-volume-test-97f9eb32-93be-436c-b141-99de0420c31e' Jan 11 19:34:58.170: INFO: stderr: "" Jan 11 19:34:58.170: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:34:58.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2649" for this suite. 
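For the dir-bindmounted volume type above, the only node-side preparation is a directory bind-mounted over itself, which the suite runs through its hostexec pod; the PV/PVC pair is then created pre-bound. A condensed sketch of the same setup; the directory path, object names, and storage class are placeholders, and the node name is the one used in the run above.

# On the node (the suite wraps this in kubectl exec + nsenter against a hostexec pod):
mkdir /tmp/local-vol && mount --bind /tmp/local-vol /tmp/local-vol

# A local PV over that directory plus a claim for it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-vol
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["ip-10-250-27-25.ec2.internal"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
EOF

The write/read check is then the plain kubectl exec pattern visible in the log: write a file under the mount from the consuming pod and cat it back.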
Jan 11 19:35:04.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:07.932: INFO: namespace persistent-local-volumes-test-2649 deletion completed in 9.581224269s • [SLOW TEST:23.041 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:31.880: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test Jan 11 19:34:32.570: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Jan 11 19:34:32.927: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-6552 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-3230e89e-9e5f-43d4-a9dc-9bcec87ac68d" Jan 11 19:34:37.751: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3230e89e-9e5f-43d4-a9dc-9bcec87ac68d && dd if=/dev/zero of=/tmp/local-volume-test-3230e89e-9e5f-43d4-a9dc-9bcec87ac68d/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-3230e89e-9e5f-43d4-a9dc-9bcec87ac68d/file' Jan 11 19:34:39.132: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0197622 s, 1.1 GB/s\n" Jan 11 19:34:39.132: INFO: stdout: "" Jan 11 19:34:39.132: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 hostexec-ip-10-250-27-25.ec2.internal -- nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3230e89e-9e5f-43d4-a9dc-9bcec87ac68d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:34:40.469: INFO: stderr: "" Jan 11 19:34:40.469: INFO: stdout: "/dev/loop0\n" STEP: Creating local PVCs and PVs Jan 11 19:34:40.469: INFO: Creating a PV followed by a PVC Jan 11 19:34:40.650: INFO: Waiting for PV local-pvrtrjt to bind to PVC pvc-rs9v5 Jan 11 19:34:40.650: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-rs9v5] to have phase Bound Jan 11 19:34:40.740: INFO: PersistentVolumeClaim pvc-rs9v5 found but phase is Pending instead of Bound. Jan 11 19:34:42.830: INFO: PersistentVolumeClaim pvc-rs9v5 found and phase=Bound (2.179639186s) Jan 11 19:34:42.830: INFO: Waiting up to 3m0s for PersistentVolume local-pvrtrjt to have phase Bound Jan 11 19:34:42.919: INFO: PersistentVolume local-pvrtrjt found and phase=Bound (89.452343ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jan 11 19:34:45.547: INFO: pod "security-context-cc48bf40-13ac-4c04-a6ba-9091c2e61ee7" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:34:45.548: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 security-context-cc48bf40-13ac-4c04-a6ba-9091c2e61ee7 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:34:46.871: INFO: stderr: "" Jan 11 19:34:46.871: INFO: stdout: "" Jan 11 19:34:46.871: INFO: podRWCmdExec out: "" err: Jan 11 19:34:46.871: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 security-context-cc48bf40-13ac-4c04-a6ba-9091c2e61ee7 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:34:48.196: INFO: stderr: "" Jan 11 19:34:48.196: INFO: stdout: "test-file-content\n" Jan 11 19:34:48.196: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jan 11 19:34:50.649: INFO: pod "security-context-6e372eeb-5ddb-4114-800e-a8421aeedc0a" created on Node "ip-10-250-27-25.ec2.internal" Jan 11 19:34:50.649: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 security-context-6e372eeb-5ddb-4114-800e-a8421aeedc0a -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:34:52.068: INFO: stderr: "" Jan 11 19:34:52.068: INFO: stdout: "test-file-content\n" Jan 11 19:34:52.068: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod2 Jan 11 19:34:52.068: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 security-context-6e372eeb-5ddb-4114-800e-a8421aeedc0a -- /bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > 
/mnt/volume1/test-file' Jan 11 19:34:53.370: INFO: stderr: "" Jan 11 19:34:53.370: INFO: stdout: "" Jan 11 19:34:53.370: INFO: podRWCmdExec out: "" err: STEP: Reading in pod1 Jan 11 19:34:53.370: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 security-context-cc48bf40-13ac-4c04-a6ba-9091c2e61ee7 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:34:54.690: INFO: stderr: "" Jan 11 19:34:54.690: INFO: stdout: "/dev/loop0\n" Jan 11 19:34:54.690: INFO: podRWCmdExec out: "/dev/loop0\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-cc48bf40-13ac-4c04-a6ba-9091c2e61ee7 in namespace persistent-local-volumes-test-6552 STEP: Deleting pod2 STEP: Deleting pod security-context-6e372eeb-5ddb-4114-800e-a8421aeedc0a in namespace persistent-local-volumes-test-6552 [AfterEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:34:54.872: INFO: Deleting PersistentVolumeClaim "pvc-rs9v5" Jan 11 19:34:54.962: INFO: Deleting PersistentVolume "local-pvrtrjt" Jan 11 19:34:55.053: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3230e89e-9e5f-43d4-a9dc-9bcec87ac68d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:34:56.350: INFO: stderr: "" Jan 11 19:34:56.350: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-3230e89e-9e5f-43d4-a9dc-9bcec87ac68d/file Jan 11 19:34:56.350: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 19:34:57.741: INFO: stderr: "" Jan 11 19:34:57.741: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-3230e89e-9e5f-43d4-a9dc-9bcec87ac68d Jan 11 19:34:57.741: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6552 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3230e89e-9e5f-43d4-a9dc-9bcec87ac68d' Jan 11 19:34:59.131: INFO: stderr: "" Jan 11 19:34:59.131: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:34:59.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6552" for this suite. 
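The blockfswithoutformat variant above differs from the directory-backed ones only in how the node-side path is produced: a file is turned into a loop device and the local PV points at the unformatted block device. The node-side commands, condensed from the run above; the directory is a placeholder, and the suite again drives all of this through its hostexec pod.

# Create the backing file and attach it to the first free loop device.
mkdir -p /tmp/local-block
dd if=/dev/zero of=/tmp/local-block/file bs=4096 count=5120
losetup -f /tmp/local-block/file

# Recover the device path (the suite greps the losetup listing; -j filters by backing file).
LOOP_DEV=$(losetup -j /tmp/local-block/file | cut -d: -f1)
echo "local PV will use ${LOOP_DEV}"

# Teardown mirrors the AfterEach steps above: detach the device, remove the directory.
losetup -d "${LOOP_DEV}"
rm -r /tmp/local-block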
Jan 11 19:35:05.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:08.900: INFO: namespace persistent-local-volumes-test-6552 deletion completed in 9.58599277s • [SLOW TEST:37.020 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:48.205: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2888 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:34:51.609: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2888 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5249c92d-b203-442d-9bd6-c8d426ac4a20' Jan 11 19:34:52.980: INFO: stderr: "" Jan 11 19:34:52.980: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:34:52.980: INFO: Creating a PV followed by a PVC Jan 11 19:34:53.160: INFO: Waiting for PV local-pvgzzrn to bind to PVC pvc-4zs72 Jan 11 19:34:53.160: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-4zs72] to have phase Bound Jan 11 19:34:53.250: INFO: PersistentVolumeClaim pvc-4zs72 found and phase=Bound (89.575682ms) Jan 11 19:34:53.250: INFO: Waiting up to 3m0s for PersistentVolume local-pvgzzrn to have phase Bound Jan 11 19:34:53.339: INFO: PersistentVolume local-pvgzzrn found and phase=Bound (89.831679ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:34:55.969: INFO: pod 
"security-context-dc789957-e516-4b44-9c0b-078993dcfdcc" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:34:55.969: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2888 security-context-dc789957-e516-4b44-9c0b-078993dcfdcc -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:34:57.311: INFO: stderr: "" Jan 11 19:34:57.311: INFO: stdout: "" Jan 11 19:34:57.311: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jan 11 19:34:57.311: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2888 security-context-dc789957-e516-4b44-9c0b-078993dcfdcc -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:34:58.670: INFO: stderr: "" Jan 11 19:34:58.670: INFO: stdout: "test-file-content\n" Jan 11 19:34:58.670: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod1 Jan 11 19:34:58.670: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2888 security-context-dc789957-e516-4b44-9c0b-078993dcfdcc -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5249c92d-b203-442d-9bd6-c8d426ac4a20 > /mnt/volume1/test-file' Jan 11 19:34:59.874: INFO: stderr: "" Jan 11 19:34:59.874: INFO: stdout: "" Jan 11 19:34:59.874: INFO: podRWCmdExec out: "" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-dc789957-e516-4b44-9c0b-078993dcfdcc in namespace persistent-local-volumes-test-2888 [AfterEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:34:59.965: INFO: Deleting PersistentVolumeClaim "pvc-4zs72" Jan 11 19:35:00.056: INFO: Deleting PersistentVolume "local-pvgzzrn" STEP: Removing the test directory Jan 11 19:35:00.147: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2888 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5249c92d-b203-442d-9bd6-c8d426ac4a20' Jan 11 19:35:01.434: INFO: stderr: "" Jan 11 19:35:01.434: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:01.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"persistent-local-volumes-test-2888" for this suite. Jan 11 19:35:07.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:11.194: INFO: namespace persistent-local-volumes-test-2888 deletion completed in 9.577993671s • [SLOW TEST:22.989 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ SS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:56.186: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-2190 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:250 Jan 11 19:34:56.825: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:34:56.915: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpath-8k5p STEP: Checking for subpath error in container status Jan 11 19:35:01.186: INFO: Deleting pod "pod-subpath-test-hostpath-8k5p" in namespace "provisioning-2190" Jan 11 19:35:01.276: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-8k5p" to be fully deleted STEP: Deleting pod Jan 11 19:35:05.456: INFO: Deleting pod "pod-subpath-test-hostpath-8k5p" in namespace "provisioning-2190" Jan 11 19:35:05.545: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:05.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-2190" for this suite. 
Jan 11 19:35:11.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:15.203: INFO: namespace provisioning-2190 deletion completed in 9.567196307s • [SLOW TEST:19.017 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:250 ------------------------------ S ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:56.755: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6365 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should allow pods to hairpin back to themselves through services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:350 STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-6365 Jan 11 19:34:57.484: INFO: hairpin-test cluster ip: 100.104.2.229 STEP: creating a client/server pod STEP: waiting for the service to expose an endpoint STEP: waiting up to 3m0s for service hairpin-test in namespace services-6365 to expose endpoints map[hairpin:[8080]] Jan 11 19:35:00.111: INFO: successfully validated that service hairpin-test in namespace services-6365 exposes endpoints map[hairpin:[8080]] (2.53608868s elapsed) STEP: Checking if the pod can reach itself Jan 11 19:35:01.112: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-6365 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080' Jan 11 19:35:02.389: INFO: stderr: "+ nc -zv -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n" Jan 11 19:35:02.389: INFO: stdout: "" Jan 11 19:35:02.390: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-6365 hairpin -- /bin/sh -x -c nc -zv -t -w 2 100.104.2.229 8080' Jan 11 19:35:03.679: INFO: stderr: "+ nc -zv -t -w 2 100.104.2.229 8080\nConnection to 100.104.2.229 8080 port [tcp/http-alt] succeeded!\n" Jan 11 19:35:03.679: INFO: stdout: "" 
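The hairpin spec above checks that a pod can reach itself through the ClusterIP service that fronts it, by service name and by cluster IP. A rough manual equivalent using a busybox httpd as the server and wget as the probe; the names, image, and ports are placeholders, and the suite probes with nc on port 8080 instead.

# Run a pod that serves HTTP and expose it behind a ClusterIP service.
kubectl run hairpin --image=busybox:1.31 --restart=Never -- \
  sh -c 'mkdir /www && echo ok > /www/index.html && httpd -f -p 8080 -h /www'
kubectl expose pod hairpin --name=hairpin-test --port=8080 --target-port=8080

# Wait until the pod shows up as an endpoint of its own service...
kubectl get endpoints hairpin-test

# ...then probe the service from inside that same pod (the hairpin case).
kubectl exec hairpin -- wget -qO- http://hairpin-test:8080/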
[AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:03.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6365" for this suite. Jan 11 19:35:16.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:19.351: INFO: namespace services-6365 deletion completed in 15.58053228s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:22.596 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should allow pods to hairpin back to themselves through services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:350 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:07.935: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1739 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 19:35:08.664: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce057a02-9103-41ff-87b5-a8e513d7c30a" in namespace "downward-api-1739" to be "success or failure" Jan 11 19:35:08.754: INFO: Pod "downwardapi-volume-ce057a02-9103-41ff-87b5-a8e513d7c30a": Phase="Pending", Reason="", readiness=false. Elapsed: 89.312513ms Jan 11 19:35:10.843: INFO: Pod "downwardapi-volume-ce057a02-9103-41ff-87b5-a8e513d7c30a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.178946121s STEP: Saw pod success Jan 11 19:35:10.843: INFO: Pod "downwardapi-volume-ce057a02-9103-41ff-87b5-a8e513d7c30a" satisfied condition "success or failure" Jan 11 19:35:10.933: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-ce057a02-9103-41ff-87b5-a8e513d7c30a container client-container: STEP: delete the pod Jan 11 19:35:11.125: INFO: Waiting for pod downwardapi-volume-ce057a02-9103-41ff-87b5-a8e513d7c30a to disappear Jan 11 19:35:11.214: INFO: Pod downwardapi-volume-ce057a02-9103-41ff-87b5-a8e513d7c30a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:11.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1739" for this suite. Jan 11 19:35:17.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:20.886: INFO: namespace downward-api-1739 deletion completed in 9.581666003s • [SLOW TEST:12.952 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:08.912: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8506 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:106 STEP: Creating a pod to test downward API volume plugin Jan 11 19:35:09.942: INFO: Waiting up to 5m0s for pod "metadata-volume-6a291f19-0add-4ced-a103-c909aa0f0f7a" in namespace "downward-api-8506" to be "success or failure" Jan 11 19:35:10.031: INFO: Pod "metadata-volume-6a291f19-0add-4ced-a103-c909aa0f0f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 89.14791ms Jan 11 19:35:12.121: INFO: Pod "metadata-volume-6a291f19-0add-4ced-a103-c909aa0f0f7a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179051872s STEP: Saw pod success Jan 11 19:35:12.121: INFO: Pod "metadata-volume-6a291f19-0add-4ced-a103-c909aa0f0f7a" satisfied condition "success or failure" Jan 11 19:35:12.211: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod metadata-volume-6a291f19-0add-4ced-a103-c909aa0f0f7a container client-container: STEP: delete the pod Jan 11 19:35:12.444: INFO: Waiting for pod metadata-volume-6a291f19-0add-4ced-a103-c909aa0f0f7a to disappear Jan 11 19:35:12.533: INFO: Pod metadata-volume-6a291f19-0add-4ced-a103-c909aa0f0f7a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:12.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8506" for this suite. Jan 11 19:35:18.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:22.217: INFO: namespace downward-api-8506 deletion completed in 9.592153778s • [SLOW TEST:13.304 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:106 ------------------------------ S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:31.910: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning Jan 11 19:34:33.175: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Jan 11 19:34:33.530: INFO: Found ClusterRoles; assuming RBAC is enabled. 
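The downward API volume specs interleaved above ("set mode on item file" and "podname as non-root with fsgroup and defaultMode") both follow the same pattern: project pod metadata into a file with explicit permissions and read it back from a completed pod's log. A pared-down illustration; the pod name, image, UID/GID, and mode values are placeholders, chosen so the non-root user can still read the file through its fsGroup.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "ls -lL /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0440
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0440
EOF

# Once the pod is Succeeded, the log shows the file mode and the projected pod name.
kubectl logs downwardapi-demo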
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-888 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath with backstepping is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261 STEP: deploying csi-hostpath driver Jan 11 19:34:34.091: INFO: creating *v1.ServiceAccount: provisioning-888/csi-attacher Jan 11 19:34:34.180: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-888 Jan 11 19:34:34.181: INFO: Define cluster role external-attacher-runner-provisioning-888 Jan 11 19:34:34.270: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-888 Jan 11 19:34:34.360: INFO: creating *v1.Role: provisioning-888/external-attacher-cfg-provisioning-888 Jan 11 19:34:34.449: INFO: creating *v1.RoleBinding: provisioning-888/csi-attacher-role-cfg Jan 11 19:34:34.540: INFO: creating *v1.ServiceAccount: provisioning-888/csi-provisioner Jan 11 19:34:34.630: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-888 Jan 11 19:34:34.630: INFO: Define cluster role external-provisioner-runner-provisioning-888 Jan 11 19:34:34.720: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-888 Jan 11 19:34:34.809: INFO: creating *v1.Role: provisioning-888/external-provisioner-cfg-provisioning-888 Jan 11 19:34:34.898: INFO: creating *v1.RoleBinding: provisioning-888/csi-provisioner-role-cfg Jan 11 19:34:34.987: INFO: creating *v1.ServiceAccount: provisioning-888/csi-snapshotter Jan 11 19:34:35.076: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-888 Jan 11 19:34:35.076: INFO: Define cluster role external-snapshotter-runner-provisioning-888 Jan 11 19:34:35.166: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-888 Jan 11 19:34:35.255: INFO: creating *v1.Role: provisioning-888/external-snapshotter-leaderelection-provisioning-888 Jan 11 19:34:35.344: INFO: creating *v1.RoleBinding: provisioning-888/external-snapshotter-leaderelection Jan 11 19:34:35.434: INFO: creating *v1.ServiceAccount: provisioning-888/csi-resizer Jan 11 19:34:35.523: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-888 Jan 11 19:34:35.523: INFO: Define cluster role external-resizer-runner-provisioning-888 Jan 11 19:34:35.613: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-888 Jan 11 19:34:35.702: INFO: creating *v1.Role: provisioning-888/external-resizer-cfg-provisioning-888 Jan 11 19:34:35.791: INFO: creating *v1.RoleBinding: provisioning-888/csi-resizer-role-cfg Jan 11 19:34:35.881: INFO: creating *v1.Service: provisioning-888/csi-hostpath-attacher Jan 11 19:34:35.974: INFO: creating *v1.StatefulSet: provisioning-888/csi-hostpath-attacher Jan 11 19:34:36.064: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-888 Jan 11 19:34:36.153: INFO: creating *v1.Service: provisioning-888/csi-hostpathplugin Jan 11 19:34:36.247: INFO: creating *v1.StatefulSet: provisioning-888/csi-hostpathplugin Jan 11 19:34:36.337: INFO: creating *v1.Service: provisioning-888/csi-hostpath-provisioner Jan 11 19:34:36.430: INFO: creating *v1.StatefulSet: provisioning-888/csi-hostpath-provisioner Jan 11 19:34:36.519: INFO: creating *v1.Service: provisioning-888/csi-hostpath-resizer Jan 11 19:34:36.612: INFO: creating *v1.StatefulSet: provisioning-888/csi-hostpath-resizer Jan 11 
19:34:36.702: INFO: creating *v1.Service: provisioning-888/csi-snapshotter Jan 11 19:34:36.795: INFO: creating *v1.StatefulSet: provisioning-888/csi-snapshotter Jan 11 19:34:36.884: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-888 Jan 11 19:34:36.974: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:34:36.974: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-888-csi-hostpath-provisioning-888-scv2275 STEP: creating a claim Jan 11 19:34:37.063: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:34:37.154: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath745w7] to have phase Bound Jan 11 19:34:37.243: INFO: PersistentVolumeClaim csi-hostpath745w7 found but phase is Pending instead of Bound. Jan 11 19:34:39.332: INFO: PersistentVolumeClaim csi-hostpath745w7 found but phase is Pending instead of Bound. Jan 11 19:34:41.421: INFO: PersistentVolumeClaim csi-hostpath745w7 found but phase is Pending instead of Bound. Jan 11 19:34:43.511: INFO: PersistentVolumeClaim csi-hostpath745w7 found but phase is Pending instead of Bound. Jan 11 19:34:45.600: INFO: PersistentVolumeClaim csi-hostpath745w7 found but phase is Pending instead of Bound. Jan 11 19:34:47.689: INFO: PersistentVolumeClaim csi-hostpath745w7 found and phase=Bound (10.535049003s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-tbjq STEP: Checking for subpath error in container status Jan 11 19:34:58.136: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-tbjq" in namespace "provisioning-888" Jan 11 19:34:58.226: INFO: Wait up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-tbjq" to be fully deleted STEP: Deleting pod Jan 11 19:35:08.404: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-tbjq" in namespace "provisioning-888" STEP: Deleting pvc Jan 11 19:35:08.493: INFO: Deleting PersistentVolumeClaim "csi-hostpath745w7" Jan 11 19:35:08.587: INFO: Waiting up to 5m0s for PersistentVolume pvc-48b5be36-56cf-481a-8301-85c5e478522b to get deleted Jan 11 19:35:08.676: INFO: PersistentVolume pvc-48b5be36-56cf-481a-8301-85c5e478522b was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:35:08.766: INFO: deleting *v1.ServiceAccount: provisioning-888/csi-attacher Jan 11 19:35:08.857: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-888 Jan 11 19:35:08.947: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-888 Jan 11 19:35:09.038: INFO: deleting *v1.Role: provisioning-888/external-attacher-cfg-provisioning-888 Jan 11 19:35:09.128: INFO: deleting *v1.RoleBinding: provisioning-888/csi-attacher-role-cfg Jan 11 19:35:09.218: INFO: deleting *v1.ServiceAccount: provisioning-888/csi-provisioner Jan 11 19:35:09.309: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-888 Jan 11 19:35:09.552: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-888 Jan 11 19:35:09.642: INFO: deleting *v1.Role: provisioning-888/external-provisioner-cfg-provisioning-888 Jan 11 19:35:09.732: INFO: deleting *v1.RoleBinding: provisioning-888/csi-provisioner-role-cfg Jan 11 19:35:09.823: INFO: deleting *v1.ServiceAccount: provisioning-888/csi-snapshotter Jan 11 19:35:09.913: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-888 Jan 11 19:35:10.003: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-888 Jan 11 19:35:10.093: INFO: deleting *v1.Role: 
provisioning-888/external-snapshotter-leaderelection-provisioning-888 Jan 11 19:35:10.183: INFO: deleting *v1.RoleBinding: provisioning-888/external-snapshotter-leaderelection Jan 11 19:35:10.274: INFO: deleting *v1.ServiceAccount: provisioning-888/csi-resizer Jan 11 19:35:10.364: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-888 Jan 11 19:35:10.455: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-888 Jan 11 19:35:10.546: INFO: deleting *v1.Role: provisioning-888/external-resizer-cfg-provisioning-888 Jan 11 19:35:10.637: INFO: deleting *v1.RoleBinding: provisioning-888/csi-resizer-role-cfg Jan 11 19:35:10.727: INFO: deleting *v1.Service: provisioning-888/csi-hostpath-attacher Jan 11 19:35:10.822: INFO: deleting *v1.StatefulSet: provisioning-888/csi-hostpath-attacher Jan 11 19:35:10.912: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-888 Jan 11 19:35:11.002: INFO: deleting *v1.Service: provisioning-888/csi-hostpathplugin Jan 11 19:35:11.097: INFO: deleting *v1.StatefulSet: provisioning-888/csi-hostpathplugin Jan 11 19:35:11.187: INFO: deleting *v1.Service: provisioning-888/csi-hostpath-provisioner Jan 11 19:35:11.282: INFO: deleting *v1.StatefulSet: provisioning-888/csi-hostpath-provisioner Jan 11 19:35:11.372: INFO: deleting *v1.Service: provisioning-888/csi-hostpath-resizer Jan 11 19:35:11.468: INFO: deleting *v1.StatefulSet: provisioning-888/csi-hostpath-resizer Jan 11 19:35:11.559: INFO: deleting *v1.Service: provisioning-888/csi-snapshotter Jan 11 19:35:11.654: INFO: deleting *v1.StatefulSet: provisioning-888/csi-snapshotter Jan 11 19:35:11.744: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-888 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:11.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled WARNING: pod log: csi-hostpathplugin-0/hostpath: context canceled STEP: Destroying namespace "provisioning-888" for this suite. 
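For reference, a minimal client-go sketch of the kind of wait loop this spec logs above (polling PersistentVolumeClaim csi-hostpath745w7 until it reports phase Bound). This is not the framework's own code: the kubeconfig path, namespace, claim name and 5m timeout are taken from the log; the poll interval and everything else are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kind of kubeconfig the suite logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, claim := "provisioning-888", "csi-hostpath745w7" // names taken from the log above

	// Poll every 2s, for up to 5m, until the claim reports phase Bound --
	// the same shape of wait the framework logs as "Waiting up to 5m0s ... to have phase Bound".
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolumeClaim %s phase: %s\n", claim, pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
	if err != nil {
		panic(err)
	}
}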
Jan 11 19:35:24.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:27.482: INFO: namespace provisioning-888 deletion completed in 15.556986939s • [SLOW TEST:55.572 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if subpath with backstepping is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:31.886: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename disruption Jan 11 19:34:32.708: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Jan 11 19:34:32.979: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-8708 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52 [It] evictions: enough pods, replicaSet, percentage => should allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 STEP: Waiting for the pdb to be processed STEP: locating a running pod STEP: Waiting for all pods to be running Jan 11 19:34:39.890: INFO: running pods: 6 < 10 [AfterEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:34:42.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8708" for this suite. 
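A rough sketch of the object behind the DisruptionController spec above: a PodDisruptionBudget with a percentage minAvailable, so an eviction is only allowed once enough replicas are running. Not the suite's code; it assumes a kubernetes.Interface built as in the earlier sketch, uses the current policy/v1 API rather than the policy/v1beta1 the v1.16 run used, and the PDB name, selector and 60% threshold are placeholders.

package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createPercentagePDB creates a PodDisruptionBudget that keeps at least 60%
// of the pods matched by the selector available; the eviction API will then
// admit a single eviction once enough replicas are up.
func createPercentagePDB(ctx context.Context, cs kubernetes.Interface, ns string) error {
	minAvailable := intstr.FromString("60%") // placeholder percentage
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: ns},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
		},
	}
	_, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{})
	return err
}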
Jan 11 19:35:30.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:33.755: INFO: namespace disruption-8708 deletion completed in 51.587238425s • [SLOW TEST:61.869 seconds] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: enough pods, replicaSet, percentage => should allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:19.354: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1906 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 19:35:20.083: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed9ca1ca-754d-4806-80cf-19eb3d7adacf" in namespace "downward-api-1906" to be "success or failure" Jan 11 19:35:20.173: INFO: Pod "downwardapi-volume-ed9ca1ca-754d-4806-80cf-19eb3d7adacf": Phase="Pending", Reason="", readiness=false. Elapsed: 89.356698ms Jan 11 19:35:22.263: INFO: Pod "downwardapi-volume-ed9ca1ca-754d-4806-80cf-19eb3d7adacf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179196006s STEP: Saw pod success Jan 11 19:35:22.263: INFO: Pod "downwardapi-volume-ed9ca1ca-754d-4806-80cf-19eb3d7adacf" satisfied condition "success or failure" Jan 11 19:35:22.352: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod downwardapi-volume-ed9ca1ca-754d-4806-80cf-19eb3d7adacf container client-container: STEP: delete the pod Jan 11 19:35:22.542: INFO: Waiting for pod downwardapi-volume-ed9ca1ca-754d-4806-80cf-19eb3d7adacf to disappear Jan 11 19:35:22.631: INFO: Pod downwardapi-volume-ed9ca1ca-754d-4806-80cf-19eb3d7adacf no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:22.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1906" for this suite. 
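For the Downward API spec above, a sketch of the pod shape it exercises: the container sets no memory limit, and a downwardAPI volume surfaces limits.memory, which then resolves to the node-allocatable default. Not the framework's pod; the image, command, mount path and file name are illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryLimitPod builds a pod whose container has no memory limit;
// the downward API volume exposes the effective limit (node allocatable) in
// the "memory_limit" file.
func downwardAPIMemoryLimitPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
}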
Jan 11 19:35:30.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:34.290: INFO: namespace downward-api-1906 deletion completed in 11.568255915s • [SLOW TEST:14.936 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:22.220: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5038 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:394 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:26.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5038" for this suite. 
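The "pull from private registry with secret" steps above boil down to a dockerconfigjson Secret plus a pod that references it via imagePullSecrets. A sketch of those two objects, with placeholder registry, credentials and image (not the suite's actual objects):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// privateRegistryObjects returns an image pull secret and a pod that uses it.
// dockerConfigJSON is the raw ~/.docker/config.json-style payload for the
// private registry (placeholder).
func privateRegistryObjects(ns string, dockerConfigJSON []byte) (*corev1.Secret, *corev1.Pod) {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-secret", Namespace: ns},
		Type:       corev1.SecretTypeDockerConfigJson,
		Data:       map[string][]byte{corev1.DockerConfigJsonKey: dockerConfigJSON},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			ImagePullSecrets: []corev1.LocalObjectReference{{Name: secret.Name}},
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "my-registry.example.com/private/image:tag", // placeholder
			}},
		},
	}
	return secret, pod
}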
Jan 11 19:35:33.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:36.357: INFO: namespace container-runtime-5038 deletion completed in 9.588608882s • [SLOW TEST:14.137 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 when running a container with a new image /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:252 should be able to pull from private registry with secret [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:394 ------------------------------ SSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:20.920: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-7088 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating pod Jan 11 19:35:24.010: INFO: Pod pod-hostip-d556ebce-ca77-4bb1-8852-8276e8192507 has hostIP: 10.250.27.25 [AfterEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:24.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7088" for this suite. 
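The "should get a host IP" spec above reads status.hostIP once the kubelet has reported it. A small sketch of that check, assuming a kubernetes.Interface built as in the first sketch; interval and timeout are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForHostIP polls a pod until the kubelet has filled in status.hostIP.
func waitForHostIP(ctx context.Context, cs kubernetes.Interface, ns, name string) (string, error) {
	var hostIP string
	err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		hostIP = pod.Status.HostIP
		return hostIP != "", nil
	})
	if err != nil {
		return "", fmt.Errorf("pod %s/%s never reported a host IP: %w", ns, name, err)
	}
	return hostIP, nil
}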
Jan 11 19:35:36.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:39.688: INFO: namespace pods-7088 deletion completed in 15.587601862s • [SLOW TEST:18.769 seconds] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:11.200: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4710 STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:29.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4710" for this suite. Jan 11 19:35:37.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:40.802: INFO: namespace resourcequota-4710 deletion completed in 11.586654965s • [SLOW TEST:29.602 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:15.206: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5862 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:250 Jan 11 19:35:15.852: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:35:16.034: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5862" in namespace "provisioning-5862" to be "success or failure" Jan 11 19:35:16.123: INFO: Pod "hostpath-symlink-prep-provisioning-5862": Phase="Pending", Reason="", readiness=false. Elapsed: 89.091564ms Jan 11 19:35:18.213: INFO: Pod "hostpath-symlink-prep-provisioning-5862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.17911906s STEP: Saw pod success Jan 11 19:35:18.213: INFO: Pod "hostpath-symlink-prep-provisioning-5862" satisfied condition "success or failure" Jan 11 19:35:18.213: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5862" in namespace "provisioning-5862" Jan 11 19:35:18.305: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5862" to be fully deleted Jan 11 19:35:18.395: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-6dgn STEP: Checking for subpath error in container status Jan 11 19:35:22.669: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-6dgn" in namespace "provisioning-5862" Jan 11 19:35:22.759: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpathsymlink-6dgn" to be fully deleted STEP: Deleting pod Jan 11 19:35:28.939: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-6dgn" in namespace "provisioning-5862" Jan 11 19:35:29.118: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5862" in namespace "provisioning-5862" to be "success or failure" Jan 11 19:35:29.208: INFO: Pod "hostpath-symlink-prep-provisioning-5862": Phase="Pending", Reason="", readiness=false. Elapsed: 89.878831ms Jan 11 19:35:31.298: INFO: Pod "hostpath-symlink-prep-provisioning-5862": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179713025s STEP: Saw pod success Jan 11 19:35:31.298: INFO: Pod "hostpath-symlink-prep-provisioning-5862" satisfied condition "success or failure" Jan 11 19:35:31.298: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5862" in namespace "provisioning-5862" Jan 11 19:35:31.390: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5862" to be fully deleted Jan 11 19:35:31.479: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:31.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-5862" for this suite. Jan 11 19:35:37.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:41.146: INFO: namespace provisioning-5862 deletion completed in 9.576263573s • [SLOW TEST:25.939 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:250 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:33.758: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename replication-controller STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-7839 STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating replication controller my-hostname-basic-5832ca42-5e25-4a65-a0a9-cb99210f13ac Jan 11 19:35:34.580: INFO: Pod name my-hostname-basic-5832ca42-5e25-4a65-a0a9-cb99210f13ac: Found 1 pods out of 1 Jan 11 19:35:34.580: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-5832ca42-5e25-4a65-a0a9-cb99210f13ac" are running Jan 11 19:35:36.760: INFO: Pod "my-hostname-basic-5832ca42-5e25-4a65-a0a9-cb99210f13ac-fsgf9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 19:35:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 19:35:34 +0000 UTC Reason:ContainersNotReady Message:containers 
with unready status: [my-hostname-basic-5832ca42-5e25-4a65-a0a9-cb99210f13ac]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 19:35:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5832ca42-5e25-4a65-a0a9-cb99210f13ac]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 19:35:34 +0000 UTC Reason: Message:}]) Jan 11 19:35:36.760: INFO: Trying to dial the pod Jan 11 19:35:42.045: INFO: Controller my-hostname-basic-5832ca42-5e25-4a65-a0a9-cb99210f13ac: Got expected result from replica 1 [my-hostname-basic-5832ca42-5e25-4a65-a0a9-cb99210f13ac-fsgf9]: "my-hostname-basic-5832ca42-5e25-4a65-a0a9-cb99210f13ac-fsgf9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:42.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7839" for this suite. Jan 11 19:35:48.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:51.731: INFO: namespace replication-controller-7839 deletion completed in 9.594308443s • [SLOW TEST:17.973 seconds] [sig-apps] ReplicationController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:40.812: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-2631 STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 19:35:43.905: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:44.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2631" for this suite. 
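For the termination-message spec above, a sketch of the container shape it relies on: the container succeeds and writes nothing to the termination log, and because FallbackToLogsOnError only falls back to logs on failure, the termination message stays empty. Not the suite's pod; image and command are illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fallbackTerminationMessagePod builds a pod whose container exits 0 without
// writing to /dev/termination-log; with FallbackToLogsOnError the message is
// expected to remain empty.
func fallbackTerminationMessagePod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "termination-message-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "termination-message-container",
				Image:                    "docker.io/library/busybox:1.29",
				Command:                  []string{"sh", "-c", "true"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}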
Jan 11 19:35:50.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:35:53.758: INFO: namespace container-runtime-2631 deletion completed in 9.580871291s • [SLOW TEST:12.947 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 on terminated container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:41.166: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8100 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating the pod Jan 11 19:35:45.165: INFO: Successfully updated pod "annotationupdate738607b6-7ce5-40e2-a724-b47c635e3a2c" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:47.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8100" for this suite. 
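The projected downwardAPI spec above works by projecting the pod's own metadata.annotations into a file; when the annotations are updated, the kubelet rewrites that file. A sketch of such a pod (image, command, paths and the initial annotation are illustrative, not the framework's definition):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationsProjectedVolumePod projects metadata.annotations into
// /etc/podinfo/annotations so that annotation updates become visible inside
// the running container.
func annotationsProjectedVolumePod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "annotationupdate-",
			Namespace:    ns,
			Annotations:  map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}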
Jan 11 19:35:59.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:03.023: INFO: namespace projected-8100 deletion completed in 15.576170481s • [SLOW TEST:21.857 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:39.694: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename limitrange STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in limitrange-3295 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/limit_range.go:56 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jan 11 19:35:40.532: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jan 11 19:35:40.711: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 11 19:35:40.711: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jan 11 19:35:40.892: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 11 19:35:40.892: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jan 11 19:35:41.072: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m 
DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jan 11 19:35:41.072: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jan 11 19:35:48.792: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:48.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3295" for this suite. Jan 11 19:36:01.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:04.599: INFO: namespace limitrange-3295 deletion completed in 15.623126259s • [SLOW TEST:24.904 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 should create a LimitRange with defaults and ensure pod has those defaults applied. /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/limit_range.go:56 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:27.493: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-230 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:371 STEP: creating the pod from Jan 11 19:35:28.949: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-230' Jan 11 19:35:29.926: INFO: stderr: "" Jan 11 19:35:29.926: INFO: stdout: "pod/httpd created\n" Jan 11 19:35:29.926: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Jan 11 19:35:29.926: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-230" to be "running and ready" Jan 11 19:35:30.016: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 89.058346ms Jan 11 19:35:32.105: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.178493451s Jan 11 19:35:34.194: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.268003999s Jan 11 19:35:36.284: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.357762708s Jan 11 19:35:38.374: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.447189196s Jan 11 19:35:40.463: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.536701563s Jan 11 19:35:42.553: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.626022563s Jan 11 19:35:42.553: INFO: Pod "httpd" satisfied condition "running and ready" Jan 11 19:35:42.553: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] should contain last line of the log /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:581 STEP: executing a command with run Jan 11 19:35:42.553: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run run-log-test --generator=run-pod/v1 --image=docker.io/library/busybox:1.29 --restart=OnFailure --namespace=kubectl-230 -- sh -c sleep 10; seq 100 | while read i; do echo $i; sleep 0.01; done; echo EOF' Jan 11 19:35:43.018: INFO: stderr: "" Jan 11 19:35:43.018: INFO: stdout: "pod/run-log-test created\n" Jan 11 19:35:43.018: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [run-log-test] Jan 11 19:35:43.018: INFO: Waiting up to 5m0s for pod "run-log-test" in namespace "kubectl-230" to be "running and ready, or succeeded" Jan 11 19:35:43.107: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.164164ms Jan 11 19:35:45.197: INFO: Pod "run-log-test": Phase="Running", Reason="", readiness=true. Elapsed: 2.178669528s Jan 11 19:35:45.197: INFO: Pod "run-log-test" satisfied condition "running and ready, or succeeded" Jan 11 19:35:45.197: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [run-log-test] Jan 11 19:35:45.197: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-230 logs -f run-log-test' Jan 11 19:35:56.214: INFO: stderr: "" Jan 11 19:35:56.215: INFO: stdout: "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\nEOF\n" [AfterEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:377 STEP: using delete to clean up resources Jan 11 19:35:56.215: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-230' Jan 11 19:35:56.728: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:35:56.728: INFO: stdout: "pod \"httpd\" force deleted\n" Jan 11 19:35:56.728: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=httpd --no-headers --namespace=kubectl-230' Jan 11 19:35:57.270: INFO: stderr: "No resources found in kubectl-230 namespace.\n" Jan 11 19:35:57.270: INFO: stdout: "" Jan 11 19:35:57.270: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=httpd --namespace=kubectl-230 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 19:35:57.724: INFO: stderr: "" Jan 11 19:35:57.724: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:57.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-230" for this suite. 
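The `kubectl logs -f run-log-test` call above can also be done programmatically. A sketch of the equivalent client-go log stream, assuming a kubernetes.Interface built as in the first sketch; the last line of the stream is the "EOF" marker the spec looks for:

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// followPodLogs streams a pod's log to stdout until the container exits,
// roughly what `kubectl logs -f` does.
func followPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Follow: true})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	_, err = io.Copy(os.Stdout, stream)
	return err
}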
Jan 11 19:36:04.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:07.387: INFO: namespace kubectl-230 deletion completed in 9.57271878s • [SLOW TEST:39.894 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369 should contain last line of the log /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:581 ------------------------------ SSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:53.763: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-7600 STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing directory /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:188 Jan 11 19:35:54.402: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:35:54.493: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpath-7bp4 STEP: Creating a pod to test subpath Jan 11 19:35:54.585: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpath-7bp4" in namespace "provisioning-7600" to be "success or failure" Jan 11 19:35:54.675: INFO: Pod "pod-subpath-test-hostpath-7bp4": Phase="Pending", Reason="", readiness=false. Elapsed: 89.48272ms Jan 11 19:35:56.765: INFO: Pod "pod-subpath-test-hostpath-7bp4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179572326s Jan 11 19:35:58.855: INFO: Pod "pod-subpath-test-hostpath-7bp4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26966419s Jan 11 19:36:00.945: INFO: Pod "pod-subpath-test-hostpath-7bp4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.359991833s STEP: Saw pod success Jan 11 19:36:00.945: INFO: Pod "pod-subpath-test-hostpath-7bp4" satisfied condition "success or failure" Jan 11 19:36:01.035: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpath-7bp4 container test-container-volume-hostpath-7bp4: STEP: delete the pod Jan 11 19:36:01.228: INFO: Waiting for pod pod-subpath-test-hostpath-7bp4 to disappear Jan 11 19:36:01.317: INFO: Pod pod-subpath-test-hostpath-7bp4 no longer exists STEP: Deleting pod pod-subpath-test-hostpath-7bp4 Jan 11 19:36:01.317: INFO: Deleting pod "pod-subpath-test-hostpath-7bp4" in namespace "provisioning-7600" STEP: Deleting pod Jan 11 19:36:01.407: INFO: Deleting pod "pod-subpath-test-hostpath-7bp4" in namespace "provisioning-7600" Jan 11 19:36:01.498: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:01.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-7600" for this suite. Jan 11 19:36:07.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:11.179: INFO: namespace provisioning-7600 deletion completed in 9.590162893s • [SLOW TEST:17.416 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support existing directory /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:188 ------------------------------ SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:36.364: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9730 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 19:35:38.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368138, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368138, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368138, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368138, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 19:35:41.539: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:35:53.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9730" for this suite. Jan 11 19:36:01.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:05.173: INFO: namespace webhook-9730 deletion completed in 11.601083846s STEP: Destroying namespace "webhook-9730-markers" for this suite. 
Jan 11 19:36:11.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:14.755: INFO: namespace webhook-9730-markers deletion completed in 9.582144798s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:38.753 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:03.046: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-3062 STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0111 19:36:05.752380 8614 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:36:05.752: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:05.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3062" for this suite. 
Jan 11 19:36:12.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:15.409: INFO: namespace gc-3062 deletion completed in 9.566785496s • [SLOW TEST:12.363 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:34.318: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4317 STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0111 19:36:06.454596 8607 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:36:06.454: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:06.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4317" for this suite. 
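The two garbage-collector specs above differ only in the deletion propagation policy: without orphaning the owned ReplicaSet and pods are collected, with Orphan they are left behind. A sketch of the corresponding Deployment delete call, assuming a kubernetes.Interface built as in the first sketch:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDeployment deletes a Deployment with an explicit propagation policy:
// Background lets the garbage collector remove the owned ReplicaSet and pods,
// while Orphan leaves them in place.
func deleteDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, orphan bool) error {
	policy := metav1.DeletePropagationBackground
	if orphan {
		policy = metav1.DeletePropagationOrphan
	}
	return cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}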
Jan 11 19:36:12.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:16.114: INFO: namespace gc-4317 deletion completed in 9.569491915s • [SLOW TEST:41.796 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:04.606: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9168 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-2afa3d19-52eb-4970-82c0-d90b7292143a STEP: Creating a pod to test consume secrets Jan 11 19:36:05.438: INFO: Waiting up to 5m0s for pod "pod-secrets-8b5917e3-10e5-4217-8b88-0b1828401cc8" in namespace "secrets-9168" to be "success or failure" Jan 11 19:36:05.528: INFO: Pod "pod-secrets-8b5917e3-10e5-4217-8b88-0b1828401cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 89.804467ms Jan 11 19:36:07.617: INFO: Pod "pod-secrets-8b5917e3-10e5-4217-8b88-0b1828401cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179637358s Jan 11 19:36:09.707: INFO: Pod "pod-secrets-8b5917e3-10e5-4217-8b88-0b1828401cc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.26939838s STEP: Saw pod success Jan 11 19:36:09.707: INFO: Pod "pod-secrets-8b5917e3-10e5-4217-8b88-0b1828401cc8" satisfied condition "success or failure" Jan 11 19:36:09.797: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-secrets-8b5917e3-10e5-4217-8b88-0b1828401cc8 container secret-volume-test: STEP: delete the pod Jan 11 19:36:09.987: INFO: Waiting for pod pod-secrets-8b5917e3-10e5-4217-8b88-0b1828401cc8 to disappear Jan 11 19:36:10.077: INFO: Pod pod-secrets-8b5917e3-10e5-4217-8b88-0b1828401cc8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:10.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9168" for this suite. 
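The Secrets spec above mounts one secret through two separate volumes in the same pod. A sketch of that pod shape; the secret name, image, command and mount paths are illustrative rather than the suite's exact values:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretInTwoVolumesPod mounts the same secret via two volumes so the test
// container can read it from both paths.
func secretInTwoVolumesPod(ns, secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
				{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
			},
		},
	}
}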
Jan 11 19:36:16.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:19.750: INFO: namespace secrets-9168 deletion completed in 9.582308571s • [SLOW TEST:15.144 seconds] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:11.184: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-4557 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:316 Jan 11 19:36:11.943: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-45344a28-a68c-4b70-a959-e6f8ffeed2b7" in namespace "security-context-test-4557" to be "success or failure" Jan 11 19:36:12.033: INFO: Pod "alpine-nnp-nil-45344a28-a68c-4b70-a959-e6f8ffeed2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 89.597153ms Jan 11 19:36:14.124: INFO: Pod "alpine-nnp-nil-45344a28-a68c-4b70-a959-e6f8ffeed2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180260929s Jan 11 19:36:16.213: INFO: Pod "alpine-nnp-nil-45344a28-a68c-4b70-a959-e6f8ffeed2b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.27012972s Jan 11 19:36:16.213: INFO: Pod "alpine-nnp-nil-45344a28-a68c-4b70-a959-e6f8ffeed2b7" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:16.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4557" for this suite. 
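The spec above only logs the pod reaching Succeeded; what it exercises is a container that runs as a non-root UID while leaving allowPrivilegeEscalation unset. A minimal sketch of such a pod (image, UID and names are illustrative, not the e2e fixture):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-nil-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: alpine:3.10
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      runAsUser: 1000
      # allowPrivilegeEscalation is deliberately left unset here,
      # mirroring the "not explicitly set and uid != 0" case above.
EOF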
Jan 11 19:36:22.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:26.026: INFO: namespace security-context-test-4557 deletion completed in 9.582568247s • [SLOW TEST:14.841 seconds] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:316 ------------------------------ SSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:35:51.740: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubelet STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-8935 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:248 [BeforeEach] [k8s.io] [sig-node] Clean up pods on node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:269 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:316 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-89fb37d3-cf29-4a92-875a-f865f41aea70 in namespace kubelet-8935 I0111 19:35:52.836896 8625 runners.go:184] Created replication controller with name: cleanup20-89fb37d3-cf29-4a92-875a-f865f41aea70, namespace: kubelet-8935, replica count: 20 I0111 19:36:02.987394 8625 runners.go:184] cleanup20-89fb37d3-cf29-4a92-875a-f865f41aea70 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 19:36:03.987: INFO: Checking pods on node ip-10-250-27-25.ec2.internal via /runningpods endpoint Jan 11 19:36:03.987: INFO: Checking pods on node ip-10-250-7-77.ec2.internal via /runningpods endpoint Jan 11 19:36:04.114: INFO: Resource usage on node "ip-10-250-7-77.ec2.internal" is not ready yet Jan 11 19:36:04.114: INFO: Resource usage on node "ip-10-250-27-25.ec2.internal": container cpu(cores) memory_working_set(MB) memory_rss(MB) "kubelet" 0.112 120.17 151.94 "/" 1.189 1377.09 504.44 "runtime" 0.059 491.64 124.38 STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-89fb37d3-cf29-4a92-875a-f865f41aea70 in namespace kubelet-8935, will wait for the garbage collector to delete the pods Jan 11 19:36:04.457: INFO: Deleting ReplicationController cleanup20-89fb37d3-cf29-4a92-875a-f865f41aea70 took: 90.770328ms Jan 11 19:36:05.357: INFO: Terminating ReplicationController cleanup20-89fb37d3-cf29-4a92-875a-f865f41aea70 pods took: 900.311688ms Jan 11 19:36:19.158: INFO: Checking pods on node ip-10-250-7-77.ec2.internal via /runningpods endpoint Jan 11 19:36:19.158: INFO: Checking pods on node ip-10-250-27-25.ec2.internal via /runningpods endpoint Jan 11 19:36:19.263: INFO: Deleting 20 pods on 2 nodes completed in 1.105837162s after the RC was deleted Jan 11 19:36:19.263: INFO: CPU usage of containers on node "ip-10-250-27-25.ec2.internal" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 1.033 1.033 1.033 1.033 1.033 "runtime" 0.000 0.000 0.059 0.059 0.059 0.059 0.059 "kubelet" 0.000 0.000 0.112 0.178 0.178 0.178 0.178 CPU usage of containers on node "ip-10-250-7-77.ec2.internal" :container 5th% 20th% 50th% 70th% 90th% 95th% 99th% "/" 0.000 0.000 0.605 0.605 0.605 0.605 0.605 "runtime" 0.000 0.000 0.000 0.000 0.000 0.000 0.000 "kubelet" 0.000 0.000 0.000 0.000 0.000 0.000 0.000 [AfterEach] [k8s.io] [sig-node] Clean up pods on node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:301 STEP: removing the label kubelet_cleanup off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node ip-10-250-7-77.ec2.internal STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [k8s.io] [sig-node] kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:19.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-8935" for this suite. 
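The /runningpods checks above query a kubelet debug endpoint. Assuming that endpoint is reachable through the API server's node proxy (which is how the e2e framework reaches it), a rough equivalent from a workstation is:

NODE=ip-10-250-27-25.ec2.internal   # node name taken from the run above
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/runningpods/"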
Jan 11 19:36:26.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:29.568: INFO: namespace kubelet-8935 deletion completed in 9.591401126s • [SLOW TEST:37.828 seconds] [k8s.io] [sig-node] kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 [k8s.io] [sig-node] Clean up pods on node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:316 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:19.756: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume-provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-1196 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:259 [It] should create persistent volumes in the same zone as specified in allowedTopologies after a pod mounting the claims is started /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:1013 Jan 11 19:36:20.395: INFO: Skipping "Delayed binding EBS storage class test with AllowedTopologies": cloud providers is not [aws] Jan 11 19:36:20.395: INFO: Skipping "Delayed binding GCE PD storage class test with AllowedTopologies": cloud providers is not [gce gke] Jan 11 19:36:20.395: INFO: Skipping "Delayed binding EBS storage class test with AllowedTopologies": cloud providers is not [aws] Jan 11 19:36:20.395: INFO: Skipping "Delayed binding GCE PD storage class test with AllowedTopologies": cloud providers is not [gce gke] [AfterEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:20.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-1196" for this suite. 
Jan 11 19:36:26.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:30.068: INFO: namespace volume-provisioning-1196 deletion completed in 9.580756955s • [SLOW TEST:10.312 seconds] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner delayed binding with allowedTopologies [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:1012 should create persistent volumes in the same zone as specified in allowedTopologies after a pod mounting the claims is started /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:1013 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:29.572: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename runtimeclass STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-7664 STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:40 [AfterEach] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:30.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-7664" for this suite. 
Jan 11 19:36:36.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:40.078: INFO: namespace runtimeclass-7664 deletion completed in 9.592676817s • [SLOW TEST:10.506 seconds] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:37 should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:40 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:16.118: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-3037 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392" Jan 11 19:36:19.801: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3037 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392 && dd if=/dev/zero of=/tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392/file' Jan 11 19:36:21.196: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0291656 s, 719 MB/s\n" Jan 11 19:36:21.196: INFO: stdout: "" Jan 11 19:36:21.196: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3037 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:36:22.505: INFO: stderr: "" Jan 11 19:36:22.505: INFO: stdout: "/dev/loop0\n" STEP: Creating local PVCs and PVs Jan 11 19:36:22.505: INFO: Creating a PV followed by a PVC Jan 11 19:36:22.685: INFO: Waiting for PV local-pvldqf2 to bind to PVC pvc-fmbcs Jan 11 19:36:22.685: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-fmbcs] to have phase Bound Jan 11 
19:36:22.774: INFO: PersistentVolumeClaim pvc-fmbcs found and phase=Bound (89.337862ms) Jan 11 19:36:22.774: INFO: Waiting up to 3m0s for PersistentVolume local-pvldqf2 to have phase Bound Jan 11 19:36:22.864: INFO: PersistentVolume local-pvldqf2 found and phase=Bound (89.488624ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:36:27.494: INFO: pod "security-context-3bdbb4c1-e42d-4f96-a436-ae7f252c1b72" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:36:27.494: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3037 security-context-3bdbb4c1-e42d-4f96-a436-ae7f252c1b72 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:36:28.849: INFO: stderr: "" Jan 11 19:36:28.849: INFO: stdout: "" Jan 11 19:36:28.849: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jan 11 19:36:28.849: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3037 security-context-3bdbb4c1-e42d-4f96-a436-ae7f252c1b72 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:36:30.199: INFO: stderr: "" Jan 11 19:36:30.199: INFO: stdout: "test-file-content\n" Jan 11 19:36:30.199: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod1 Jan 11 19:36:30.199: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3037 security-context-3bdbb4c1-e42d-4f96-a436-ae7f252c1b72 -- /bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file' Jan 11 19:36:31.506: INFO: stderr: "" Jan 11 19:36:31.506: INFO: stdout: "" Jan 11 19:36:31.506: INFO: podRWCmdExec out: "" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-3bdbb4c1-e42d-4f96-a436-ae7f252c1b72 in namespace persistent-local-volumes-test-3037 [AfterEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:36:31.596: INFO: Deleting PersistentVolumeClaim "pvc-fmbcs" Jan 11 19:36:31.687: INFO: Deleting PersistentVolume "local-pvldqf2" Jan 11 19:36:31.777: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3037 hostexec-ip-10-250-27-25.ec2.internal -- nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:36:33.066: INFO: stderr: "" Jan 11 19:36:33.066: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392/file Jan 11 19:36:33.066: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3037 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 19:36:34.359: INFO: stderr: "" Jan 11 19:36:34.359: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392 Jan 11 19:36:34.359: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3037 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392' Jan 11 19:36:35.802: INFO: stderr: "" Jan 11 19:36:35.802: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:35.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3037" for this suite. 
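Stripped of the kubectl exec/nsenter wrapping, the block-device setup and teardown driven through the hostexec pod above amounts to the following commands on the node (paths as logged; the loop device name can differ between runs):

DIR=/tmp/local-volume-test-57642c73-36af-4ca2-9c77-1bc94fa43392
# Create a 20 MiB backing file and attach it to a free loop device.
mkdir -p "${DIR}" && dd if=/dev/zero of="${DIR}/file" bs=4096 count=5120
losetup -f "${DIR}/file"
# Find the loop device backing the file (here /dev/loop0).
losetup | grep "${DIR}/file" | awk '{ print $1 }'
# Teardown: detach the loop device and remove the test directory.
losetup -d /dev/loop0
rm -r "${DIR}"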
Jan 11 19:36:48.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:51.571: INFO: namespace persistent-local-volumes-test-3037 deletion completed in 15.586755576s • [SLOW TEST:35.454 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:26.032: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4625 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Update Demo /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a replication controller Jan 11 19:36:26.671: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-4625' Jan 11 19:36:27.303: INFO: stderr: "" Jan 11 19:36:27.303: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 19:36:27.303: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4625' Jan 11 19:36:27.743: INFO: stderr: "" Jan 11 19:36:27.743: INFO: stdout: "update-demo-nautilus-dv7p8 update-demo-nautilus-jgvw7 " Jan 11 19:36:27.743: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-dv7p8 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4625' Jan 11 19:36:28.178: INFO: stderr: "" Jan 11 19:36:28.178: INFO: stdout: "" Jan 11 19:36:28.178: INFO: update-demo-nautilus-dv7p8 is created but not running Jan 11 19:36:33.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4625' Jan 11 19:36:33.658: INFO: stderr: "" Jan 11 19:36:33.658: INFO: stdout: "update-demo-nautilus-dv7p8 update-demo-nautilus-jgvw7 " Jan 11 19:36:33.658: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-dv7p8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4625' Jan 11 19:36:34.098: INFO: stderr: "" Jan 11 19:36:34.098: INFO: stdout: "true" Jan 11 19:36:34.098: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-dv7p8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4625' Jan 11 19:36:34.525: INFO: stderr: "" Jan 11 19:36:34.525: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 19:36:34.525: INFO: validating pod update-demo-nautilus-dv7p8 Jan 11 19:36:34.704: INFO: got data: { "image": "nautilus.jpg" } Jan 11 19:36:34.704: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 19:36:34.704: INFO: update-demo-nautilus-dv7p8 is verified up and running Jan 11 19:36:34.705: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-jgvw7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4625' Jan 11 19:36:35.131: INFO: stderr: "" Jan 11 19:36:35.131: INFO: stdout: "true" Jan 11 19:36:35.132: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-jgvw7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4625' Jan 11 19:36:35.571: INFO: stderr: "" Jan 11 19:36:35.571: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 19:36:35.571: INFO: validating pod update-demo-nautilus-jgvw7 Jan 11 19:36:35.752: INFO: got data: { "image": "nautilus.jpg" } Jan 11 19:36:35.752: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 11 19:36:35.752: INFO: update-demo-nautilus-jgvw7 is verified up and running STEP: using delete to clean up resources Jan 11 19:36:35.752: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-4625' Jan 11 19:36:36.275: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:36:36.275: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 11 19:36:36.275: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4625' Jan 11 19:36:36.803: INFO: stderr: "No resources found in kubectl-4625 namespace.\n" Jan 11 19:36:36.803: INFO: stdout: "" Jan 11 19:36:36.803: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=update-demo --namespace=kubectl-4625 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 19:36:37.236: INFO: stderr: "" Jan 11 19:36:37.236: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:37.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4625" for this suite. 
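The running/image checks above are plain kubectl go-template queries; with the test-specific --server/--kubeconfig flags dropped they reduce to (pod name, label and namespace as logged):

kubectl get pods -l name=update-demo -n kubectl-4625 \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl get pod update-demo-nautilus-dv7p8 -n kubectl-4625 -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'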
Jan 11 19:36:49.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:52.903: INFO: namespace kubectl-4625 deletion completed in 15.576904652s • [SLOW TEST:26.872 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275 should create and stop a replication controller [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:15.428: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pv STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-5231 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:110 [BeforeEach] NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:127 STEP: creating nfs-server pod STEP: locating the "nfs-server" server pod Jan 11 19:36:26.509: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs nfs-server nfs-server --namespace=pv-5231' Jan 11 19:36:27.035: INFO: stderr: "" Jan 11 19:36:27.035: INFO: stdout: "Serving /exports\nrpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused\nStarting rpcbind\nNFS started\n" Jan 11 19:36:27.035: INFO: nfs server pod IP address: 100.64.1.190 [It] should create 2 PVs and 4 PVCs: test write access /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:231 Jan 11 19:36:27.035: INFO: Creating a PV followed by a PVC Jan 11 19:36:27.214: INFO: Creating a PV followed by a PVC Jan 11 19:36:27.582: INFO: Waiting up to 3m0s for PersistentVolume nfs-jgknl to have phase Bound Jan 11 19:36:27.671: INFO: PersistentVolume nfs-jgknl found and phase=Bound (89.114741ms) Jan 11 19:36:27.760: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-479sx] to have phase Bound Jan 11 19:36:27.850: INFO: PersistentVolumeClaim pvc-479sx found and phase=Bound (89.222388ms) Jan 11 19:36:27.850: INFO: Waiting up to 3m0s for PersistentVolume nfs-2kk59 to have phase Bound Jan 11 19:36:27.939: INFO: PersistentVolume nfs-2kk59 found and phase=Bound (89.117009ms) Jan 11 19:36:28.028: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8mg5p] to have phase Bound Jan 11 19:36:28.117: INFO: PersistentVolumeClaim pvc-8mg5p found and phase=Bound (89.267034ms) STEP: Checking pod has write access to PersistentVolumes Jan 11 19:36:28.296: 
INFO: Creating nfs test pod STEP: Pod should terminate with exitcode 0 (success) Jan 11 19:36:28.386: INFO: Waiting up to 5m0s for pod "pvc-tester-xm57d" in namespace "pv-5231" to be "success or failure" Jan 11 19:36:28.475: INFO: Pod "pvc-tester-xm57d": Phase="Pending", Reason="", readiness=false. Elapsed: 89.196767ms Jan 11 19:36:30.564: INFO: Pod "pvc-tester-xm57d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178661339s STEP: Saw pod success Jan 11 19:36:30.564: INFO: Pod "pvc-tester-xm57d" satisfied condition "success or failure" Jan 11 19:36:30.564: INFO: Pod pvc-tester-xm57d succeeded Jan 11 19:36:30.564: INFO: Deleting pod "pvc-tester-xm57d" in namespace "pv-5231" Jan 11 19:36:30.656: INFO: Wait up to 5m0s for pod "pvc-tester-xm57d" to be fully deleted Jan 11 19:36:30.835: INFO: Creating nfs test pod STEP: Pod should terminate with exitcode 0 (success) Jan 11 19:36:30.925: INFO: Waiting up to 5m0s for pod "pvc-tester-m6blt" in namespace "pv-5231" to be "success or failure" Jan 11 19:36:31.015: INFO: Pod "pvc-tester-m6blt": Phase="Pending", Reason="", readiness=false. Elapsed: 89.568263ms Jan 11 19:36:33.104: INFO: Pod "pvc-tester-m6blt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179290697s STEP: Saw pod success Jan 11 19:36:33.104: INFO: Pod "pvc-tester-m6blt" satisfied condition "success or failure" Jan 11 19:36:33.104: INFO: Pod pvc-tester-m6blt succeeded Jan 11 19:36:33.104: INFO: Deleting pod "pvc-tester-m6blt" in namespace "pv-5231" Jan 11 19:36:33.197: INFO: Wait up to 5m0s for pod "pvc-tester-m6blt" to be fully deleted STEP: Deleting PVCs to invoke reclaim policy Jan 11 19:36:33.554: INFO: Deleting PVC pvc-479sx to trigger reclamation of PV nfs-jgknl Jan 11 19:36:33.554: INFO: Deleting PersistentVolumeClaim "pvc-479sx" Jan 11 19:36:33.645: INFO: Waiting for reclaim process to complete. Jan 11 19:36:33.645: INFO: Waiting up to 3m0s for PersistentVolume nfs-jgknl to have phase Released Jan 11 19:36:33.734: INFO: PersistentVolume nfs-jgknl found and phase=Released (89.146883ms) Jan 11 19:36:33.823: INFO: PV nfs-jgknl now in "Released" phase Jan 11 19:36:34.002: INFO: Deleting PVC pvc-8mg5p to trigger reclamation of PV nfs-2kk59 Jan 11 19:36:34.002: INFO: Deleting PersistentVolumeClaim "pvc-8mg5p" Jan 11 19:36:34.092: INFO: Waiting for reclaim process to complete. Jan 11 19:36:34.092: INFO: Waiting up to 3m0s for PersistentVolume nfs-2kk59 to have phase Released Jan 11 19:36:34.182: INFO: PersistentVolume nfs-2kk59 found and phase=Released (89.263856ms) Jan 11 19:36:34.271: INFO: PV nfs-2kk59 now in "Released" phase [AfterEach] with multiple PVs and PVCs all in same ns /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:217 Jan 11 19:36:34.271: INFO: AfterEach: deleting 2 PVCs and 2 PVs... 
Jan 11 19:36:34.271: INFO: Deleting PersistentVolumeClaim "pvc-v2bgh" Jan 11 19:36:34.361: INFO: Deleting PersistentVolumeClaim "pvc-m967l" Jan 11 19:36:34.453: INFO: Deleting PersistentVolume "nfs-jgknl" Jan 11 19:36:34.542: INFO: Deleting PersistentVolume "nfs-2kk59" [AfterEach] NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:147 Jan 11 19:36:34.632: INFO: Deleting pod "nfs-server" in namespace "pv-5231" Jan 11 19:36:34.723: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted [AfterEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:44.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5231" for this suite. Jan 11 19:36:51.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:54.569: INFO: namespace pv-5231 deletion completed in 9.575643194s • [SLOW TEST:39.141 seconds] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:120 with multiple PVs and PVCs all in same ns /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:210 should create 2 PVs and 4 PVCs: test write access /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:231 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:30.082: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9045 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-b3e04f9f-18e3-4e29-b3d0-520ad39a2000" Jan 11 19:36:33.175: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9045 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-b3e04f9f-18e3-4e29-b3d0-520ad39a2000 && dd if=/dev/zero of=/tmp/local-volume-test-b3e04f9f-18e3-4e29-b3d0-520ad39a2000/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-b3e04f9f-18e3-4e29-b3d0-520ad39a2000/file' Jan 11 19:36:34.502: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0176459 s, 1.2 GB/s\n" Jan 11 19:36:34.502: INFO: stdout: "" Jan 11 19:36:34.502: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9045 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b3e04f9f-18e3-4e29-b3d0-520ad39a2000/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:36:35.925: INFO: stderr: "" Jan 11 19:36:35.926: INFO: stdout: "/dev/loop0\n" STEP: Creating local PVCs and PVs Jan 11 19:36:35.926: INFO: Creating a PV followed by a PVC Jan 11 19:36:36.105: INFO: Waiting for PV local-pv8nrl2 to bind to PVC pvc-6qtpp Jan 11 19:36:36.105: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-6qtpp] to have phase Bound Jan 11 19:36:36.195: INFO: PersistentVolumeClaim pvc-6qtpp found and phase=Bound (89.302829ms) Jan 11 19:36:36.195: INFO: Waiting up to 3m0s for PersistentVolume local-pv8nrl2 to have phase Bound Jan 11 19:36:36.285: INFO: PersistentVolume local-pv8nrl2 found and phase=Bound (89.801682ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jan 11 19:36:38.824: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-d7ff6c00-3690-4ba3-9d28-c4e8358f81e6 --namespace=persistent-local-volumes-test-9045 -- stat -c %g /mnt/volume1' Jan 11 19:36:40.125: INFO: stderr: "" Jan 11 19:36:40.125: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jan 11 19:36:42.487: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-7a50ff68-d683-41c3-8234-393487f74f5c --namespace=persistent-local-volumes-test-9045 -- stat -c %g /mnt/volume1' Jan 11 19:36:43.848: INFO: stderr: "" Jan 11 19:36:43.849: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod security-context-d7ff6c00-3690-4ba3-9d28-c4e8358f81e6 in namespace persistent-local-volumes-test-9045 STEP: Deleting second pod STEP: Deleting pod security-context-7a50ff68-d683-41c3-8234-393487f74f5c in namespace persistent-local-volumes-test-9045 [AfterEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and 
PV Jan 11 19:36:44.031: INFO: Deleting PersistentVolumeClaim "pvc-6qtpp" Jan 11 19:36:44.121: INFO: Deleting PersistentVolume "local-pv8nrl2" Jan 11 19:36:44.212: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9045 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b3e04f9f-18e3-4e29-b3d0-520ad39a2000/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:36:45.500: INFO: stderr: "" Jan 11 19:36:45.500: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-b3e04f9f-18e3-4e29-b3d0-520ad39a2000/file Jan 11 19:36:45.500: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9045 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 19:36:46.866: INFO: stderr: "" Jan 11 19:36:46.866: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-b3e04f9f-18e3-4e29-b3d0-520ad39a2000 Jan 11 19:36:46.866: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9045 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b3e04f9f-18e3-4e29-b3d0-520ad39a2000' Jan 11 19:36:48.204: INFO: stderr: "" Jan 11 19:36:48.204: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:48.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9045" for this suite. 
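The stat -c %g output of 1234 above comes from the pod-level fsGroup setting, which makes the kubelet set that group ownership on the mounted volume. A minimal sketch of such a pod, assuming an already-bound PVC named local-claim (the e2e fixture's own names differ):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 1234            # volume group ownership is set to this GID
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume1
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: local-claim   # hypothetical PVC name
EOF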
Jan 11 19:36:54.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:36:57.974: INFO: namespace persistent-local-volumes-test-9045 deletion completed in 9.587688478s • [SLOW TEST:27.892 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:07.401: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename disruption STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-8861 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52 [It] evictions: too few pods, absolute => should not allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 STEP: Waiting for the pdb to be processed STEP: locating a running pod [AfterEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:10.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8861" for this suite. 
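A rough sketch of what the eviction spec above sets up: a PodDisruptionBudget whose minAvailable cannot be satisfied by the matching pods, so an eviction request is refused (names and numbers are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: policy/v1beta1   # policy/v1 on newer clusters
kind: PodDisruptionBudget
metadata:
  name: demo-pdb
spec:
  minAvailable: 2            # deliberately more than the selector matches
  selector:
    matchLabels:
      app: demo
EOF
# With too few matching pods, evicting one of them (e.g. via kubectl drain)
# is rejected by the eviction API until the budget can be honoured.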
Jan 11 19:36:57.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:00.329: INFO: namespace disruption-8861 deletion completed in 49.562504581s • [SLOW TEST:52.929 seconds] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: too few pods, absolute => should not allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:57.987: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume-provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-7513 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:259 [It] should create persistent volumes in the same zone as node after a pod mounting the claims is started /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:968 Jan 11 19:36:58.626: INFO: Skipping "Delayed binding EBS storage class test ": cloud providers is not [aws] Jan 11 19:36:58.626: INFO: Skipping "Delayed binding GCE PD storage class test ": cloud providers is not [gce gke] Jan 11 19:36:58.626: INFO: Skipping "Delayed binding EBS storage class test ": cloud providers is not [aws] Jan 11 19:36:58.626: INFO: Skipping "Delayed binding GCE PD storage class test ": cloud providers is not [gce gke] [AfterEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:58.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-7513" for this suite. 
Jan 11 19:37:04.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:08.297: INFO: namespace volume-provisioning-7513 deletion completed in 9.580766263s • [SLOW TEST:10.310 seconds] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner delayed binding [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:967 should create persistent volumes in the same zone as node after a pod mounting the claims is started /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:968 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:54.576: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6300 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 11 19:36:56.336: INFO: Waiting up to 5m0s for pod "pod-01c53598-3efd-4b2e-b767-a1e71c7f0d48" in namespace "emptydir-6300" to be "success or failure" Jan 11 19:36:56.426: INFO: Pod "pod-01c53598-3efd-4b2e-b767-a1e71c7f0d48": Phase="Pending", Reason="", readiness=false. Elapsed: 89.243844ms Jan 11 19:36:58.515: INFO: Pod "pod-01c53598-3efd-4b2e-b767-a1e71c7f0d48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178982515s STEP: Saw pod success Jan 11 19:36:58.515: INFO: Pod "pod-01c53598-3efd-4b2e-b767-a1e71c7f0d48" satisfied condition "success or failure" Jan 11 19:36:58.605: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-01c53598-3efd-4b2e-b767-a1e71c7f0d48 container test-container: STEP: delete the pod Jan 11 19:36:58.796: INFO: Waiting for pod pod-01c53598-3efd-4b2e-b767-a1e71c7f0d48 to disappear Jan 11 19:36:58.886: INFO: Pod pod-01c53598-3efd-4b2e-b767-a1e71c7f0d48 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:36:58.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6300" for this suite. 
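The (root,0666,tmpfs) case above boils down to writing a mode-0666 file on a memory-backed emptyDir and reading the permissions back; a minimal sketch (image and names are illustrative, not the e2e mounttest fixture):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "touch /mnt/test && chmod 0666 /mnt/test && stat -c %a /mnt/test && mount | grep ' /mnt '"]
    volumeMounts:
    - name: tmpdir
      mountPath: /mnt
  volumes:
  - name: tmpdir
    emptyDir:
      medium: Memory         # tmpfs-backed emptyDir
EOF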
Jan 11 19:37:05.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:08.552: INFO: namespace emptydir-6300 deletion completed in 9.575006668s • [SLOW TEST:13.976 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:00.340: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1628 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 19:37:01.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-951cbd3d-72d5-4814-87bf-e01d747824f7" in namespace "projected-1628" to be "success or failure" Jan 11 19:37:01.159: INFO: Pod "downwardapi-volume-951cbd3d-72d5-4814-87bf-e01d747824f7": Phase="Pending", Reason="", readiness=false. Elapsed: 89.639133ms Jan 11 19:37:03.248: INFO: Pod "downwardapi-volume-951cbd3d-72d5-4814-87bf-e01d747824f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179311093s STEP: Saw pod success Jan 11 19:37:03.249: INFO: Pod "downwardapi-volume-951cbd3d-72d5-4814-87bf-e01d747824f7" satisfied condition "success or failure" Jan 11 19:37:03.338: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-951cbd3d-72d5-4814-87bf-e01d747824f7 container client-container: STEP: delete the pod Jan 11 19:37:03.527: INFO: Waiting for pod downwardapi-volume-951cbd3d-72d5-4814-87bf-e01d747824f7 to disappear Jan 11 19:37:03.616: INFO: Pod downwardapi-volume-951cbd3d-72d5-4814-87bf-e01d747824f7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:37:03.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1628" for this suite. 
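The "podname only" case above exposes metadata.name to the container through a downward API source in a projected volume; a minimal sketch (mount path and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF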
Jan 11 19:37:09.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:13.401: INFO: namespace projected-1628 deletion completed in 9.69392285s • [SLOW TEST:13.060 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:13.427: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubelet-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-8468 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:37:14.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8468" for this suite. 
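The Kubelet spec above only checks that a pod whose command always fails can still be deleted cleanly. A sketch of the same idea, assuming an existing clientset and v0.18+ signatures; the /bin/false command, names and zero grace period are illustrative choices, not the framework's exact ones:

```go
// Sketch: a pod that crash-loops still accepts a delete, roughly what the
// "should be possible to delete" spec asserts.
package demo

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createAndDeleteFailingPod(ctx context.Context, client kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
		Spec: corev1.PodSpec{
			// Default restart policy (Always) plus a failing command means the
			// container keeps restarting and the pod never becomes ready.
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}

	// Deleting should succeed regardless of the crash loop.
	grace := int64(0)
	return client.CoreV1().Pods(ns).Delete(ctx, pod.Name, metav1.DeleteOptions{GracePeriodSeconds: &grace})
}
```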
Jan 11 19:37:20.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:23.984: INFO: namespace kubelet-test-8468 deletion completed in 9.647412681s • [SLOW TEST:10.557 seconds] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:52.911: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename subpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-5747 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod pod-subpath-test-downwardapi-vxlk STEP: Creating a pod to test atomic-volume-subpath Jan 11 19:36:53.825: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vxlk" in namespace "subpath-5747" to be "success or failure" Jan 11 19:36:53.915: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Pending", Reason="", readiness=false. Elapsed: 89.854608ms Jan 11 19:36:56.007: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. Elapsed: 2.182091601s Jan 11 19:36:58.097: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. Elapsed: 4.271984126s Jan 11 19:37:00.187: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. Elapsed: 6.36199616s Jan 11 19:37:02.277: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. Elapsed: 8.452226812s Jan 11 19:37:04.367: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. Elapsed: 10.542687843s Jan 11 19:37:06.458: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. Elapsed: 12.633477524s Jan 11 19:37:08.548: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. Elapsed: 14.723238749s Jan 11 19:37:10.639: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. Elapsed: 16.81389054s Jan 11 19:37:12.729: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.904082352s Jan 11 19:37:14.819: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Running", Reason="", readiness=true. Elapsed: 20.993783535s Jan 11 19:37:16.909: INFO: Pod "pod-subpath-test-downwardapi-vxlk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.083920066s STEP: Saw pod success Jan 11 19:37:16.909: INFO: Pod "pod-subpath-test-downwardapi-vxlk" satisfied condition "success or failure" Jan 11 19:37:16.998: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-downwardapi-vxlk container test-container-subpath-downwardapi-vxlk: STEP: delete the pod Jan 11 19:37:17.222: INFO: Waiting for pod pod-subpath-test-downwardapi-vxlk to disappear Jan 11 19:37:17.311: INFO: Pod pod-subpath-test-downwardapi-vxlk no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-vxlk Jan 11 19:37:17.311: INFO: Deleting pod "pod-subpath-test-downwardapi-vxlk" in namespace "subpath-5747" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:37:17.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5747" for this suite. Jan 11 19:37:23.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:27.087: INFO: namespace subpath-5747 deletion completed in 9.582627044s • [SLOW TEST:34.176 seconds] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:08.555: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-8795 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:37:11.641: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8795 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-a490331d-9615-46df-84ea-b5280405f5ca' Jan 11 19:37:12.936: INFO: stderr: "" Jan 11 19:37:12.936: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:37:12.936: INFO: Creating a PV followed by a PVC Jan 11 19:37:13.115: INFO: Waiting for PV local-pvwsg85 to bind to PVC pvc-wwkxt Jan 11 19:37:13.115: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-wwkxt] to have phase Bound Jan 11 19:37:13.204: INFO: PersistentVolumeClaim pvc-wwkxt found and phase=Bound (89.068563ms) Jan 11 19:37:13.204: INFO: Waiting up to 3m0s for PersistentVolume local-pvwsg85 to have phase Bound Jan 11 19:37:13.293: INFO: PersistentVolume local-pvwsg85 found and phase=Bound (88.888313ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:37:15.920: INFO: pod "security-context-e6b53c14-bc99-4583-9a25-26446a9aea81" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:37:15.920: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8795 security-context-e6b53c14-bc99-4583-9a25-26446a9aea81 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:37:17.239: INFO: stderr: "" Jan 11 19:37:17.239: INFO: stdout: "" Jan 11 19:37:17.239: INFO: podRWCmdExec out: "" err: Jan 11 19:37:17.239: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8795 security-context-e6b53c14-bc99-4583-9a25-26446a9aea81 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:37:18.557: INFO: stderr: "" Jan 11 19:37:18.557: INFO: stdout: "test-file-content\n" Jan 11 19:37:18.557: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-e6b53c14-bc99-4583-9a25-26446a9aea81 in namespace persistent-local-volumes-test-8795 STEP: Creating pod2 STEP: Creating a pod Jan 11 19:37:21.097: INFO: pod "security-context-08c01102-d364-4301-afc1-452017b9586d" created on Node "ip-10-250-27-25.ec2.internal" STEP: Reading in pod2 Jan 11 19:37:21.097: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8795 security-context-08c01102-d364-4301-afc1-452017b9586d -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:37:22.419: INFO: stderr: "" Jan 11 19:37:22.419: INFO: stdout: "test-file-content\n" Jan 11 19:37:22.419: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod2 STEP: Deleting pod security-context-08c01102-d364-4301-afc1-452017b9586d in namespace persistent-local-volumes-test-8795 [AfterEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:37:22.509: INFO: Deleting PersistentVolumeClaim "pvc-wwkxt" Jan 11 19:37:22.600: INFO: Deleting PersistentVolume "local-pvwsg85" STEP: Removing the 
test directory Jan 11 19:37:22.690: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8795 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a490331d-9615-46df-84ea-b5280405f5ca' Jan 11 19:37:24.196: INFO: stderr: "" Jan 11 19:37:24.196: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:37:24.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8795" for this suite. Jan 11 19:37:30.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:33.950: INFO: namespace persistent-local-volumes-test-8795 deletion completed in 9.573096636s • [SLOW TEST:25.395 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:23.995: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6078 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 19:37:24.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b19fd44d-1882-44c2-bc08-ab5db453e4de" in namespace "projected-6078" to be "success or failure" Jan 11 19:37:24.834: INFO: Pod "downwardapi-volume-b19fd44d-1882-44c2-bc08-ab5db453e4de": Phase="Pending", Reason="", readiness=false. Elapsed: 89.175741ms Jan 11 19:37:26.924: INFO: Pod "downwardapi-volume-b19fd44d-1882-44c2-bc08-ab5db453e4de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179098821s STEP: Saw pod success Jan 11 19:37:26.924: INFO: Pod "downwardapi-volume-b19fd44d-1882-44c2-bc08-ab5db453e4de" satisfied condition "success or failure" Jan 11 19:37:27.013: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-b19fd44d-1882-44c2-bc08-ab5db453e4de container client-container: STEP: delete the pod Jan 11 19:37:27.202: INFO: Waiting for pod downwardapi-volume-b19fd44d-1882-44c2-bc08-ab5db453e4de to disappear Jan 11 19:37:27.291: INFO: Pod downwardapi-volume-b19fd44d-1882-44c2-bc08-ab5db453e4de no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:37:27.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6078" for this suite. Jan 11 19:37:35.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:39.056: INFO: namespace projected-6078 deletion completed in 11.672768565s • [SLOW TEST:15.061 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:33.971: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6603 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-volume-map-481f150e-a862-4b6a-9769-5d39de0c5112 STEP: Creating a pod to test consume configMaps Jan 11 19:37:34.931: INFO: Waiting up to 5m0s for pod "pod-configmaps-b1531e48-7854-4fda-b946-3c74ea041273" in namespace "configmap-6603" to be "success or failure" Jan 11 19:37:35.020: INFO: Pod "pod-configmaps-b1531e48-7854-4fda-b946-3c74ea041273": Phase="Pending", Reason="", readiness=false. Elapsed: 89.077874ms Jan 11 19:37:37.110: INFO: Pod "pod-configmaps-b1531e48-7854-4fda-b946-3c74ea041273": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.178275987s STEP: Saw pod success Jan 11 19:37:37.110: INFO: Pod "pod-configmaps-b1531e48-7854-4fda-b946-3c74ea041273" satisfied condition "success or failure" Jan 11 19:37:37.199: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-b1531e48-7854-4fda-b946-3c74ea041273 container configmap-volume-test: STEP: delete the pod Jan 11 19:37:37.389: INFO: Waiting for pod pod-configmaps-b1531e48-7854-4fda-b946-3c74ea041273 to disappear Jan 11 19:37:37.478: INFO: Pod pod-configmaps-b1531e48-7854-4fda-b946-3c74ea041273 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:37:37.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6603" for this suite. Jan 11 19:37:43.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:47.367: INFO: namespace configmap-6603 deletion completed in 9.798704542s • [SLOW TEST:13.397 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:15.118: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-8319 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:381 STEP: Setting up local volumes on node "ip-10-250-27-25.ec2.internal" STEP: Initializing test volumes Jan 11 19:36:18.304: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3effb1bf-1c7a-46fb-8d26-8cb2bd39b319' Jan 11 19:36:19.680: INFO: stderr: "" Jan 11 19:36:19.680: INFO: stdout: "" Jan 11 19:36:19.680: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-aa36f2d9-1e34-467a-8021-12d411a6ff46' Jan 11 19:36:21.026: INFO: stderr: "" Jan 11 19:36:21.026: INFO: stdout: "" Jan 11 19:36:21.026: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1d878d6c-7c95-4bec-a34f-86747f3e2538' Jan 11 19:36:22.363: INFO: stderr: "" Jan 11 19:36:22.363: INFO: stdout: "" Jan 11 19:36:22.363: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-43db7919-c8de-480f-8739-a249ad16c431' Jan 11 19:36:23.707: INFO: stderr: "" Jan 11 19:36:23.707: INFO: stdout: "" Jan 11 19:36:23.707: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-515d0723-83e5-4cad-a468-fb455a3ef449' Jan 11 19:36:25.084: INFO: stderr: "" Jan 11 19:36:25.084: INFO: stdout: "" Jan 11 19:36:25.084: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-08c7d521-3a2c-415a-985b-ee38ba49d5b4' Jan 11 19:36:26.403: INFO: stderr: "" Jan 11 19:36:26.403: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:36:26.403: INFO: Creating a PV followed by a PVC Jan 11 19:36:26.583: INFO: Creating a PV followed by a PVC Jan 11 19:36:26.763: INFO: Creating a PV followed by a PVC Jan 11 19:36:26.943: INFO: Creating a PV followed by a PVC Jan 11 19:36:27.123: INFO: Creating a PV followed by a PVC Jan 11 19:36:27.302: INFO: Creating a PV followed by a PVC STEP: Setting up local volumes on node "ip-10-250-7-77.ec2.internal" STEP: Initializing test volumes Jan 11 19:36:41.366: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-776b4be9-2273-4505-b527-afa090db70d8' Jan 11 19:36:42.668: INFO: stderr: "" Jan 11 19:36:42.668: INFO: stdout: "" Jan 11 19:36:42.668: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e4893dfe-2d16-48cf-8203-d63dfeb4ff9a' Jan 11 
19:36:43.974: INFO: stderr: "" Jan 11 19:36:43.974: INFO: stdout: "" Jan 11 19:36:43.974: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-183f4795-869d-4349-b493-5f9675c7afbf' Jan 11 19:36:45.296: INFO: stderr: "" Jan 11 19:36:45.296: INFO: stdout: "" Jan 11 19:36:45.296: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a08caf42-6133-4a36-9761-ed80dcf246fa' Jan 11 19:36:46.658: INFO: stderr: "" Jan 11 19:36:46.659: INFO: stdout: "" Jan 11 19:36:46.659: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-95f5c8ac-bb2a-4584-8208-3165a6c49440' Jan 11 19:36:47.931: INFO: stderr: "" Jan 11 19:36:47.931: INFO: stdout: "" Jan 11 19:36:47.931: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-50c4c93d-632d-44eb-b15c-a72ecf6aa2f1' Jan 11 19:36:49.307: INFO: stderr: "" Jan 11 19:36:49.307: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:36:49.307: INFO: Creating a PV followed by a PVC Jan 11 19:36:49.490: INFO: Creating a PV followed by a PVC Jan 11 19:36:49.671: INFO: Creating a PV followed by a PVC Jan 11 19:36:49.852: INFO: Creating a PV followed by a PVC Jan 11 19:36:50.033: INFO: Creating a PV followed by a PVC Jan 11 19:36:50.213: INFO: Creating a PV followed by a PVC [It] should use volumes on one node when pod has affinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:409 STEP: Creating a StatefulSet with pod affinity on nodes Jan 11 19:37:02.202: INFO: Found 1 stateful pods, waiting for 3 Jan 11 19:37:12.293: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:37:12.293: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:37:12.293: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:37:12.384: INFO: Waiting up to 1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Jan 11 19:37:12.473: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (89.217521ms) Jan 11 19:37:12.473: INFO: Waiting up to 1s for PersistentVolumeClaims [vol2-local-volume-statefulset-0] to have phase Bound Jan 11 19:37:12.563: INFO: 
PersistentVolumeClaim vol2-local-volume-statefulset-0 found and phase=Bound (90.121411ms) Jan 11 19:37:12.563: INFO: Waiting up to 1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Jan 11 19:37:12.654: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (91.198426ms) Jan 11 19:37:12.654: INFO: Waiting up to 1s for PersistentVolumeClaims [vol2-local-volume-statefulset-1] to have phase Bound Jan 11 19:37:12.744: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-1 found and phase=Bound (90.149914ms) Jan 11 19:37:12.745: INFO: Waiting up to 1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Jan 11 19:37:12.835: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (90.011038ms) Jan 11 19:37:12.835: INFO: Waiting up to 1s for PersistentVolumeClaims [vol2-local-volume-statefulset-2] to have phase Bound Jan 11 19:37:12.924: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-2 found and phase=Bound (89.799972ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:393 STEP: Cleaning up PVC and PV Jan 11 19:37:12.925: INFO: Deleting PersistentVolumeClaim "pvc-k6kcb" Jan 11 19:37:13.016: INFO: Deleting PersistentVolume "local-pv74krm" STEP: Cleaning up PVC and PV Jan 11 19:37:13.108: INFO: Deleting PersistentVolumeClaim "pvc-86jgv" Jan 11 19:37:13.199: INFO: Deleting PersistentVolume "local-pv8vw28" STEP: Cleaning up PVC and PV Jan 11 19:37:13.290: INFO: Deleting PersistentVolumeClaim "pvc-79hwm" Jan 11 19:37:13.380: INFO: Deleting PersistentVolume "local-pvxtdqh" STEP: Cleaning up PVC and PV Jan 11 19:37:13.471: INFO: Deleting PersistentVolumeClaim "pvc-6xgfp" Jan 11 19:37:13.562: INFO: Deleting PersistentVolume "local-pvmrj89" STEP: Cleaning up PVC and PV Jan 11 19:37:13.653: INFO: Deleting PersistentVolumeClaim "pvc-r8k7v" Jan 11 19:37:13.744: INFO: Deleting PersistentVolume "local-pvmqlmz" STEP: Cleaning up PVC and PV Jan 11 19:37:13.835: INFO: Deleting PersistentVolumeClaim "pvc-bjf5t" Jan 11 19:37:13.927: INFO: Deleting PersistentVolume "local-pv9gsgr" STEP: Removing the test directory Jan 11 19:37:14.018: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3effb1bf-1c7a-46fb-8d26-8cb2bd39b319' Jan 11 19:37:15.346: INFO: stderr: "" Jan 11 19:37:15.346: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:15.347: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-aa36f2d9-1e34-467a-8021-12d411a6ff46' Jan 11 19:37:16.744: INFO: stderr: "" Jan 11 19:37:16.744: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:16.744: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1d878d6c-7c95-4bec-a34f-86747f3e2538' Jan 11 19:37:18.055: INFO: stderr: "" Jan 11 19:37:18.055: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:18.055: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-43db7919-c8de-480f-8739-a249ad16c431' Jan 11 19:37:19.377: INFO: stderr: "" Jan 11 19:37:19.377: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:19.378: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-515d0723-83e5-4cad-a468-fb455a3ef449' Jan 11 19:37:20.662: INFO: stderr: "" Jan 11 19:37:20.662: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:20.662: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-08c7d521-3a2c-415a-985b-ee38ba49d5b4' Jan 11 19:37:21.959: INFO: stderr: "" Jan 11 19:37:21.959: INFO: stdout: "" STEP: Cleaning up PVC and PV Jan 11 19:37:21.959: INFO: Deleting PersistentVolumeClaim "pvc-fxw7x" Jan 11 19:37:22.049: INFO: Deleting PersistentVolume "local-pvnrdvf" STEP: Cleaning up PVC and PV Jan 11 19:37:22.140: INFO: Deleting PersistentVolumeClaim "pvc-7tg22" Jan 11 19:37:22.231: INFO: Deleting PersistentVolume "local-pvg288j" STEP: Cleaning up PVC and PV Jan 11 19:37:22.321: INFO: Deleting PersistentVolumeClaim "pvc-nqmqg" Jan 11 19:37:22.412: INFO: Deleting PersistentVolume "local-pvmvdjf" STEP: Cleaning up PVC and PV Jan 11 19:37:22.503: INFO: Deleting PersistentVolumeClaim "pvc-xw2vr" Jan 11 19:37:22.593: INFO: Deleting PersistentVolume "local-pv2jghz" STEP: Cleaning up PVC and PV Jan 11 19:37:22.684: INFO: Deleting PersistentVolumeClaim "pvc-7cskr" Jan 11 19:37:22.776: INFO: Deleting PersistentVolume "local-pvb4m8c" STEP: Cleaning up PVC and PV Jan 11 19:37:22.866: INFO: Deleting PersistentVolumeClaim "pvc-bq7v4" Jan 11 19:37:22.957: INFO: Deleting PersistentVolume "local-pvt7jg4" STEP: Removing the test directory Jan 11 19:37:23.048: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-776b4be9-2273-4505-b527-afa090db70d8' Jan 11 19:37:24.366: INFO: stderr: "" Jan 11 19:37:24.366: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:24.366: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e4893dfe-2d16-48cf-8203-d63dfeb4ff9a' Jan 11 19:37:25.646: INFO: stderr: "" Jan 11 19:37:25.646: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:25.646: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-183f4795-869d-4349-b493-5f9675c7afbf' Jan 11 19:37:26.935: INFO: stderr: "" Jan 11 19:37:26.936: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:26.936: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a08caf42-6133-4a36-9761-ed80dcf246fa' Jan 11 19:37:28.293: INFO: stderr: "" Jan 11 19:37:28.293: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:28.293: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-95f5c8ac-bb2a-4584-8208-3165a6c49440' Jan 11 19:37:29.609: INFO: stderr: "" Jan 11 19:37:29.609: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:37:29.609: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8319 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-50c4c93d-632d-44eb-b15c-a72ecf6aa2f1' Jan 11 19:37:31.148: INFO: stderr: "" Jan 11 19:37:31.148: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:37:31.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8319" for this suite. 
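Each "Creating a PV followed by a PVC" line above stands for a local PersistentVolume pointing at one of the /tmp/local-volume-test-* directories just created over nsenter, pinned to its node through required node affinity, plus a claim against it. A sketch of such a PV, with illustrative storage class, capacity and values (the test's generated names differ), assuming v0.18+ client-go signatures:

```go
// Sketch: a "local" PersistentVolume backed by a host directory and pinned
// to one node, the kind of object behind the PV/PVC setup lines above.
package demo

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createLocalPV(ctx context.Context, client kubernetes.Interface, nodeName, hostDir string) (*corev1.PersistentVolume, error) {
	fsMode := corev1.PersistentVolumeFilesystem
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv-"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: "local-storage",
			VolumeMode:       &fsMode,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: hostDir},
			},
			// A local PV must declare which node actually hosts the directory.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
	return client.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{})
}
```

The claim side is then an ordinary PVC whose storageClassName matches and whose requested size fits within the PV's capacity, which is why the log shows each PVC reaching phase Bound within milliseconds.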
Jan 11 19:37:45.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:49.227: INFO: namespace persistent-local-volumes-test-8319 deletion completed in 17.891586731s • [SLOW TEST:94.109 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:374 should use volumes on one node when pod has affinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:409 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:39.063: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename aggregator STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-7230 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Jan 11 19:37:39.945: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering the sample API server. Jan 11 19:37:41.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368260, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368260, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368260, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368260, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:37:44.593: INFO: Waited 810.389636ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:37:47.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7230" for this suite. Jan 11 19:37:53.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:37:54.302: INFO: discovery error for unexpected group: schema.GroupVersion{Group:"crd-publish-openapi-test-common-group.example.com", Version:"v5"} Jan 11 19:37:54.302: INFO: Error discoverying server preferred namespaced resources: unable to retrieve the complete list of server APIs: crd-publish-openapi-test-common-group.example.com/v5: the server could not find the requested resource, retrying in 2s. Jan 11 19:37:59.750: INFO: namespace aggregator-7230 deletion completed in 12.070026923s • [SLOW TEST:20.687 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:08.322: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-7761 STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 11 19:37:08.961: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 11 19:37:31.143: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:37:36.223: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:37:59.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7761" for this suite. 
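The crd-publish-openapi spec that just cleaned up registers custom resources in one API group under more than one served version and checks that they show up in the published OpenAPI document. A sketch of a CRD with that shape, using the apiextensions/v1 types and their generated clientset; the group, kind and version names are illustrative and the schema is deliberately trivial:

```go
// Sketch: a CustomResourceDefinition serving two versions of the same group,
// the shape of object the multi-version CRD spec publishes and then looks up
// in the OpenAPI document.
package demo

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func createMultiVersionCRD(ctx context.Context, client apiextclient.Interface) (*apiextv1.CustomResourceDefinition, error) {
	// Both versions share one trivial structural schema here; real CRDs
	// usually differ per version.
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
			Type: "object",
			Properties: map[string]apiextv1.JSONSchemaProps{
				"spec": {Type: "object"},
			},
		},
	}
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				// A second served (but non-storage) version in the same group.
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	return client.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
}
```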
Jan 11 19:38:05.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:38:09.078: INFO: namespace crd-publish-openapi-7761 deletion completed in 9.719688159s • [SLOW TEST:60.756 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:49.236: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-7429 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:37:52.907: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7429 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-97b5b8f2-a980-4d4e-bca2-e2cc965260d6-backend && ln -s /tmp/local-volume-test-97b5b8f2-a980-4d4e-bca2-e2cc965260d6-backend /tmp/local-volume-test-97b5b8f2-a980-4d4e-bca2-e2cc965260d6' Jan 11 19:37:54.168: INFO: stderr: "" Jan 11 19:37:54.168: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:37:54.168: INFO: Creating a PV followed by a PVC Jan 11 19:37:54.348: INFO: Waiting for PV local-pvtjrqh to bind to PVC pvc-nfbft Jan 11 19:37:54.348: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-nfbft] to have phase Bound Jan 11 19:37:54.437: INFO: PersistentVolumeClaim pvc-nfbft found and phase=Bound (89.614011ms) Jan 11 19:37:54.437: INFO: Waiting up to 3m0s for PersistentVolume local-pvtjrqh to have phase Bound Jan 11 19:37:54.528: INFO: PersistentVolume local-pvtjrqh found and phase=Bound (90.141634ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:37:57.157: INFO: pod "security-context-c890e8ef-dbab-4e8f-b2c8-73404ae17048" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:37:57.157: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7429 security-context-c890e8ef-dbab-4e8f-b2c8-73404ae17048 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:37:58.438: INFO: stderr: "" Jan 11 19:37:58.438: INFO: stdout: "" Jan 11 19:37:58.438: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jan 11 19:37:58.439: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7429 security-context-c890e8ef-dbab-4e8f-b2c8-73404ae17048 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:37:59.711: INFO: stderr: "" Jan 11 19:37:59.711: INFO: stdout: "test-file-content\n" Jan 11 19:37:59.711: INFO: podRWCmdExec out: "test-file-content\n" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-c890e8ef-dbab-4e8f-b2c8-73404ae17048 in namespace persistent-local-volumes-test-7429 [AfterEach] [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:37:59.802: INFO: Deleting PersistentVolumeClaim "pvc-nfbft" Jan 11 19:37:59.892: INFO: Deleting PersistentVolume "local-pvtjrqh" STEP: Removing the test directory Jan 11 19:37:59.984: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7429 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-97b5b8f2-a980-4d4e-bca2-e2cc965260d6 && rm -r /tmp/local-volume-test-97b5b8f2-a980-4d4e-bca2-e2cc965260d6-backend' Jan 11 19:38:01.360: INFO: stderr: "" Jan 11 19:38:01.360: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:01.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7429" for this suite. 
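The "[Volume type: dir-link]" spec above pre-binds a claim to a hand-made local PV and has a single pod write and then read /mnt/volume1/test-file, which is what the kubectl exec lines show. A sketch of the claim-plus-pod half of that arrangement, assuming the PV already exists; names, image and sizes are illustrative, and very recent client-go releases rename the claim's Resources type to VolumeResourceRequirements:

```go
// Sketch: a PVC pre-bound to an existing PV by volumeName, plus a pod that
// mounts the claim and writes/reads the same kind of test file.
package demo

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createPreboundClaimAndPod(ctx context.Context, client kubernetes.Interface, ns, pvName string) error {
	sc := "local-storage"
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "prebound-claim"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			// Pre-bind the claim to a specific, already-created PV.
			VolumeName: pvName,
			Resources: corev1.ResourceRequirements{ // VolumeResourceRequirements in newest client-go
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
	if _, err := client.CoreV1().PersistentVolumeClaims(ns).Create(ctx, pvc, metav1.CreateOptions{}); err != nil {
		return err
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prebound-claim-reader"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: pvc.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "writer-reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "echo test-file-content > /mnt/volume1/test-file && cat /mnt/volume1/test-file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "volume1", MountPath: "/mnt/volume1"}},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```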
Jan 11 19:38:07.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:38:11.263: INFO: namespace persistent-local-volumes-test-7429 deletion completed in 9.706738194s • [SLOW TEST:22.027 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:38:11.265: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename runtimeclass STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-2531 STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with an unconfigured handler /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:48 [AfterEach] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:13.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-2531" for this suite. 
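The RuntimeClass spec above relies on the API server accepting a RuntimeClass whose handler no node runtime implements, so that a pod requesting it is rejected on the node rather than at admission into the API. The cluster in this log served the API as node.k8s.io/v1beta1 (v1.16); the sketch below uses the GA node/v1 types found in current client-go, and every name in it is illustrative:

```go
// Sketch: a RuntimeClass with an unconfigured handler plus a pod requesting
// it; the kubelet is expected to reject such a pod.
package demo

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createPodWithUnconfiguredHandler(ctx context.Context, client kubernetes.Interface, ns string) (*corev1.Pod, error) {
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "nonexistent-handler-class"},
		// The API server accepts any handler name; only the node can tell
		// whether its container runtime actually knows this handler.
		Handler: "nonexistent-handler",
	}
	if _, err := client.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		return nil, err
	}

	rcName := rc.Name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "runtimeclass-reject-demo"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			RestartPolicy:    corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"true"},
			}},
		},
	}
	// The pod is admitted by the API server but should fail on the node,
	// because no runtime there is configured for the requested handler.
	return client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}
```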
Jan 11 19:38:25.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:38:28.921: INFO: namespace runtimeclass-2531 deletion completed in 15.581258814s • [SLOW TEST:17.656 seconds] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:40 should reject a Pod requesting a RuntimeClass with an unconfigured handler /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:48 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:47.369: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9767 STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: set up a multi version CRD Jan 11 19:37:48.013: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:22.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9767" for this suite. 
Jan 11 19:38:29.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:38:32.675: INFO: namespace crd-publish-openapi-9767 deletion completed in 9.581214458s • [SLOW TEST:45.307 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:59.758: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-1592 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should function for endpoint-Service: http /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:141 STEP: Performing setup for networking test in namespace nettest-1592 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 19:38:00.627: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: Getting node addresses Jan 11 19:38:20.148: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 19:38:20.328: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:20.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-1592" for this suite. 
Jan 11 19:38:34.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:38:37.976: INFO: namespace nettest-1592 deletion completed in 17.557394264s S [SKIPPING] [38.218 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should function for endpoint-Service: http [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:141 Requires at least 2 nodes (not -1) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:38:32.678: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-210 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 11 19:38:33.642: INFO: Waiting up to 5m0s for pod "pod-567bc36b-e7b5-4303-baeb-a3f4b3688c8b" in namespace "emptydir-210" to be "success or failure" Jan 11 19:38:33.731: INFO: Pod "pod-567bc36b-e7b5-4303-baeb-a3f4b3688c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.101241ms Jan 11 19:38:35.821: INFO: Pod "pod-567bc36b-e7b5-4303-baeb-a3f4b3688c8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178672059s STEP: Saw pod success Jan 11 19:38:35.821: INFO: Pod "pod-567bc36b-e7b5-4303-baeb-a3f4b3688c8b" satisfied condition "success or failure" Jan 11 19:38:35.910: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-567bc36b-e7b5-4303-baeb-a3f4b3688c8b container test-container: STEP: delete the pod Jan 11 19:38:36.100: INFO: Waiting for pod pod-567bc36b-e7b5-4303-baeb-a3f4b3688c8b to disappear Jan 11 19:38:36.189: INFO: Pod pod-567bc36b-e7b5-4303-baeb-a3f4b3688c8b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:36.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-210" for this suite. 
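The EmptyDir (non-root,0666,default) spec above amounts to a pod that mounts an emptyDir on the default (node-disk) medium, runs as a non-root UID, creates a file with mode 0666, and verifies the permissions from inside the container. A hand-written equivalent, assuming a plain busybox image and illustrative paths rather than the e2e mounttest image:

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Default medium: backed by node storage (as opposed to Medium: "Memory").
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with 0666 and print its mode so the result can be asserted on.
				Command: []string{"sh", "-c",
					"touch /mnt/scratch/f && chmod 0666 /mnt/scratch/f && stat -c '%a' /mnt/scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
----- end sketch -----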
Jan 11 19:38:44.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:38:47.851: INFO: namespace emptydir-210 deletion completed in 11.571457628s • [SLOW TEST:15.172 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:34:31.893: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-probe Jan 11 19:34:33.271: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Jan 11 19:34:33.628: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5574 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod test-webserver-2aad54db-db36-40fb-bb83-deb356b00ebb in namespace container-probe-5574 Jan 11 19:34:38.270: INFO: Started pod test-webserver-2aad54db-db36-40fb-bb83-deb356b00ebb in namespace container-probe-5574 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 19:34:38.359: INFO: Initial restart count of pod test-webserver-2aad54db-db36-40fb-bb83-deb356b00ebb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:38.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5574" for this suite. 
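The container-probe spec above starts a webserver pod with an HTTP liveness probe and then simply watches restartCount stay at its initial value of 0 for several minutes; that is why the spec runs for over four minutes despite doing very little. A sketch of the probe shape, assuming a hypothetical image that answers 200 on /healthz (the real test uses its own test-webserver image):

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-webserver-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "example.com/healthz-server:latest", // hypothetical image serving /healthz
				LivenessProbe: &corev1.Probe{
					// In the 1.16-era API the probe action sits in the embedded Handler struct
					// (renamed ProbeHandler in later releases).
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	// The test then polls pod.Status.ContainerStatuses[0].RestartCount and fails
	// if it ever rises above the initial value.
	fmt.Println(pod.Name)
}
----- end sketch -----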
Jan 11 19:38:47.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:38:50.506: INFO: namespace container-probe-5574 deletion completed in 11.584369062s • [SLOW TEST:258.613 seconds] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:40.081: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename cronjob STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-7238 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:55 [It] should replace jobs when ReplaceConcurrent /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:139 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:07.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7238" for this suite. 
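The CronJob spec above hinges on concurrencyPolicy: Replace — when the next schedule fires while a job from the previous run is still active, the controller deletes the running job and substitutes a new one, which is the replacement the test waits for. A minimal sketch with the batch/v1beta1 types current in this release (schedule and command are illustrative):

----- illustrative Go sketch -----
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cj := batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "replace-concurrent-demo"},
		Spec: batchv1beta1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1beta1.ReplaceConcurrent,
			JobTemplate: batchv1beta1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:  "busy",
								Image: "busybox",
								// Sleep longer than the schedule interval so the next run
								// overlaps and the controller has to replace the running job.
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
	fmt.Println(cj.Name)
}
----- end sketch -----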
Jan 11 19:38:47.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:38:51.043: INFO: namespace cronjob-7238 deletion completed in 43.590938699s • [SLOW TEST:130.963 seconds] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:139 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:38:09.124: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pvc-protection STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pvc-protection-6134 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:45 Jan 11 19:38:10.352: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Jan 11 19:38:10.531: INFO: Default storage class: "default" Jan 11 19:38:10.531: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Jan 11 19:38:40.983: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectiondczsb] to have phase Bound Jan 11 19:38:41.073: INFO: PersistentVolumeClaim pvc-protectiondczsb found and phase=Bound (89.944251ms) STEP: Checking that PVC Protection finalizer is set [It] Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:86 STEP: Deleting the pod using the PVC Jan 11 19:38:41.163: INFO: Deleting pod "pvc-tester-lxmml" in namespace "pvc-protection-6134" Jan 11 19:38:41.254: INFO: Wait up to 5m0s for pod "pvc-tester-lxmml" to be fully deleted STEP: Deleting the PVC Jan 11 19:38:45.574: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectiondczsb to be removed Jan 11 19:38:45.663: INFO: Claim "pvc-protectiondczsb" in namespace "pvc-protection-6134" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:45.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-6134" for this suite. 
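The pvc-protection spec above relies on the kubernetes.io/pvc-protection finalizer that admission adds to every claim: while a running pod uses the claim, a delete only sets deletionTimestamp, and the object disappears as soon as no pod references it — here effectively immediately, because the pod is deleted first. A sketch of such a claim against the cluster's default StorageClass (size and naming are illustrative):

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-protection-"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			// StorageClassName left nil so the cluster's default class is used
			// (reported as "default" in this log).
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
	// After creation the claim carries, roughly:
	//   metadata.finalizers: ["kubernetes.io/pvc-protection"]
	// so a delete issued while a pod still mounts it only marks the claim for deletion.
	fmt.Println(pvc.GenerateName)
}
----- end sketch -----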
Jan 11 19:38:54.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:38:57.333: INFO: namespace pvc-protection-6134 deletion completed in 11.578705188s [AfterEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:80 • [SLOW TEST:48.209 seconds] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:86 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:38:57.344: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-747 STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:101 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jan 11 19:38:58.943: INFO: Waiting up to 5m0s for pod "security-context-0338871b-2e00-41ef-bc63-ab26a04192e9" in namespace "security-context-747" to be "success or failure" Jan 11 19:38:59.033: INFO: Pod "security-context-0338871b-2e00-41ef-bc63-ab26a04192e9": Phase="Pending", Reason="", readiness=false. Elapsed: 90.192638ms Jan 11 19:39:01.124: INFO: Pod "security-context-0338871b-2e00-41ef-bc63-ab26a04192e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180653106s STEP: Saw pod success Jan 11 19:39:01.124: INFO: Pod "security-context-0338871b-2e00-41ef-bc63-ab26a04192e9" satisfied condition "success or failure" Jan 11 19:39:01.214: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod security-context-0338871b-2e00-41ef-bc63-ab26a04192e9 container test-container: STEP: delete the pod Jan 11 19:39:01.404: INFO: Waiting for pod security-context-0338871b-2e00-41ef-bc63-ab26a04192e9 to disappear Jan 11 19:39:01.498: INFO: Pod security-context-0338871b-2e00-41ef-bc63-ab26a04192e9 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:01.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-747" for this suite. 
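The security-context spec above runs a short-lived pod and checks the effective UID inside the container, exercising RunAsUser. A sketch showing both levels of the setting — the pod-level default and the container-level override, with illustrative UID values:

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	podUID := int64(1001)
	containerUID := int64(1002)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "runasuser-demo"},
		Spec: corev1.PodSpec{
			// Pod-level default UID for all containers.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &podUID},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// `id -u` prints 1002 here: container.SecurityContext.RunAsUser
				// overrides the pod-level value.
				Command:         []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &containerUID},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
----- end sketch -----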
Jan 11 19:39:07.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:11.166: INFO: namespace security-context-747 deletion completed in 9.577699876s • [SLOW TEST:13.822 seconds] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:101 ------------------------------ SS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:38:51.058: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5742 STEP: Waiting for a default service account to be provisioned in namespace [It] should support readOnly file specified in the volumeMount [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362 Jan 11 19:38:51.855: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:38:52.037: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5742" in namespace "provisioning-5742" to be "success or failure" Jan 11 19:38:52.127: INFO: Pod "hostpath-symlink-prep-provisioning-5742": Phase="Pending", Reason="", readiness=false. Elapsed: 89.783508ms Jan 11 19:38:54.217: INFO: Pod "hostpath-symlink-prep-provisioning-5742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.17971743s STEP: Saw pod success Jan 11 19:38:54.217: INFO: Pod "hostpath-symlink-prep-provisioning-5742" satisfied condition "success or failure" Jan 11 19:38:54.217: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5742" in namespace "provisioning-5742" Jan 11 19:38:54.310: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5742" to be fully deleted Jan 11 19:38:54.400: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-hjl2 STEP: Creating a pod to test subpath Jan 11 19:38:54.493: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpathsymlink-hjl2" in namespace "provisioning-5742" to be "success or failure" Jan 11 19:38:54.582: INFO: Pod "pod-subpath-test-hostpathsymlink-hjl2": Phase="Pending", Reason="", readiness=false. Elapsed: 89.457655ms Jan 11 19:38:56.672: INFO: Pod "pod-subpath-test-hostpathsymlink-hjl2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179866308s Jan 11 19:38:58.763: INFO: Pod "pod-subpath-test-hostpathsymlink-hjl2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.270115953s STEP: Saw pod success Jan 11 19:38:58.763: INFO: Pod "pod-subpath-test-hostpathsymlink-hjl2" satisfied condition "success or failure" Jan 11 19:38:58.852: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpathsymlink-hjl2 container test-container-subpath-hostpathsymlink-hjl2: STEP: delete the pod Jan 11 19:38:59.043: INFO: Waiting for pod pod-subpath-test-hostpathsymlink-hjl2 to disappear Jan 11 19:38:59.132: INFO: Pod pod-subpath-test-hostpathsymlink-hjl2 no longer exists STEP: Deleting pod pod-subpath-test-hostpathsymlink-hjl2 Jan 11 19:38:59.132: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-hjl2" in namespace "provisioning-5742" STEP: Deleting pod Jan 11 19:38:59.221: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-hjl2" in namespace "provisioning-5742" Jan 11 19:38:59.401: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5742" in namespace "provisioning-5742" to be "success or failure" Jan 11 19:38:59.491: INFO: Pod "hostpath-symlink-prep-provisioning-5742": Phase="Pending", Reason="", readiness=false. Elapsed: 89.252115ms Jan 11 19:39:01.581: INFO: Pod "hostpath-symlink-prep-provisioning-5742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179242887s STEP: Saw pod success Jan 11 19:39:01.581: INFO: Pod "hostpath-symlink-prep-provisioning-5742" satisfied condition "success or failure" Jan 11 19:39:01.581: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5742" in namespace "provisioning-5742" Jan 11 19:39:01.675: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5742" to be fully deleted Jan 11 19:39:01.765: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:01.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-5742" for this suite. 
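The subPath spec above exposes a single entry of a volume read-only by combining subPath with readOnly on the volumeMount: reads through the mount must succeed while writes must fail. A sketch of that mount shape using a plain hostPath volume and illustrative paths (the real test uses its hostPathSymlink fixture prepared by the hostpath-symlink-prep pod):

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectory
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-readonly-file-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "vol",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/subpath-demo", Type: &hostPathType},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Reading must work; writing through the read-only subPath mount must not.
				Command: []string{"sh", "-c", "cat /mnt/file && ! touch /mnt/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "vol",
					MountPath: "/mnt/file",
					SubPath:   "file", // expose only this one entry of the volume
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
----- end sketch -----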
Jan 11 19:39:08.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:11.434: INFO: namespace provisioning-5742 deletion completed in 9.577467603s • [SLOW TEST:20.376 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support readOnly file specified in the volumeMount [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:36:51.583: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-9667 STEP: Waiting for a default service account to be provisioned in namespace [It] should support restarting containers using directory as subpath [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:303 STEP: deploying csi-hostpath driver Jan 11 19:36:52.415: INFO: creating *v1.ServiceAccount: provisioning-9667/csi-attacher Jan 11 19:36:52.505: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-9667 Jan 11 19:36:52.505: INFO: Define cluster role external-attacher-runner-provisioning-9667 Jan 11 19:36:52.595: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-9667 Jan 11 19:36:52.685: INFO: creating *v1.Role: provisioning-9667/external-attacher-cfg-provisioning-9667 Jan 11 19:36:52.775: INFO: creating *v1.RoleBinding: provisioning-9667/csi-attacher-role-cfg Jan 11 19:36:52.865: INFO: creating *v1.ServiceAccount: provisioning-9667/csi-provisioner Jan 11 19:36:52.956: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-9667 Jan 11 19:36:52.956: INFO: Define cluster role external-provisioner-runner-provisioning-9667 Jan 11 19:36:53.045: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-9667 Jan 11 19:36:53.134: INFO: creating *v1.Role: provisioning-9667/external-provisioner-cfg-provisioning-9667 Jan 11 19:36:53.224: INFO: creating *v1.RoleBinding: provisioning-9667/csi-provisioner-role-cfg Jan 11 19:36:53.313: INFO: creating *v1.ServiceAccount: provisioning-9667/csi-snapshotter Jan 11 19:36:53.402: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-9667 Jan 11 19:36:53.402: INFO: Define cluster 
role external-snapshotter-runner-provisioning-9667 Jan 11 19:36:53.494: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-9667 Jan 11 19:36:53.583: INFO: creating *v1.Role: provisioning-9667/external-snapshotter-leaderelection-provisioning-9667 Jan 11 19:36:53.673: INFO: creating *v1.RoleBinding: provisioning-9667/external-snapshotter-leaderelection Jan 11 19:36:53.762: INFO: creating *v1.ServiceAccount: provisioning-9667/csi-resizer Jan 11 19:36:53.851: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-9667 Jan 11 19:36:53.851: INFO: Define cluster role external-resizer-runner-provisioning-9667 Jan 11 19:36:53.941: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-9667 Jan 11 19:36:54.031: INFO: creating *v1.Role: provisioning-9667/external-resizer-cfg-provisioning-9667 Jan 11 19:36:54.121: INFO: creating *v1.RoleBinding: provisioning-9667/csi-resizer-role-cfg Jan 11 19:36:54.211: INFO: creating *v1.Service: provisioning-9667/csi-hostpath-attacher Jan 11 19:36:54.304: INFO: creating *v1.StatefulSet: provisioning-9667/csi-hostpath-attacher Jan 11 19:36:54.394: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-9667 Jan 11 19:36:54.483: INFO: creating *v1.Service: provisioning-9667/csi-hostpathplugin Jan 11 19:36:54.576: INFO: creating *v1.StatefulSet: provisioning-9667/csi-hostpathplugin Jan 11 19:36:54.666: INFO: creating *v1.Service: provisioning-9667/csi-hostpath-provisioner Jan 11 19:36:54.761: INFO: creating *v1.StatefulSet: provisioning-9667/csi-hostpath-provisioner Jan 11 19:36:54.851: INFO: creating *v1.Service: provisioning-9667/csi-hostpath-resizer Jan 11 19:36:54.947: INFO: creating *v1.StatefulSet: provisioning-9667/csi-hostpath-resizer Jan 11 19:36:55.036: INFO: creating *v1.Service: provisioning-9667/csi-snapshotter Jan 11 19:36:55.129: INFO: creating *v1.StatefulSet: provisioning-9667/csi-snapshotter Jan 11 19:36:55.225: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-9667 Jan 11 19:36:55.314: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:36:55.314: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-9667-csi-hostpath-provisioning-9667-sctcqg4 STEP: creating a claim Jan 11 19:36:55.403: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:36:55.493: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathsqv6t] to have phase Bound Jan 11 19:36:55.582: INFO: PersistentVolumeClaim csi-hostpathsqv6t found but phase is Pending instead of Bound. 
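The dynamic-PV setup above follows the usual pattern: deploy the per-namespace csi-hostpath driver, create a StorageClass whose provisioner is that driver, then create a PVC referencing the class and wait for it to go from Pending to Bound once the external-provisioner has created a volume (about two seconds in this run). Roughly, with the provisioner string mirroring the per-namespace CSIDriver name seen in this log and an illustrative size:

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	className := "csi-hostpath-provisioning-sc"
	sc := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: className},
		Provisioner: "csi-hostpath-provisioning-9667", // the CSIDriver deployed for this namespace
	}

	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "csi-hostpath"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Mi")},
			},
		},
	}
	// The claim stays Pending until the csi-hostpath external-provisioner creates a
	// PersistentVolume for it, at which point it flips to Bound.
	fmt.Println(sc.Name, pvc.GenerateName)
}
----- end sketch -----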
Jan 11 19:36:57.672: INFO: PersistentVolumeClaim csi-hostpathsqv6t found and phase=Bound (2.178486588s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-wpkq STEP: Failing liveness probe Jan 11 19:37:08.121: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-9667 pod-subpath-test-csi-hostpath-dynamicpv-wpkq --container test-container-volume-csi-hostpath-dynamicpv-wpkq -- /bin/sh -c rm /probe-volume/probe-file' Jan 11 19:37:09.425: INFO: stderr: "" Jan 11 19:37:09.425: INFO: stdout: "" Jan 11 19:37:09.425: INFO: Pod exec output: STEP: Waiting for container to restart Jan 11 19:37:09.514: INFO: Container test-container-subpath-csi-hostpath-dynamicpv-wpkq, restarts: 0 Jan 11 19:37:19.604: INFO: Container test-container-subpath-csi-hostpath-dynamicpv-wpkq, restarts: 2 Jan 11 19:37:19.604: INFO: Container has restart count: 2 STEP: Rewriting the file Jan 11 19:37:19.604: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-9667 pod-subpath-test-csi-hostpath-dynamicpv-wpkq --container test-container-volume-csi-hostpath-dynamicpv-wpkq -- /bin/sh -c echo test-after > /probe-volume/probe-file' Jan 11 19:37:20.950: INFO: stderr: "" Jan 11 19:37:20.950: INFO: stdout: "" Jan 11 19:37:20.950: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jan 11 19:37:39.129: INFO: Container has restart count: 3 Jan 11 19:38:41.129: INFO: Container restart has stabilized Jan 11 19:38:41.129: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-wpkq" in namespace "provisioning-9667" Jan 11 19:38:41.219: INFO: Wait up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-wpkq" to be fully deleted STEP: Deleting pod Jan 11 19:38:49.398: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-wpkq" in namespace "provisioning-9667" STEP: Deleting pvc Jan 11 19:38:49.488: INFO: Deleting PersistentVolumeClaim "csi-hostpathsqv6t" Jan 11 19:38:49.578: INFO: Waiting up to 5m0s for PersistentVolume pvc-d33fec60-aa8e-4132-afa9-b3c88d424900 to get deleted Jan 11 19:38:49.667: INFO: PersistentVolume pvc-d33fec60-aa8e-4132-afa9-b3c88d424900 found and phase=Bound (89.149199ms) Jan 11 19:38:54.756: INFO: PersistentVolume pvc-d33fec60-aa8e-4132-afa9-b3c88d424900 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:38:54.847: INFO: deleting *v1.ServiceAccount: provisioning-9667/csi-attacher Jan 11 19:38:54.939: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-9667 Jan 11 19:38:55.029: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-9667 Jan 11 19:38:55.119: INFO: deleting *v1.Role: provisioning-9667/external-attacher-cfg-provisioning-9667 Jan 11 19:38:55.210: INFO: deleting *v1.RoleBinding: provisioning-9667/csi-attacher-role-cfg Jan 11 19:38:55.300: INFO: deleting *v1.ServiceAccount: provisioning-9667/csi-provisioner Jan 11 19:38:55.391: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-9667 Jan 11 19:38:55.481: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-9667 Jan 11 19:38:55.571: INFO: deleting *v1.Role: provisioning-9667/external-provisioner-cfg-provisioning-9667 Jan 11 19:38:55.662: INFO: deleting *v1.RoleBinding: 
provisioning-9667/csi-provisioner-role-cfg Jan 11 19:38:55.752: INFO: deleting *v1.ServiceAccount: provisioning-9667/csi-snapshotter Jan 11 19:38:55.993: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-9667 Jan 11 19:38:56.085: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-9667 Jan 11 19:38:56.176: INFO: deleting *v1.Role: provisioning-9667/external-snapshotter-leaderelection-provisioning-9667 Jan 11 19:38:56.295: INFO: deleting *v1.RoleBinding: provisioning-9667/external-snapshotter-leaderelection Jan 11 19:38:56.387: INFO: deleting *v1.ServiceAccount: provisioning-9667/csi-resizer Jan 11 19:38:56.478: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-9667 Jan 11 19:38:56.568: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-9667 Jan 11 19:38:56.659: INFO: deleting *v1.Role: provisioning-9667/external-resizer-cfg-provisioning-9667 Jan 11 19:38:56.750: INFO: deleting *v1.RoleBinding: provisioning-9667/csi-resizer-role-cfg Jan 11 19:38:56.841: INFO: deleting *v1.Service: provisioning-9667/csi-hostpath-attacher Jan 11 19:38:56.977: INFO: deleting *v1.StatefulSet: provisioning-9667/csi-hostpath-attacher Jan 11 19:38:57.068: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-9667 Jan 11 19:38:57.159: INFO: deleting *v1.Service: provisioning-9667/csi-hostpathplugin Jan 11 19:38:57.253: INFO: deleting *v1.StatefulSet: provisioning-9667/csi-hostpathplugin Jan 11 19:38:57.345: INFO: deleting *v1.Service: provisioning-9667/csi-hostpath-provisioner Jan 11 19:38:57.439: INFO: deleting *v1.StatefulSet: provisioning-9667/csi-hostpath-provisioner Jan 11 19:38:57.529: INFO: deleting *v1.Service: provisioning-9667/csi-hostpath-resizer Jan 11 19:38:57.626: INFO: deleting *v1.StatefulSet: provisioning-9667/csi-hostpath-resizer Jan 11 19:38:57.716: INFO: deleting *v1.Service: provisioning-9667/csi-snapshotter Jan 11 19:38:57.811: INFO: deleting *v1.StatefulSet: provisioning-9667/csi-snapshotter Jan 11 19:38:57.902: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-9667 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:57.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-9667" for this suite. 
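The restart sequence logged above (restart count 0, then 2, then 3, then "restart has stabilized") is driven by an exec liveness probe that reads a file which the test removes and later recreates via kubectl exec through a second container sharing the volume. A simplified single-pod sketch of that shape, assuming an emptyDir instead of the CSI volume and illustrative names and paths (the real test wires the probed container through a CSI subPath mount):

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-restart-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "probe-vol",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					// Probed container: sees the volume through a subPath directory and is
					// restarted whenever the probe file is missing.
					Name:    "probed",
					Image:   "busybox",
					Command: []string{"sh", "-c", "touch /probe-volume/probe-file && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{
						Name: "probe-vol", MountPath: "/probe-volume", SubPath: "probe-dir",
					}},
					LivenessProbe: &corev1.Probe{
						Handler: corev1.Handler{ // 1.16-era field name; ProbeHandler in later releases
							Exec: &corev1.ExecAction{Command: []string{"cat", "/probe-volume/probe-file"}},
						},
						InitialDelaySeconds: 5,
						PeriodSeconds:       1,
						FailureThreshold:    1,
					},
				},
				{
					// Helper container sharing the same subPath; `kubectl exec` into it removes
					// and later recreates probe-file, toggling the probe result for the first container.
					Name:    "volume",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{
						Name: "probe-vol", MountPath: "/probe-volume", SubPath: "probe-dir",
					}},
				},
			},
		},
	}
	fmt.Println(pod.Name)
}
----- end sketch -----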
WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled WARNING: pod log: csi-hostpath-provisioner-0/csi-provisioner: context canceled WARNING: pod log: csi-hostpath-resizer-0/csi-resizer: context canceled WARNING: pod log: csi-hostpathplugin-0/node-driver-registrar: context canceled WARNING: pod log: csi-hostpathplugin-0/hostpath: context canceled WARNING: pod log: csi-hostpathplugin-0/liveness-probe: context canceled WARNING: pod log: csi-snapshotter-0/csi-snapshotter: context canceled WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled WARNING: pod log: csi-hostpath-provisioner-0/csi-provisioner: context canceled WARNING: pod log: csi-hostpath-resizer-0/csi-resizer: context canceled WARNING: pod log: csi-hostpathplugin-0/node-driver-registrar: context canceled WARNING: pod log: csi-hostpathplugin-0/hostpath: context canceled WARNING: pod log: csi-hostpathplugin-0/liveness-probe: context canceled WARNING: pod log: csi-snapshotter-0/csi-snapshotter: context canceled Jan 11 19:39:10.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:13.654: INFO: namespace provisioning-9667 deletion completed in 15.571021081s • [SLOW TEST:142.071 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support restarting containers using directory as subpath [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:303 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:38:37.986: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9280 STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name cm-test-opt-del-a4a7e0ca-d0d5-40d4-9ccf-bb539fc1cb09 STEP: Creating configMap with name cm-test-opt-upd-103a823f-b3e0-40fd-8735-fe0218da5969 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a4a7e0ca-d0d5-40d4-9ccf-bb539fc1cb09 STEP: Updating configmap cm-test-opt-upd-103a823f-b3e0-40fd-8735-fe0218da5969 STEP: Creating configMap with name cm-test-opt-create-5c6298bb-25fa-404d-a92e-67697ee16902 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:38:46.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9280" for this suite. Jan 11 19:39:16.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:20.063: INFO: namespace projected-9280 deletion completed in 33.558000749s • [SLOW TEST:42.077 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:38:47.854: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5267 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath file is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239 Jan 11 19:38:48.653: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:38:48.835: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5267" in namespace "provisioning-5267" to be "success or failure" Jan 11 19:38:48.924: INFO: Pod "hostpath-symlink-prep-provisioning-5267": Phase="Pending", Reason="", readiness=false. Elapsed: 88.863726ms Jan 11 19:38:51.013: INFO: Pod "hostpath-symlink-prep-provisioning-5267": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.178433063s STEP: Saw pod success Jan 11 19:38:51.014: INFO: Pod "hostpath-symlink-prep-provisioning-5267" satisfied condition "success or failure" Jan 11 19:38:51.014: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5267" in namespace "provisioning-5267" Jan 11 19:38:51.105: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5267" to be fully deleted Jan 11 19:38:51.193: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-tpjx STEP: Checking for subpath error in container status Jan 11 19:38:55.462: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-tpjx" in namespace "provisioning-5267" Jan 11 19:38:55.552: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpathsymlink-tpjx" to be fully deleted STEP: Deleting pod Jan 11 19:39:09.731: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-tpjx" in namespace "provisioning-5267" Jan 11 19:39:09.912: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5267" in namespace "provisioning-5267" to be "success or failure" Jan 11 19:39:10.001: INFO: Pod "hostpath-symlink-prep-provisioning-5267": Phase="Pending", Reason="", readiness=false. Elapsed: 89.591575ms Jan 11 19:39:12.091: INFO: Pod "hostpath-symlink-prep-provisioning-5267": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179332434s STEP: Saw pod success Jan 11 19:39:12.091: INFO: Pod "hostpath-symlink-prep-provisioning-5267" satisfied condition "success or failure" Jan 11 19:39:12.091: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5267" in namespace "provisioning-5267" Jan 11 19:39:12.182: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5267" to be fully deleted Jan 11 19:39:12.271: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:12.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-5267" for this suite. 
Jan 11 19:39:18.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:21.937: INFO: namespace provisioning-5267 deletion completed in 9.575417373s • [SLOW TEST:34.084 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if subpath file is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:11.171: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename containers STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-100 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test override arguments Jan 11 19:39:11.902: INFO: Waiting up to 5m0s for pod "client-containers-7ada83af-1aff-48a1-8bb8-3bae1a8c6f62" in namespace "containers-100" to be "success or failure" Jan 11 19:39:11.992: INFO: Pod "client-containers-7ada83af-1aff-48a1-8bb8-3bae1a8c6f62": Phase="Pending", Reason="", readiness=false. Elapsed: 89.682168ms Jan 11 19:39:14.082: INFO: Pod "client-containers-7ada83af-1aff-48a1-8bb8-3bae1a8c6f62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179927285s STEP: Saw pod success Jan 11 19:39:14.082: INFO: Pod "client-containers-7ada83af-1aff-48a1-8bb8-3bae1a8c6f62" satisfied condition "success or failure" Jan 11 19:39:14.172: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod client-containers-7ada83af-1aff-48a1-8bb8-3bae1a8c6f62 container test-container: STEP: delete the pod Jan 11 19:39:14.362: INFO: Waiting for pod client-containers-7ada83af-1aff-48a1-8bb8-3bae1a8c6f62 to disappear Jan 11 19:39:14.452: INFO: Pod client-containers-7ada83af-1aff-48a1-8bb8-3bae1a8c6f62 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:14.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-100" for this suite. 
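The Docker Containers spec above checks that spec.containers[].args replaces the image's default CMD while leaving the image ENTRYPOINT in place. A sketch with an illustrative image and arguments:

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-args-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Args replaces the image's CMD; leaving Command unset keeps the image
				// ENTRYPOINT. Setting Command as well would override the ENTRYPOINT too.
				Args: []string{"echo", "overridden", "arguments"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
----- end sketch -----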
Jan 11 19:39:20.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:24.131: INFO: namespace containers-100 deletion completed in 9.587728799s • [SLOW TEST:12.960 seconds] [k8s.io] Docker Containers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:11.457: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4525 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:90 STEP: Creating a pod to test downward API volume plugin Jan 11 19:39:12.188: INFO: Waiting up to 5m0s for pod "metadata-volume-c073e4c1-d5c7-476c-ae0e-97d95b7bc427" in namespace "projected-4525" to be "success or failure" Jan 11 19:39:12.278: INFO: Pod "metadata-volume-c073e4c1-d5c7-476c-ae0e-97d95b7bc427": Phase="Pending", Reason="", readiness=false. Elapsed: 89.76484ms Jan 11 19:39:14.367: INFO: Pod "metadata-volume-c073e4c1-d5c7-476c-ae0e-97d95b7bc427": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179404433s STEP: Saw pod success Jan 11 19:39:14.368: INFO: Pod "metadata-volume-c073e4c1-d5c7-476c-ae0e-97d95b7bc427" satisfied condition "success or failure" Jan 11 19:39:14.458: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod metadata-volume-c073e4c1-d5c7-476c-ae0e-97d95b7bc427 container client-container: STEP: delete the pod Jan 11 19:39:14.648: INFO: Waiting for pod metadata-volume-c073e4c1-d5c7-476c-ae0e-97d95b7bc427 to disappear Jan 11 19:39:14.737: INFO: Pod metadata-volume-c073e4c1-d5c7-476c-ae0e-97d95b7bc427 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:14.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4525" for this suite. 
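The projected downwardAPI spec above exposes the pod's own name as a file through a projected volume while the pod runs as a non-root UID with an fsGroup, so the projected file must still be readable by that group. A sketch with illustrative UID/GID values and mount path:

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, gid := int64(1000), int64(2000)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "metadata-volume-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &gid, // volume contents are made group-accessible to this GID
			},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
----- end sketch -----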
Jan 11 19:39:21.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:24.416: INFO: namespace projected-4525 deletion completed in 9.587996168s • [SLOW TEST:12.959 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:90 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:13.662: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4739 STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 11 19:39:20.837: INFO: 0 pods remaining Jan 11 19:39:20.837: INFO: 0 pods has nil DeletionTimestamp Jan 11 19:39:20.837: INFO: STEP: Gathering metrics W0111 19:39:21.927215 8607 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:39:21.927: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:21.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4739" for this suite. 
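The garbage-collector spec above exercises foreground cascading deletion: deleting the ReplicationController with propagationPolicy=Foreground makes the API server attach a foregroundDeletion finalizer, so the RC stays visible until the garbage collector has removed all of its pods. A sketch of issuing such a delete, assuming a pre-built clientset and the pre-1.18 client-go Delete signature matching this release:

----- illustrative Go sketch -----
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a ReplicationController with foreground propagation:
// the RC object remains (with the foregroundDeletion finalizer) until the garbage
// collector has deleted every pod it owns.
func deleteRCForeground(client kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationForeground
	return client.CoreV1().ReplicationControllers(namespace).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}

func main() {
	// Building the clientset from the kubeconfig is omitted in this sketch.
	fmt.Println("see deleteRCForeground")
}
----- end sketch -----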
Jan 11 19:39:28.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:31.590: INFO: namespace gc-4739 deletion completed in 9.573414305s • [SLOW TEST:17.929 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:20.076: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8469 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name projected-configmap-test-volume-792ba78b-365a-4169-b8ef-ca30e7ac8890 STEP: Creating a pod to test consume configMaps Jan 11 19:39:21.332: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-61516131-4bf2-43af-952f-a5d3cb22abef" in namespace "projected-8469" to be "success or failure" Jan 11 19:39:21.421: INFO: Pod "pod-projected-configmaps-61516131-4bf2-43af-952f-a5d3cb22abef": Phase="Pending", Reason="", readiness=false. Elapsed: 89.316895ms Jan 11 19:39:23.511: INFO: Pod "pod-projected-configmaps-61516131-4bf2-43af-952f-a5d3cb22abef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179075981s STEP: Saw pod success Jan 11 19:39:23.511: INFO: Pod "pod-projected-configmaps-61516131-4bf2-43af-952f-a5d3cb22abef" satisfied condition "success or failure" Jan 11 19:39:23.601: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-configmaps-61516131-4bf2-43af-952f-a5d3cb22abef container projected-configmap-volume-test: STEP: delete the pod Jan 11 19:39:23.789: INFO: Waiting for pod pod-projected-configmaps-61516131-4bf2-43af-952f-a5d3cb22abef to disappear Jan 11 19:39:23.878: INFO: Pod pod-projected-configmaps-61516131-4bf2-43af-952f-a5d3cb22abef no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:23.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8469" for this suite. 
Jan 11 19:39:30.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:33.558: INFO: namespace projected-8469 deletion completed in 9.588983478s • [SLOW TEST:13.482 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:34 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:24.132: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sysctl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-1296 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63 [It] should reject invalid sysctls /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:153 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:24.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1296" for this suite. 
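The sysctl spec above submits a pod whose securityContext.sysctls mixes one valid entry with invalid names and expects the API server to reject it at validation time, so nothing is ever scheduled. A sketch of that shape (the specific invalid names below are illustrative, not the test's own):

----- illustrative Go sketch -----
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-reject-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{
					{Name: "kernel.shm_rmid_forced", Value: "1"}, // valid, namespaced sysctl
					{Name: "foo-", Value: "bar"},                 // invalid: malformed name
					{Name: "bar..baz", Value: "1"},               // invalid: empty name segment
				},
			},
			Containers: []corev1.Container{{
				Name: "c", Image: "busybox", Command: []string{"true"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	// Creating this pod should fail validation with an Invalid error.
	fmt.Println(pod.Name)
}
----- end sketch -----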
Jan 11 19:39:31.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:34.547: INFO: namespace sysctl-1296 deletion completed in 9.592995867s • [SLOW TEST:10.415 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should reject invalid sysctls /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:153 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:33.562: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-54 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 19:39:34.288: INFO: Waiting up to 5m0s for pod "downwardapi-volume-452653cc-d054-4c3e-9b91-f0d117fe5c4f" in namespace "downward-api-54" to be "success or failure" Jan 11 19:39:34.377: INFO: Pod "downwardapi-volume-452653cc-d054-4c3e-9b91-f0d117fe5c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 89.140143ms Jan 11 19:39:36.467: INFO: Pod "downwardapi-volume-452653cc-d054-4c3e-9b91-f0d117fe5c4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178542963s STEP: Saw pod success Jan 11 19:39:36.467: INFO: Pod "downwardapi-volume-452653cc-d054-4c3e-9b91-f0d117fe5c4f" satisfied condition "success or failure" Jan 11 19:39:36.556: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-452653cc-d054-4c3e-9b91-f0d117fe5c4f container client-container: STEP: delete the pod Jan 11 19:39:36.745: INFO: Waiting for pod downwardapi-volume-452653cc-d054-4c3e-9b91-f0d117fe5c4f to disappear Jan 11 19:39:36.834: INFO: Pod downwardapi-volume-452653cc-d054-4c3e-9b91-f0d117fe5c4f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:36.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-54" for this suite. 
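For the Downward API "podname only" case, the pod mounts a downwardAPI volume whose single item maps metadata.name to a file and the container prints it. A minimal equivalent is sketched below; the pod name, image, and command are assumptions and not what the suite's test image actually runs.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

# Once the pod has succeeded, its log should contain the pod's own name.
kubectl logs downwardapi-volume-demo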
Jan 11 19:39:43.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:46.499: INFO: namespace downward-api-54 deletion completed in 9.574525252s • [SLOW TEST:12.937 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:37:27.090: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-3332 STEP: Waiting for a default service account to be provisioned in namespace [It] should support file as subpath [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:213 STEP: deploying csi-hostpath driver Jan 11 19:37:27.950: INFO: creating *v1.ServiceAccount: provisioning-3332/csi-attacher Jan 11 19:37:28.040: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-3332 Jan 11 19:37:28.040: INFO: Define cluster role external-attacher-runner-provisioning-3332 Jan 11 19:37:28.130: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-3332 Jan 11 19:37:28.221: INFO: creating *v1.Role: provisioning-3332/external-attacher-cfg-provisioning-3332 Jan 11 19:37:28.311: INFO: creating *v1.RoleBinding: provisioning-3332/csi-attacher-role-cfg Jan 11 19:37:28.402: INFO: creating *v1.ServiceAccount: provisioning-3332/csi-provisioner Jan 11 19:37:28.492: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-3332 Jan 11 19:37:28.492: INFO: Define cluster role external-provisioner-runner-provisioning-3332 Jan 11 19:37:28.582: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-3332 Jan 11 19:37:28.672: INFO: creating *v1.Role: provisioning-3332/external-provisioner-cfg-provisioning-3332 Jan 11 19:37:28.763: INFO: creating *v1.RoleBinding: provisioning-3332/csi-provisioner-role-cfg Jan 11 19:37:28.852: INFO: creating *v1.ServiceAccount: provisioning-3332/csi-snapshotter Jan 11 19:37:28.942: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-3332 Jan 11 19:37:28.942: INFO: Define cluster role external-snapshotter-runner-provisioning-3332 Jan 11 19:37:29.033: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-3332 Jan 11 19:37:29.123: INFO: creating *v1.Role: provisioning-3332/external-snapshotter-leaderelection-provisioning-3332 Jan 11 19:37:29.213: INFO: creating *v1.RoleBinding: provisioning-3332/external-snapshotter-leaderelection Jan 11 19:37:29.303: INFO: creating *v1.ServiceAccount: 
provisioning-3332/csi-resizer Jan 11 19:37:29.393: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-3332 Jan 11 19:37:29.393: INFO: Define cluster role external-resizer-runner-provisioning-3332 Jan 11 19:37:29.484: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-3332 Jan 11 19:37:29.574: INFO: creating *v1.Role: provisioning-3332/external-resizer-cfg-provisioning-3332 Jan 11 19:37:29.664: INFO: creating *v1.RoleBinding: provisioning-3332/csi-resizer-role-cfg Jan 11 19:37:29.754: INFO: creating *v1.Service: provisioning-3332/csi-hostpath-attacher Jan 11 19:37:29.848: INFO: creating *v1.StatefulSet: provisioning-3332/csi-hostpath-attacher Jan 11 19:37:29.938: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-3332 Jan 11 19:37:30.028: INFO: creating *v1.Service: provisioning-3332/csi-hostpathplugin Jan 11 19:37:30.123: INFO: creating *v1.StatefulSet: provisioning-3332/csi-hostpathplugin Jan 11 19:37:30.213: INFO: creating *v1.Service: provisioning-3332/csi-hostpath-provisioner Jan 11 19:37:30.306: INFO: creating *v1.StatefulSet: provisioning-3332/csi-hostpath-provisioner Jan 11 19:37:30.396: INFO: creating *v1.Service: provisioning-3332/csi-hostpath-resizer Jan 11 19:37:30.490: INFO: creating *v1.StatefulSet: provisioning-3332/csi-hostpath-resizer Jan 11 19:37:30.580: INFO: creating *v1.Service: provisioning-3332/csi-snapshotter Jan 11 19:37:30.675: INFO: creating *v1.StatefulSet: provisioning-3332/csi-snapshotter Jan 11 19:37:30.766: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-3332 Jan 11 19:37:30.856: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:37:30.856: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-3332-csi-hostpath-provisioning-3332-sc67nqw STEP: creating a claim Jan 11 19:37:30.946: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:37:31.038: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathcr78c] to have phase Bound Jan 11 19:37:31.128: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:33.217: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:35.308: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:37.397: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:39.487: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:41.576: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:43.667: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:45.756: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:47.847: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:49.938: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:52.028: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:54.118: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:37:56.208: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. 
Jan 11 19:37:58.297: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:00.399: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:02.488: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:04.578: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:06.668: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:08.757: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:10.847: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:12.937: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:15.026: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:17.116: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:19.206: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:21.296: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:23.385: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:25.475: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:27.565: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:29.654: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:31.744: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:33.834: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:35.923: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:38.013: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:40.103: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:42.195: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:44.284: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:46.374: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:48.464: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:50.554: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:52.643: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:54.733: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:56.822: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:38:58.973: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. Jan 11 19:39:01.063: INFO: PersistentVolumeClaim csi-hostpathcr78c found but phase is Pending instead of Bound. 
Jan 11 19:39:03.154: INFO: PersistentVolumeClaim csi-hostpathcr78c found and phase=Bound (1m32.115869643s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-lz6m STEP: Creating a pod to test atomic-volume-subpath Jan 11 19:39:03.428: INFO: Waiting up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m" in namespace "provisioning-3332" to be "success or failure" Jan 11 19:39:03.518: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Pending", Reason="", readiness=false. Elapsed: 90.073093ms Jan 11 19:39:05.608: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180114042s Jan 11 19:39:07.700: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272358046s Jan 11 19:39:09.790: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Running", Reason="", readiness=true. Elapsed: 6.362564356s Jan 11 19:39:11.881: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Running", Reason="", readiness=true. Elapsed: 8.452767915s Jan 11 19:39:13.971: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Running", Reason="", readiness=true. Elapsed: 10.542814736s Jan 11 19:39:16.061: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Running", Reason="", readiness=true. Elapsed: 12.633122231s Jan 11 19:39:18.151: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Running", Reason="", readiness=true. Elapsed: 14.723165174s Jan 11 19:39:20.241: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Running", Reason="", readiness=true. Elapsed: 16.81318982s Jan 11 19:39:22.332: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Running", Reason="", readiness=true. Elapsed: 18.903781158s Jan 11 19:39:24.421: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Running", Reason="", readiness=true. Elapsed: 20.993648761s Jan 11 19:39:26.511: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Running", Reason="", readiness=true. Elapsed: 23.083699293s Jan 11 19:39:28.601: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 25.173689324s STEP: Saw pod success Jan 11 19:39:28.602: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m" satisfied condition "success or failure" Jan 11 19:39:28.692: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-csi-hostpath-dynamicpv-lz6m container test-container-subpath-csi-hostpath-dynamicpv-lz6m: STEP: delete the pod Jan 11 19:39:28.882: INFO: Waiting for pod pod-subpath-test-csi-hostpath-dynamicpv-lz6m to disappear Jan 11 19:39:28.972: INFO: Pod pod-subpath-test-csi-hostpath-dynamicpv-lz6m no longer exists STEP: Deleting pod pod-subpath-test-csi-hostpath-dynamicpv-lz6m Jan 11 19:39:28.972: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m" in namespace "provisioning-3332" STEP: Deleting pod Jan 11 19:39:29.062: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-lz6m" in namespace "provisioning-3332" STEP: Deleting pvc Jan 11 19:39:29.151: INFO: Deleting PersistentVolumeClaim "csi-hostpathcr78c" Jan 11 19:39:29.242: INFO: Waiting up to 5m0s for PersistentVolume pvc-d381ea03-e9d2-447d-bd17-9601427a40e0 to get deleted Jan 11 19:39:29.332: INFO: PersistentVolume pvc-d381ea03-e9d2-447d-bd17-9601427a40e0 found and phase=Bound (89.852342ms) Jan 11 19:39:34.429: INFO: PersistentVolume pvc-d381ea03-e9d2-447d-bd17-9601427a40e0 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:39:34.520: INFO: deleting *v1.ServiceAccount: provisioning-3332/csi-attacher Jan 11 19:39:34.612: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-3332 Jan 11 19:39:34.704: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-3332 Jan 11 19:39:34.795: INFO: deleting *v1.Role: provisioning-3332/external-attacher-cfg-provisioning-3332 Jan 11 19:39:34.887: INFO: deleting *v1.RoleBinding: provisioning-3332/csi-attacher-role-cfg Jan 11 19:39:34.979: INFO: deleting *v1.ServiceAccount: provisioning-3332/csi-provisioner Jan 11 19:39:35.069: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-3332 Jan 11 19:39:35.161: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-3332 Jan 11 19:39:35.252: INFO: deleting *v1.Role: provisioning-3332/external-provisioner-cfg-provisioning-3332 Jan 11 19:39:35.344: INFO: deleting *v1.RoleBinding: provisioning-3332/csi-provisioner-role-cfg Jan 11 19:39:35.435: INFO: deleting *v1.ServiceAccount: provisioning-3332/csi-snapshotter Jan 11 19:39:35.526: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-3332 Jan 11 19:39:35.617: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-3332 Jan 11 19:39:35.708: INFO: deleting *v1.Role: provisioning-3332/external-snapshotter-leaderelection-provisioning-3332 Jan 11 19:39:35.800: INFO: deleting *v1.RoleBinding: provisioning-3332/external-snapshotter-leaderelection Jan 11 19:39:35.891: INFO: deleting *v1.ServiceAccount: provisioning-3332/csi-resizer Jan 11 19:39:35.982: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-3332 Jan 11 19:39:36.073: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-3332 Jan 11 19:39:36.165: INFO: deleting *v1.Role: provisioning-3332/external-resizer-cfg-provisioning-3332 Jan 11 19:39:36.256: INFO: deleting *v1.RoleBinding: provisioning-3332/csi-resizer-role-cfg Jan 11 19:39:36.347: INFO: deleting *v1.Service: provisioning-3332/csi-hostpath-attacher Jan 11 19:39:36.444: INFO: deleting *v1.StatefulSet: provisioning-3332/csi-hostpath-attacher Jan 11 19:39:36.535: INFO: 
deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-3332 Jan 11 19:39:36.627: INFO: deleting *v1.Service: provisioning-3332/csi-hostpathplugin Jan 11 19:39:36.724: INFO: deleting *v1.StatefulSet: provisioning-3332/csi-hostpathplugin Jan 11 19:39:36.815: INFO: deleting *v1.Service: provisioning-3332/csi-hostpath-provisioner Jan 11 19:39:36.912: INFO: deleting *v1.StatefulSet: provisioning-3332/csi-hostpath-provisioner Jan 11 19:39:37.003: INFO: deleting *v1.Service: provisioning-3332/csi-hostpath-resizer Jan 11 19:39:37.099: INFO: deleting *v1.StatefulSet: provisioning-3332/csi-hostpath-resizer Jan 11 19:39:37.191: INFO: deleting *v1.Service: provisioning-3332/csi-snapshotter Jan 11 19:39:37.286: INFO: deleting *v1.StatefulSet: provisioning-3332/csi-snapshotter Jan 11 19:39:37.378: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-3332 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:37.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-3332" for this suite. WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled WARNING: pod log: csi-hostpath-provisioner-0/csi-provisioner: context canceled WARNING: pod log: csi-hostpath-resizer-0/csi-resizer: context canceled WARNING: pod log: csi-hostpathplugin-0/node-driver-registrar: context canceled WARNING: pod log: csi-hostpathplugin-0/hostpath: context canceled WARNING: pod log: csi-hostpathplugin-0/liveness-probe: context canceled WARNING: pod log: csi-snapshotter-0/csi-snapshotter: context canceled WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled WARNING: pod log: csi-hostpath-provisioner-0/csi-provisioner: context canceled WARNING: pod log: csi-hostpath-resizer-0/csi-resizer: context canceled WARNING: pod log: csi-hostpathplugin-0/node-driver-registrar: context canceled WARNING: pod log: csi-hostpathplugin-0/hostpath: context canceled WARNING: pod log: csi-hostpathplugin-0/liveness-probe: context canceled WARNING: pod log: csi-snapshotter-0/csi-snapshotter: context canceled Jan 11 19:39:45.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:49.154: INFO: namespace provisioning-3332 deletion completed in 11.594088095s • [SLOW TEST:142.064 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support file as subpath [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:213 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 
19:39:34.553: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8423 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 19:39:35.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-182da614-5c3c-4d0a-839c-95dc7b714898" in namespace "projected-8423" to be "success or failure" Jan 11 19:39:35.372: INFO: Pod "downwardapi-volume-182da614-5c3c-4d0a-839c-95dc7b714898": Phase="Pending", Reason="", readiness=false. Elapsed: 89.385585ms Jan 11 19:39:37.461: INFO: Pod "downwardapi-volume-182da614-5c3c-4d0a-839c-95dc7b714898": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178970368s STEP: Saw pod success Jan 11 19:39:37.461: INFO: Pod "downwardapi-volume-182da614-5c3c-4d0a-839c-95dc7b714898" satisfied condition "success or failure" Jan 11 19:39:37.551: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-182da614-5c3c-4d0a-839c-95dc7b714898 container client-container: STEP: delete the pod Jan 11 19:39:37.761: INFO: Waiting for pod downwardapi-volume-182da614-5c3c-4d0a-839c-95dc7b714898 to disappear Jan 11 19:39:37.850: INFO: Pod downwardapi-volume-182da614-5c3c-4d0a-839c-95dc7b714898 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:37.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8423" for this suite. 
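The projected downwardAPI case above relies on the rule that when a container declares no memory limit, limits.memory exposed through the Downward API resolves to the node's allocatable memory. A sketch of a pod that makes this observable is below; the pod name, image, and file path are illustrative assumptions.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # No resources.limits.memory is set, so the projected value falls back to node allocatable.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF

# The logged value should be a large integer (bytes, with the default divisor of 1)
# that lines up with the node's allocatable memory.
kubectl logs projected-downwardapi-demo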
Jan 11 19:39:46.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:39:49.534: INFO: namespace projected-8423 deletion completed in 11.592769894s • [SLOW TEST:14.982 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:49.158: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-129 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:46 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:63 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 11 19:39:49.895: INFO: Waiting up to 5m0s for pod "pod-51236cfb-ec48-4cbc-bbf7-fd26786efa20" in namespace "emptydir-129" to be "success or failure" Jan 11 19:39:49.985: INFO: Pod "pod-51236cfb-ec48-4cbc-bbf7-fd26786efa20": Phase="Pending", Reason="", readiness=false. Elapsed: 89.907448ms Jan 11 19:39:52.078: INFO: Pod "pod-51236cfb-ec48-4cbc-bbf7-fd26786efa20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182936472s STEP: Saw pod success Jan 11 19:39:52.078: INFO: Pod "pod-51236cfb-ec48-4cbc-bbf7-fd26786efa20" satisfied condition "success or failure" Jan 11 19:39:52.168: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-51236cfb-ec48-4cbc-bbf7-fd26786efa20 container test-container: STEP: delete the pod Jan 11 19:39:52.359: INFO: Waiting for pod pod-51236cfb-ec48-4cbc-bbf7-fd26786efa20 to disappear Jan 11 19:39:52.448: INFO: Pod pod-51236cfb-ec48-4cbc-bbf7-fd26786efa20 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:52.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-129" for this suite. 
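The EmptyDir/FSGroup case exercises a tmpfs-backed emptyDir with a pod-level fsGroup: a root-owned file is written with 0644 permissions and its ownership is checked. A rough equivalent is sketched below; the fsGroup value, image, paths, and expected ls output are illustrative assumptions rather than the suite's exact fixture.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 123
  containers:
  - name: test-container
    image: busybox:1.29
    securityContext:
      runAsUser: 0
    # Write a 0644 file on the tmpfs mount, then show its numeric mode/uid/gid.
    command: ["sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF

# The pod log's ls -ln line should show mode -rw-r--r-- with uid 0 and, via the
# fsGroup-managed volume, group 123 on the new file.
kubectl logs emptydir-fsgroup-demo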
Jan 11 19:39:58.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:02.192: INFO: namespace emptydir-129 deletion completed in 9.650860444s • [SLOW TEST:13.034 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44 files with FSGroup ownership should support (root,0644,tmpfs) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:63 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:46.505: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-4150 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 11 19:39:48.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368388, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368388, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368388, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368388, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 19:39:51.505: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:39:51.595: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List 
CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:39:53.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4150" for this suite. Jan 11 19:39:59.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:02.854: INFO: namespace crd-webhook-4150 deletion completed in 9.578404903s [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:16.707 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:02.201: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8228 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name projected-configmap-test-volume-map-c0c2d6da-1e96-4ca2-851c-ca3f07b03aec STEP: Creating a pod to test consume configMaps Jan 11 19:40:03.047: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-182dd087-22d2-44af-a165-949a87c7a187" in namespace "projected-8228" to be "success or failure" Jan 11 19:40:03.137: INFO: Pod "pod-projected-configmaps-182dd087-22d2-44af-a165-949a87c7a187": Phase="Pending", Reason="", readiness=false. Elapsed: 89.649124ms Jan 11 19:40:05.227: INFO: Pod "pod-projected-configmaps-182dd087-22d2-44af-a165-949a87c7a187": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180023334s STEP: Saw pod success Jan 11 19:40:05.228: INFO: Pod "pod-projected-configmaps-182dd087-22d2-44af-a165-949a87c7a187" satisfied condition "success or failure" Jan 11 19:40:05.320: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-projected-configmaps-182dd087-22d2-44af-a165-949a87c7a187 container projected-configmap-volume-test: STEP: delete the pod Jan 11 19:40:05.516: INFO: Waiting for pod pod-projected-configmaps-182dd087-22d2-44af-a165-949a87c7a187 to disappear Jan 11 19:40:05.606: INFO: Pod pod-projected-configmaps-182dd087-22d2-44af-a165-949a87c7a187 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:05.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8228" for this suite. Jan 11 19:40:11.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:15.287: INFO: namespace projected-8228 deletion completed in 9.588655312s • [SLOW TEST:13.086 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:31.615: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2618 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Update Demo /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should scale a replication controller [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a replication controller Jan 11 19:39:32.256: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-2618' Jan 11 19:39:33.269: INFO: stderr: "" Jan 11 19:39:33.269: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 11 19:39:33.269: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2618' Jan 11 19:39:33.790: INFO: stderr: "" Jan 11 19:39:33.790: INFO: stdout: "update-demo-nautilus-rqkwn update-demo-nautilus-vm5hn " Jan 11 19:39:33.791: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-rqkwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:34.242: INFO: stderr: "" Jan 11 19:39:34.242: INFO: stdout: "" Jan 11 19:39:34.242: INFO: update-demo-nautilus-rqkwn is created but not running Jan 11 19:39:39.243: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2618' Jan 11 19:39:39.734: INFO: stderr: "" Jan 11 19:39:39.734: INFO: stdout: "update-demo-nautilus-rqkwn update-demo-nautilus-vm5hn " Jan 11 19:39:39.734: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-rqkwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:40.199: INFO: stderr: "" Jan 11 19:39:40.199: INFO: stdout: "true" Jan 11 19:39:40.199: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-rqkwn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:40.677: INFO: stderr: "" Jan 11 19:39:40.678: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 19:39:40.678: INFO: validating pod update-demo-nautilus-rqkwn Jan 11 19:39:40.860: INFO: got data: { "image": "nautilus.jpg" } Jan 11 19:39:40.860: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 19:39:40.860: INFO: update-demo-nautilus-rqkwn is verified up and running Jan 11 19:39:40.860: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-vm5hn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:41.321: INFO: stderr: "" Jan 11 19:39:41.321: INFO: stdout: "true" Jan 11 19:39:41.321: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-vm5hn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:41.794: INFO: stderr: "" Jan 11 19:39:41.794: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 19:39:41.794: INFO: validating pod update-demo-nautilus-vm5hn Jan 11 19:39:41.975: INFO: got data: { "image": "nautilus.jpg" } Jan 11 19:39:41.975: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 19:39:41.975: INFO: update-demo-nautilus-vm5hn is verified up and running STEP: scaling down the replication controller Jan 11 19:39:41.977: INFO: scanned /root for discovery docs: Jan 11 19:39:41.977: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2618' Jan 11 19:39:42.630: INFO: stderr: "" Jan 11 19:39:42.630: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 19:39:42.630: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2618' Jan 11 19:39:43.099: INFO: stderr: "" Jan 11 19:39:43.099: INFO: stdout: "update-demo-nautilus-rqkwn update-demo-nautilus-vm5hn " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 19:39:48.100: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2618' Jan 11 19:39:48.552: INFO: stderr: "" Jan 11 19:39:48.552: INFO: stdout: "update-demo-nautilus-rqkwn " Jan 11 19:39:48.553: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-rqkwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:49.026: INFO: stderr: "" Jan 11 19:39:49.026: INFO: stdout: "true" Jan 11 19:39:49.026: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-rqkwn -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:49.478: INFO: stderr: "" Jan 11 19:39:49.478: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 19:39:49.478: INFO: validating pod update-demo-nautilus-rqkwn Jan 11 19:39:49.571: INFO: got data: { "image": "nautilus.jpg" } Jan 11 19:39:49.571: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 19:39:49.571: INFO: update-demo-nautilus-rqkwn is verified up and running STEP: scaling up the replication controller Jan 11 19:39:49.572: INFO: scanned /root for discovery docs: Jan 11 19:39:49.572: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2618' Jan 11 19:39:50.228: INFO: stderr: "" Jan 11 19:39:50.228: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 19:39:50.228: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2618' Jan 11 19:39:50.703: INFO: stderr: "" Jan 11 19:39:50.703: INFO: stdout: "update-demo-nautilus-qqfqx update-demo-nautilus-rqkwn " Jan 11 19:39:50.703: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-qqfqx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:51.201: INFO: stderr: "" Jan 11 19:39:51.201: INFO: stdout: "" Jan 11 19:39:51.201: INFO: update-demo-nautilus-qqfqx is created but not running Jan 11 19:39:56.202: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2618' Jan 11 19:39:56.661: INFO: stderr: "" Jan 11 19:39:56.661: INFO: stdout: "update-demo-nautilus-qqfqx update-demo-nautilus-rqkwn " Jan 11 19:39:56.661: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-qqfqx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:57.126: INFO: stderr: "" Jan 11 19:39:57.126: INFO: stdout: "true" Jan 11 19:39:57.126: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-qqfqx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:57.589: INFO: stderr: "" Jan 11 19:39:57.589: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 19:39:57.589: INFO: validating pod update-demo-nautilus-qqfqx Jan 11 19:39:57.730: INFO: got data: { "image": "nautilus.jpg" } Jan 11 19:39:57.730: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 19:39:57.730: INFO: update-demo-nautilus-qqfqx is verified up and running Jan 11 19:39:57.730: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-rqkwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:58.182: INFO: stderr: "" Jan 11 19:39:58.182: INFO: stdout: "true" Jan 11 19:39:58.182: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-rqkwn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2618' Jan 11 19:39:58.645: INFO: stderr: "" Jan 11 19:39:58.645: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 19:39:58.645: INFO: validating pod update-demo-nautilus-rqkwn Jan 11 19:39:58.767: INFO: got data: { "image": "nautilus.jpg" } Jan 11 19:39:58.767: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 19:39:58.767: INFO: update-demo-nautilus-rqkwn is verified up and running STEP: using delete to clean up resources Jan 11 19:39:58.767: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-2618' Jan 11 19:39:59.322: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:39:59.323: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 11 19:39:59.323: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2618' Jan 11 19:39:59.907: INFO: stderr: "No resources found in kubectl-2618 namespace.\n" Jan 11 19:39:59.907: INFO: stdout: "" Jan 11 19:39:59.907: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=update-demo --namespace=kubectl-2618 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 19:40:00.404: INFO: stderr: "" Jan 11 19:40:00.404: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:00.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2618" for this suite. Jan 11 19:40:12.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:16.072: INFO: namespace kubectl-2618 deletion completed in 15.57654342s • [SLOW TEST:44.457 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275 should scale a replication controller [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:49.544: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-4532 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-89785bea-3e12-4f5d-8c14-2ea958aeaecb" Jan 11 19:39:52.638: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec 
--namespace=persistent-local-volumes-test-4532 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-89785bea-3e12-4f5d-8c14-2ea958aeaecb && dd if=/dev/zero of=/tmp/local-volume-test-89785bea-3e12-4f5d-8c14-2ea958aeaecb/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-89785bea-3e12-4f5d-8c14-2ea958aeaecb/file' Jan 11 19:39:54.050: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0190587 s, 1.1 GB/s\n" Jan 11 19:39:54.050: INFO: stdout: "" Jan 11 19:39:54.050: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4532 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-89785bea-3e12-4f5d-8c14-2ea958aeaecb/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:39:55.393: INFO: stderr: "" Jan 11 19:39:55.393: INFO: stdout: "/dev/loop0\n" STEP: Creating local PVCs and PVs Jan 11 19:39:55.393: INFO: Creating a PV followed by a PVC Jan 11 19:39:55.576: INFO: Waiting for PV local-pvt225g to bind to PVC pvc-2wq7c Jan 11 19:39:55.576: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-2wq7c] to have phase Bound Jan 11 19:39:55.666: INFO: PersistentVolumeClaim pvc-2wq7c found and phase=Bound (89.729515ms) Jan 11 19:39:55.666: INFO: Waiting up to 3m0s for PersistentVolume local-pvt225g to have phase Bound Jan 11 19:39:55.755: INFO: PersistentVolume local-pvt225g found and phase=Bound (89.58901ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:39:58.389: INFO: pod "security-context-e64848e2-2769-48fb-947d-d92ee888fb8c" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:39:58.389: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4532 security-context-e64848e2-2769-48fb-947d-d92ee888fb8c -- /bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file' Jan 11 19:39:59.925: INFO: stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000044 seconds, 399.5KB/s\n" Jan 11 19:39:59.925: INFO: stdout: "\n" Jan 11 19:39:59.925: INFO: podRWCmdExec out: "\n" err: Jan 11 19:39:59.925: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4532 security-context-e64848e2-2769-48fb-947d-d92ee888fb8c -- /bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1' Jan 11 19:40:01.278: INFO: stderr: "" Jan 11 19:40:01.278: INFO: stdout: "test-file-content..................................................................................." 
Jan 11 19:40:01.278: INFO: podRWCmdExec out: "test-file-content..................................................................................." err: STEP: Deleting pod1 STEP: Deleting pod security-context-e64848e2-2769-48fb-947d-d92ee888fb8c in namespace persistent-local-volumes-test-4532 STEP: Creating pod2 STEP: Creating a pod Jan 11 19:40:03.831: INFO: pod "security-context-c6ebbece-650c-4f4f-abb4-6c35e3d304d8" created on Node "ip-10-250-27-25.ec2.internal" STEP: Reading in pod2 Jan 11 19:40:03.831: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4532 security-context-c6ebbece-650c-4f4f-abb4-6c35e3d304d8 -- /bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1' Jan 11 19:40:05.232: INFO: stderr: "" Jan 11 19:40:05.233: INFO: stdout: "test-file-content..................................................................................." Jan 11 19:40:05.233: INFO: podRWCmdExec out: "test-file-content..................................................................................." err: STEP: Deleting pod2 STEP: Deleting pod security-context-c6ebbece-650c-4f4f-abb4-6c35e3d304d8 in namespace persistent-local-volumes-test-4532 [AfterEach] [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:40:05.323: INFO: Deleting PersistentVolumeClaim "pvc-2wq7c" Jan 11 19:40:05.413: INFO: Deleting PersistentVolume "local-pvt225g" Jan 11 19:40:05.505: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4532 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-89785bea-3e12-4f5d-8c14-2ea958aeaecb/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:40:06.895: INFO: stderr: "" Jan 11 19:40:06.895: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-89785bea-3e12-4f5d-8c14-2ea958aeaecb/file Jan 11 19:40:06.896: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4532 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 19:40:08.206: INFO: stderr: "" Jan 11 19:40:08.206: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-89785bea-3e12-4f5d-8c14-2ea958aeaecb Jan 11 19:40:08.206: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4532 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-89785bea-3e12-4f5d-8c14-2ea958aeaecb' Jan 11 19:40:09.566: INFO: stderr: "" Jan 11 19:40:09.566: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:09.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4532" for this suite. Jan 11 19:40:16.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:19.337: INFO: namespace persistent-local-volumes-test-4532 deletion completed in 9.588946176s • [SLOW TEST:29.793 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:21.951: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-597 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:371 STEP: creating the pod from Jan 11 19:39:22.590: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-597' Jan 11 19:39:23.647: INFO: stderr: "" Jan 11 19:39:23.647: INFO: stdout: "pod/httpd created\n" Jan 11 19:39:23.647: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Jan 11 19:39:23.647: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-597" to be "running and ready" Jan 11 19:39:23.736: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.229075ms Jan 11 19:39:25.826: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.17874894s Jan 11 19:39:27.915: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.268175189s Jan 11 19:39:30.004: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.357641589s Jan 11 19:39:32.096: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.448814649s Jan 11 19:39:34.186: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.538754339s Jan 11 19:39:36.275: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.628424279s Jan 11 19:39:36.275: INFO: Pod "httpd" satisfied condition "running and ready" Jan 11 19:39:36.275: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] should handle in-cluster config /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:613 STEP: adding rbac permissions STEP: overriding icc with values provided by flags Jan 11 19:39:36.555: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c printenv KUBERNETES_SERVICE_HOST' Jan 11 19:39:37.881: INFO: stderr: "+ printenv KUBERNETES_SERVICE_HOST\n" Jan 11 19:39:37.881: INFO: stdout: "100.104.0.1\n" Jan 11 19:39:37.881: INFO: stdout: 100.104.0.1 Jan 11 19:39:37.881: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c printenv KUBERNETES_SERVICE_PORT' Jan 11 19:39:39.250: INFO: stderr: "+ printenv KUBERNETES_SERVICE_PORT\n" Jan 11 19:39:39.250: INFO: stdout: "443\n" Jan 11 19:39:39.250: INFO: stdout: 443 Jan 11 19:39:39.250: INFO: copying /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl to the httpd pod Jan 11 19:39:39.250: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config cp /go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl-597/httpd:/tmp/' Jan 11 19:39:52.670: INFO: stderr: "" Jan 11 19:39:52.670: INFO: stdout: "" Jan 11 19:39:52.670: INFO: copying override kubeconfig to the httpd pod Jan 11 19:39:52.670: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config cp /tmp/icc-override617181757/icc-override.kubeconfig kubectl-597/httpd:/tmp/' Jan 11 19:39:55.274: INFO: stderr: "" Jan 11 19:39:55.275: INFO: stdout: "" Jan 11 19:39:55.275: INFO: copying configmap manifests to the httpd pod Jan 11 19:39:55.275: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config cp /tmp/icc-override617181757/invalid-configmap-with-namespace.yaml kubectl-597/httpd:/tmp/' Jan 11 19:39:57.860: INFO: stderr: "" Jan 11 19:39:57.861: INFO: stdout: "" Jan 11 19:39:57.861: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config cp /tmp/icc-override617181757/invalid-configmap-without-namespace.yaml kubectl-597/httpd:/tmp/' Jan 11 19:40:00.611: INFO: stderr: "" Jan 11 19:40:00.611: INFO: stdout: "" STEP: getting pods with in-cluster configs Jan 11 19:40:00.611: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c /tmp/kubectl get pods --v=6 2>&1' Jan 11 
19:40:02.553: INFO: stderr: "+ /tmp/kubectl get pods '--v=6'\n" Jan 11 19:40:02.553: INFO: stdout: "I0111 19:40:02.048734 150 merged_client_builder.go:164] Using in-cluster namespace\nI0111 19:40:02.049209 150 merged_client_builder.go:122] Using in-cluster configuration\nI0111 19:40:02.125657 150 round_trippers.go:443] GET https://100.104.0.1:443/api?timeout=32s 200 OK in 74 milliseconds\nI0111 19:40:02.140835 150 round_trippers.go:443] GET https://100.104.0.1:443/apis?timeout=32s 200 OK in 2 milliseconds\nI0111 19:40:02.164905 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/extensions/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0111 19:40:02.164957 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0111 19:40:02.165184 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0111 19:40:02.165265 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0111 19:40:02.165272 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apps/v1?timeout=32s 200 OK in 6 milliseconds\nI0111 19:40:02.165298 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0111 19:40:02.165301 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0111 19:40:02.165318 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0111 19:40:02.176619 150 round_trippers.go:443] GET https://100.104.0.1:443/api/v1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.176668 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/autoscaling/v1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.176727 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.176771 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.176824 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.176871 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.177409 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.177475 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.177553 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.177702 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.177757 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/batch/v1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.177799 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.177842 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.177893 150 round_trippers.go:443] GET 
https://100.104.0.1:443/apis/batch/v1beta1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.177933 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.177983 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.178035 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/cert.gardener.cloud/v1alpha1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.178075 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.178125 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.178172 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.178211 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/crd.projectcalico.org/v1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.178272 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.178345 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.178398 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.178440 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/policy/v1beta1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.178490 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds\nI0111 19:40:02.178559 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/dns.gardener.cloud/v1alpha1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.178603 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/snapshot.storage.k8s.io/v1alpha1?timeout=32s 200 OK in 18 milliseconds\nI0111 19:40:02.181993 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 200 OK in 23 milliseconds\nI0111 19:40:02.456296 150 merged_client_builder.go:122] Using in-cluster configuration\nI0111 19:40:02.463228 150 merged_client_builder.go:122] Using in-cluster configuration\nI0111 19:40:02.472492 150 round_trippers.go:443] GET https://100.104.0.1:443/api/v1/namespaces/kubectl-597/pods?limit=500 200 OK in 8 milliseconds\nNAME READY STATUS RESTARTS AGE\nhttpd 1/1 Running 0 39s\n" Jan 11 19:40:02.553: INFO: stdout: I0111 19:40:02.048734 150 merged_client_builder.go:164] Using in-cluster namespace I0111 19:40:02.049209 150 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:02.125657 150 round_trippers.go:443] GET https://100.104.0.1:443/api?timeout=32s 200 OK in 74 milliseconds I0111 19:40:02.140835 150 round_trippers.go:443] GET https://100.104.0.1:443/apis?timeout=32s 200 OK in 2 milliseconds I0111 19:40:02.164905 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/extensions/v1beta1?timeout=32s 200 OK in 6 milliseconds I0111 19:40:02.164957 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds I0111 19:40:02.165184 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds 
I0111 19:40:02.165265 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds I0111 19:40:02.165272 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apps/v1?timeout=32s 200 OK in 6 milliseconds I0111 19:40:02.165298 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds I0111 19:40:02.165301 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0111 19:40:02.165318 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0111 19:40:02.176619 150 round_trippers.go:443] GET https://100.104.0.1:443/api/v1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.176668 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/autoscaling/v1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.176727 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.176771 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.176824 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.176871 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.177409 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.177475 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.177553 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.177702 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.177757 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/batch/v1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.177799 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.177842 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.177893 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/batch/v1beta1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.177933 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.177983 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.178035 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/cert.gardener.cloud/v1alpha1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.178075 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.178125 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.178172 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 17 
milliseconds I0111 19:40:02.178211 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/crd.projectcalico.org/v1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.178272 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.178345 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.178398 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.178440 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/policy/v1beta1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.178490 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 17 milliseconds I0111 19:40:02.178559 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/dns.gardener.cloud/v1alpha1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.178603 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/snapshot.storage.k8s.io/v1alpha1?timeout=32s 200 OK in 18 milliseconds I0111 19:40:02.181993 150 round_trippers.go:443] GET https://100.104.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s 200 OK in 23 milliseconds I0111 19:40:02.456296 150 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:02.463228 150 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:02.472492 150 round_trippers.go:443] GET https://100.104.0.1:443/api/v1/namespaces/kubectl-597/pods?limit=500 200 OK in 8 milliseconds NAME READY STATUS RESTARTS AGE httpd 1/1 Running 0 39s STEP: creating an object containing a namespace with in-cluster config Jan 11 19:40:02.553: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-with-namespace.yaml --v=6 2>&1' Jan 11 19:40:04.196: INFO: rc: 255 STEP: creating an object not containing a namespace with in-cluster config Jan 11 19:40:04.196: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1' Jan 11 19:40:05.987: INFO: rc: 255 STEP: trying to use kubectl with invalid token Jan 11 19:40:05.987: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1' Jan 11 19:40:07.368: INFO: rc: 255 Jan 11 19:40:07.368: INFO: got err error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1] [] I0111 19:40:07.274903 189 merged_client_builder.go:164] Using in-cluster namespace I0111 19:40:07.280227 189 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:07.283681 189 merged_client_builder.go:122] Using in-cluster 
configuration I0111 19:40:07.287596 189 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:07.287934 189 round_trippers.go:420] GET https://100.104.0.1:443/api/v1/namespaces/kubectl-597/pods?limit=500 I0111 19:40:07.287946 189 round_trippers.go:427] Request Headers: I0111 19:40:07.287953 189 round_trippers.go:431] Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json I0111 19:40:07.287960 189 round_trippers.go:431] User-Agent: kubectl/v1.16.4 (linux/amd64) kubernetes/224be7b I0111 19:40:07.287967 189 round_trippers.go:431] Authorization: Bearer I0111 19:40:07.300231 189 round_trippers.go:446] Response Status: 401 Unauthorized in 12 milliseconds I0111 19:40:07.300469 189 helpers.go:199] server response object: [{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "Unauthorized", "reason": "Unauthorized", "code": 401 }] F0111 19:40:07.300489 189 helpers.go:114] error: You must be logged in to the server (Unauthorized) + /tmp/kubectl get pods '--token=invalid' '--v=7' command terminated with exit code 255 [] 0xc0041a5890 exit status 255 true [0xc003224ae0 0xc003224af8 0xc003224b10] [0xc003224ae0 0xc003224af8 0xc003224b10] [0xc003224af0 0xc003224b08] [0x10efe30 0x10efe30] 0xc002c655c0 }: Command stdout: I0111 19:40:07.274903 189 merged_client_builder.go:164] Using in-cluster namespace I0111 19:40:07.280227 189 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:07.283681 189 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:07.287596 189 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:07.287934 189 round_trippers.go:420] GET https://100.104.0.1:443/api/v1/namespaces/kubectl-597/pods?limit=500 I0111 19:40:07.287946 189 round_trippers.go:427] Request Headers: I0111 19:40:07.287953 189 round_trippers.go:431] Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json I0111 19:40:07.287960 189 round_trippers.go:431] User-Agent: kubectl/v1.16.4 (linux/amd64) kubernetes/224be7b I0111 19:40:07.287967 189 round_trippers.go:431] Authorization: Bearer I0111 19:40:07.300231 189 round_trippers.go:446] Response Status: 401 Unauthorized in 12 milliseconds I0111 19:40:07.300469 189 helpers.go:199] server response object: [{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "Unauthorized", "reason": "Unauthorized", "code": 401 }] F0111 19:40:07.300489 189 helpers.go:114] error: You must be logged in to the server (Unauthorized) stderr: + /tmp/kubectl get pods '--token=invalid' '--v=7' command terminated with exit code 255 error: exit status 255 STEP: trying to use kubectl with invalid server Jan 11 19:40:07.368: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1' Jan 11 19:40:13.702: INFO: rc: 255 Jan 11 19:40:13.702: INFO: got err error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1] [] I0111 19:40:08.604693 200 merged_client_builder.go:164] Using in-cluster namespace I0111 19:40:13.621956 200 round_trippers.go:443] GET 
http://invalid/api?timeout=32s in 5016 milliseconds I0111 19:40:13.622090 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.624368 200 round_trippers.go:443] GET http://invalid/api?timeout=32s in 2 milliseconds I0111 19:40:13.624567 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.624611 200 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.626602 200 round_trippers.go:443] GET http://invalid/api?timeout=32s in 1 milliseconds I0111 19:40:13.626678 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.628926 200 round_trippers.go:443] GET http://invalid/api?timeout=32s in 2 milliseconds I0111 19:40:13.629408 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.634128 200 round_trippers.go:443] GET http://invalid/api?timeout=32s in 4 milliseconds I0111 19:40:13.634171 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.634196 200 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host F0111 19:40:13.634219 200 helpers.go:114] Unable to connect to the server: dial tcp: lookup invalid on 100.104.0.10:53: no such host + /tmp/kubectl get pods '--server=invalid' '--v=6' command terminated with exit code 255 [] 0xc003cbdcb0 exit status 255 true [0xc0021f74d8 0xc0021f74f0 0xc0021f7508] [0xc0021f74d8 0xc0021f74f0 0xc0021f7508] [0xc0021f74e8 0xc0021f7500] [0x10efe30 0x10efe30] 0xc003274d80 }: Command stdout: I0111 19:40:08.604693 200 merged_client_builder.go:164] Using in-cluster namespace I0111 19:40:13.621956 200 round_trippers.go:443] GET http://invalid/api?timeout=32s in 5016 milliseconds I0111 19:40:13.622090 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.624368 200 round_trippers.go:443] GET http://invalid/api?timeout=32s in 2 milliseconds I0111 19:40:13.624567 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.624611 200 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.626602 200 round_trippers.go:443] GET http://invalid/api?timeout=32s in 1 milliseconds I0111 19:40:13.626678 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.628926 200 round_trippers.go:443] GET http://invalid/api?timeout=32s in 2 milliseconds I0111 19:40:13.629408 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.634128 200 
round_trippers.go:443] GET http://invalid/api?timeout=32s in 4 milliseconds I0111 19:40:13.634171 200 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host I0111 19:40:13.634196 200 helpers.go:217] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.104.0.10:53: no such host F0111 19:40:13.634219 200 helpers.go:114] Unable to connect to the server: dial tcp: lookup invalid on 100.104.0.10:53: no such host stderr: + /tmp/kubectl get pods '--server=invalid' '--v=6' command terminated with exit code 255 error: exit status 255 STEP: trying to use kubectl with invalid namespace Jan 11 19:40:13.702: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1' Jan 11 19:40:15.293: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n" Jan 11 19:40:15.293: INFO: stdout: "I0111 19:40:15.119014 212 merged_client_builder.go:122] Using in-cluster configuration\nI0111 19:40:15.123967 212 merged_client_builder.go:122] Using in-cluster configuration\nI0111 19:40:15.149186 212 merged_client_builder.go:122] Using in-cluster configuration\nI0111 19:40:15.196015 212 round_trippers.go:443] GET https://100.104.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 46 milliseconds\nNo resources found in invalid namespace.\n" Jan 11 19:40:15.293: INFO: stdout: I0111 19:40:15.119014 212 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:15.123967 212 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:15.149186 212 merged_client_builder.go:122] Using in-cluster configuration I0111 19:40:15.196015 212 round_trippers.go:443] GET https://100.104.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 46 milliseconds No resources found in invalid namespace. 
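The repeated "Using in-cluster configuration" lines above are kubectl inside the pod falling back to the mounted service account credentials plus the KUBERNETES_SERVICE_HOST/KUBERNETES_SERVICE_PORT values printed earlier (100.104.0.1:443), and "Using in-cluster namespace" reads the namespace from the same mount. The following is a hedged sketch of that mechanism with client-go, assuming a recent client-go release (context-taking List signature); it is illustrative, not the test's own code.

// Minimal sketch of in-cluster client construction, for illustration only.
package main

import (
	"context"
	"fmt"
	"os"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Reads KUBERNETES_SERVICE_HOST/PORT and the mounted token + CA.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	// The "in-cluster namespace" comes from the service account mount.
	nsBytes, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace")
	if err != nil {
		panic(err)
	}
	ns := strings.TrimSpace(string(nsBytes))

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of the GET .../namespaces/<ns>/pods?limit=500 seen above.
	pods, err := clientset.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}

The failure cases above break in exactly the places this sketch would: an invalid token still reaches the API server and comes back as 401 Unauthorized, while an invalid server name never gets past DNS resolution (lookup invalid on 100.104.0.10:53).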
STEP: trying to use kubectl with kubeconfig Jan 11 19:40:15.293: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-597 httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1' Jan 11 19:40:16.904: INFO: stderr: "+ /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'\n" Jan 11 19:40:16.904: INFO: stdout: "I0111 19:40:16.628357 223 loader.go:375] Config loaded from file: /tmp/icc-override.kubeconfig\nI0111 19:40:16.645421 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s 200 OK in 16 milliseconds\nI0111 19:40:16.652908 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis?timeout=32s 200 OK in 2 milliseconds\nI0111 19:40:16.660577 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 2 milliseconds\nI0111 19:40:16.660887 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds\nI0111 19:40:16.661018 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds\nI0111 19:40:16.661309 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/extensions/v1beta1?timeout=32s 200 OK in 2 milliseconds\nI0111 19:40:16.661498 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds\nI0111 19:40:16.661606 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/api/v1?timeout=32s 200 OK in 3 milliseconds\nI0111 19:40:16.661917 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds\nI0111 19:40:16.661939 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds\nI0111 19:40:16.662755 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/batch/v1beta1?timeout=32s 200 OK in 2 milliseconds\nI0111 19:40:16.662970 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663179 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663230 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663260 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663535 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663560 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663589 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663634 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663660 223 
round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663687 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apps/v1?timeout=32s 200 OK in 5 milliseconds\nI0111 19:40:16.663703 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/batch/v1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.663731 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664015 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/crd.projectcalico.org/v1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664126 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664155 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664194 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664232 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664389 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/dns.gardener.cloud/v1alpha1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664441 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664503 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664610 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds\nI0111 19:40:16.664680 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/snapshot.storage.k8s.io/v1alpha1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664736 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/cert.gardener.cloud/v1alpha1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664795 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/policy/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.664919 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0111 19:40:16.665034 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds\nI0111 19:40:16.665147 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/autoscaling/v1?timeout=32s 200 OK in 5 milliseconds\nI0111 19:40:16.669772 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/metrics.k8s.io/v1beta1?timeout=32s 200 OK in 11 milliseconds\nI0111 19:40:16.835215 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/api/v1/namespaces/default/pods?limit=500 200 OK in 3 milliseconds\nNo resources found in default namespace.\n" Jan 11 19:40:16.905: INFO: stdout: I0111 19:40:16.628357 223 loader.go:375] Config loaded from file: /tmp/icc-override.kubeconfig I0111 19:40:16.645421 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/api?timeout=32s 200 OK in 16 milliseconds I0111 19:40:16.652908 223 
round_trippers.go:443] GET https://kubernetes.default.svc:443/apis?timeout=32s 200 OK in 2 milliseconds I0111 19:40:16.660577 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 2 milliseconds I0111 19:40:16.660887 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds I0111 19:40:16.661018 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds I0111 19:40:16.661309 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/extensions/v1beta1?timeout=32s 200 OK in 2 milliseconds I0111 19:40:16.661498 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds I0111 19:40:16.661606 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/api/v1?timeout=32s 200 OK in 3 milliseconds I0111 19:40:16.661917 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds I0111 19:40:16.661939 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds I0111 19:40:16.662755 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/batch/v1beta1?timeout=32s 200 OK in 2 milliseconds I0111 19:40:16.662970 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663179 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663230 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663260 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663535 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663560 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663589 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663634 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663660 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663687 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apps/v1?timeout=32s 200 OK in 5 milliseconds I0111 19:40:16.663703 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/batch/v1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.663731 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664015 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/crd.projectcalico.org/v1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664126 223 round_trippers.go:443] GET 
https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664155 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664194 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664232 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664389 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/dns.gardener.cloud/v1alpha1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664441 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664503 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664610 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0111 19:40:16.664680 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/snapshot.storage.k8s.io/v1alpha1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664736 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/cert.gardener.cloud/v1alpha1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664795 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/policy/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.664919 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0111 19:40:16.665034 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0111 19:40:16.665147 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/autoscaling/v1?timeout=32s 200 OK in 5 milliseconds I0111 19:40:16.669772 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/apis/metrics.k8s.io/v1beta1?timeout=32s 200 OK in 11 milliseconds I0111 19:40:16.835215 223 round_trippers.go:443] GET https://kubernetes.default.svc:443/api/v1/namespaces/default/pods?limit=500 200 OK in 3 milliseconds No resources found in default namespace. [AfterEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:377 STEP: using delete to clean up resources Jan 11 19:40:16.905: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-597' Jan 11 19:40:17.425: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:40:17.425: INFO: stdout: "pod \"httpd\" force deleted\n" Jan 11 19:40:17.425: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=httpd --no-headers --namespace=kubectl-597' Jan 11 19:40:17.963: INFO: stderr: "No resources found in kubectl-597 namespace.\n" Jan 11 19:40:17.963: INFO: stdout: "" Jan 11 19:40:17.963: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=httpd --namespace=kubectl-597 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 19:40:18.435: INFO: stderr: "" Jan 11 19:40:18.435: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:18.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-597" for this suite. Jan 11 19:40:24.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:28.097: INFO: namespace kubectl-597 deletion completed in 9.571575085s • [SLOW TEST:66.147 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369 should handle in-cluster config /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:613 ------------------------------ SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:19.359: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-16 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should reuse port when apply to an existing SVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:781 STEP: creating Redis SVC Jan 11 19:40:19.999: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-16' Jan 11 19:40:21.021: INFO: stderr: "" Jan 11 19:40:21.021: INFO: stdout: "service/redis-master created\n" STEP: getting the original port Jan 11 19:40:21.021: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get service redis-master --namespace=kubectl-16 -o jsonpath={.spec.ports[0].port}' Jan 11 19:40:21.488: INFO: stderr: "" Jan 11 19:40:21.488: INFO: stdout: "6379" STEP: applying the same configuration Jan 11 19:40:21.488: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config apply -f - --namespace=kubectl-16' Jan 11 19:40:22.644: INFO: stderr: "Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\n" Jan 11 19:40:22.644: INFO: stdout: "service/redis-master configured\n" STEP: getting the port after applying configuration Jan 11 19:40:22.644: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get service redis-master --namespace=kubectl-16 -o jsonpath={.spec.ports[0].port}' Jan 11 19:40:23.096: INFO: stderr: "" Jan 11 19:40:23.096: INFO: stdout: "6379" STEP: checking the result [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:23.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-16" for this suite. Jan 11 19:40:29.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:32.775: INFO: namespace kubectl-16 deletion completed in 9.587472877s • [SLOW TEST:13.416 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl apply /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:766 should reuse port when apply to an existing SVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:781 ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:39:24.434: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volumeio STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volumeio-2392 STEP: Waiting for a default service account to be provisioned in namespace [It] should write files of various sizes, verify size, validate content [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:137 Jan 11 19:39:25.074: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:39:25.257: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volumeio-2392" in 
namespace "volumeio-2392" to be "success or failure" Jan 11 19:39:25.346: INFO: Pod "hostpath-symlink-prep-volumeio-2392": Phase="Pending", Reason="", readiness=false. Elapsed: 89.386693ms Jan 11 19:39:27.436: INFO: Pod "hostpath-symlink-prep-volumeio-2392": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179583714s STEP: Saw pod success Jan 11 19:39:27.436: INFO: Pod "hostpath-symlink-prep-volumeio-2392" satisfied condition "success or failure" Jan 11 19:39:27.436: INFO: Deleting pod "hostpath-symlink-prep-volumeio-2392" in namespace "volumeio-2392" Jan 11 19:39:27.530: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volumeio-2392" to be fully deleted Jan 11 19:39:27.619: INFO: Creating resource for inline volume STEP: starting hostpathsymlink-io-client STEP: writing 1048576 bytes to test file /opt/hostPathSymlink_io_test_volumeio-2392-1048576 Jan 11 19:39:31.891: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-2392 hostpathsymlink-io-client -- /bin/sh -c i=0; while [ $i -lt 1 ]; do dd if=/opt/hostpathsymlink-volumeio-2392-dd_if bs=1048576 >>/opt/hostPathSymlink_io_test_volumeio-2392-1048576 2>/dev/null; let i+=1; done' Jan 11 19:39:33.265: INFO: stderr: "" Jan 11 19:39:33.265: INFO: stdout: "" STEP: verifying file size Jan 11 19:39:33.265: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-2392 hostpathsymlink-io-client -- /bin/sh -c stat -c %s /opt/hostPathSymlink_io_test_volumeio-2392-1048576' Jan 11 19:39:34.631: INFO: stderr: "" Jan 11 19:39:34.631: INFO: stdout: "1048576\n" STEP: verifying file hash Jan 11 19:39:34.631: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-2392 hostpathsymlink-io-client -- /bin/sh -c md5sum /opt/hostPathSymlink_io_test_volumeio-2392-1048576 | cut -d' ' -f1' Jan 11 19:39:35.949: INFO: stderr: "" Jan 11 19:39:35.949: INFO: stdout: "5c34c2813223a7ca05a3c2f38c0d1710\n" STEP: writing 104857600 bytes to test file /opt/hostPathSymlink_io_test_volumeio-2392-104857600 Jan 11 19:39:35.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-2392 hostpathsymlink-io-client -- /bin/sh -c i=0; while [ $i -lt 100 ]; do dd if=/opt/hostpathsymlink-volumeio-2392-dd_if bs=1048576 >>/opt/hostPathSymlink_io_test_volumeio-2392-104857600 2>/dev/null; let i+=1; done' Jan 11 19:39:37.971: INFO: stderr: "" Jan 11 19:39:37.971: INFO: stdout: "" STEP: verifying file size Jan 11 19:39:37.971: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-2392 hostpathsymlink-io-client -- /bin/sh -c stat -c %s /opt/hostPathSymlink_io_test_volumeio-2392-104857600' Jan 11 19:39:39.309: INFO: stderr: "" Jan 11 19:39:39.309: INFO: stdout: "104857600\n" STEP: verifying file hash Jan 11 19:39:39.310: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-2392 hostpathsymlink-io-client -- /bin/sh -c md5sum /opt/hostPathSymlink_io_test_volumeio-2392-104857600 | cut -d' ' -f1' Jan 11 19:39:40.844: INFO: stderr: "" Jan 11 19:39:40.844: INFO: stdout: "f2fa202b1ffeedda5f3a58bd1ae81104\n" STEP: deleting test file /opt/hostPathSymlink_io_test_volumeio-2392-104857600... Jan 11 19:39:40.844: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-2392 hostpathsymlink-io-client -- /bin/sh -c rm -f /opt/hostPathSymlink_io_test_volumeio-2392-104857600' Jan 11 19:39:42.201: INFO: stderr: "" Jan 11 19:39:42.201: INFO: stdout: "" STEP: deleting test file /opt/hostPathSymlink_io_test_volumeio-2392-1048576... Jan 11 19:39:42.201: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-2392 hostpathsymlink-io-client -- /bin/sh -c rm -f /opt/hostPathSymlink_io_test_volumeio-2392-1048576' Jan 11 19:39:43.595: INFO: stderr: "" Jan 11 19:39:43.596: INFO: stdout: "" STEP: deleting test file /opt/hostpathsymlink-volumeio-2392-dd_if... Jan 11 19:39:43.596: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-2392 hostpathsymlink-io-client -- /bin/sh -c rm -f /opt/hostpathsymlink-volumeio-2392-dd_if' Jan 11 19:39:44.912: INFO: stderr: "" Jan 11 19:39:44.912: INFO: stdout: "" STEP: deleting client pod "hostpathsymlink-io-client"... Jan 11 19:39:44.912: INFO: Deleting pod "hostpathsymlink-io-client" in namespace "volumeio-2392" Jan 11 19:39:45.004: INFO: Wait up to 5m0s for pod "hostpathsymlink-io-client" to be fully deleted Jan 11 19:39:59.184: INFO: sleeping a bit so kubelet can unmount and detach the volume Jan 11 19:40:19.277: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volumeio-2392" in namespace "volumeio-2392" to be "success or failure" Jan 11 19:40:19.367: INFO: Pod "hostpath-symlink-prep-volumeio-2392": Phase="Pending", Reason="", readiness=false. Elapsed: 90.011105ms Jan 11 19:40:21.458: INFO: Pod "hostpath-symlink-prep-volumeio-2392": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180259408s Jan 11 19:40:23.548: INFO: Pod "hostpath-symlink-prep-volumeio-2392": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.270647678s STEP: Saw pod success Jan 11 19:40:23.548: INFO: Pod "hostpath-symlink-prep-volumeio-2392" satisfied condition "success or failure" Jan 11 19:40:23.548: INFO: Deleting pod "hostpath-symlink-prep-volumeio-2392" in namespace "volumeio-2392" Jan 11 19:40:23.648: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volumeio-2392" to be fully deleted Jan 11 19:40:23.737: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:23.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volumeio-2392" for this suite. 
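Editor's note: the volumeIO spec above verifies each write with the same three shell tools it uses to produce it (dd, stat, md5sum). The following sketch replays that verification for the 1 MiB case by hand, reusing the pod, namespace, file paths and kubeconfig from this run; it is an illustrative reproduction of the logged exec commands, not part of the suite.

    # Append the 1 MiB source block once into the mounted volume, mirroring the test's exec command.
    kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config -n volumeio-2392 exec hostpathsymlink-io-client -- \
      /bin/sh -c 'i=0; while [ $i -lt 1 ]; do dd if=/opt/hostpathsymlink-volumeio-2392-dd_if bs=1048576 >> /opt/hostPathSymlink_io_test_volumeio-2392-1048576 2>/dev/null; i=$((i+1)); done'
    # Size check: expect exactly 1048576 bytes.
    kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config -n volumeio-2392 exec hostpathsymlink-io-client -- \
      stat -c %s /opt/hostPathSymlink_io_test_volumeio-2392-1048576
    # Content check: the md5 must match the source block's hash (5c34c2813223a7ca05a3c2f38c0d1710 in this run).
    kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config -n volumeio-2392 exec hostpathsymlink-io-client -- \
      /bin/sh -c 'md5sum /opt/hostPathSymlink_io_test_volumeio-2392-1048576 | cut -d" " -f1'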
Jan 11 19:40:30.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:33.422: INFO: namespace volumeio-2392 deletion completed in 9.593353529s • [SLOW TEST:68.987 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should write files of various sizes, verify size, validate content [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:137 ------------------------------ SSSSS ------------------------------ [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:79 [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:38:50.513: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename ephemeral STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ephemeral-1641 STEP: Waiting for a default service account to be provisioned in namespace [It] should support two pods which share the same volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:140 STEP: deploying csi-hostpath driver Jan 11 19:38:51.643: INFO: creating *v1.ServiceAccount: ephemeral-1641/csi-attacher Jan 11 19:38:51.733: INFO: creating *v1.ClusterRole: external-attacher-runner-ephemeral-1641 Jan 11 19:38:51.733: INFO: Define cluster role external-attacher-runner-ephemeral-1641 Jan 11 19:38:51.824: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-1641 Jan 11 19:38:51.914: INFO: creating *v1.Role: ephemeral-1641/external-attacher-cfg-ephemeral-1641 Jan 11 19:38:52.004: INFO: creating *v1.RoleBinding: ephemeral-1641/csi-attacher-role-cfg Jan 11 19:38:52.094: INFO: creating *v1.ServiceAccount: ephemeral-1641/csi-provisioner Jan 11 19:38:52.184: INFO: creating *v1.ClusterRole: external-provisioner-runner-ephemeral-1641 Jan 11 19:38:52.184: INFO: Define cluster role external-provisioner-runner-ephemeral-1641 Jan 11 19:38:52.275: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-1641 Jan 11 19:38:52.365: INFO: creating *v1.Role: ephemeral-1641/external-provisioner-cfg-ephemeral-1641 Jan 11 19:38:52.456: INFO: creating *v1.RoleBinding: ephemeral-1641/csi-provisioner-role-cfg Jan 11 19:38:52.546: INFO: creating *v1.ServiceAccount: 
ephemeral-1641/csi-snapshotter Jan 11 19:38:52.636: INFO: creating *v1.ClusterRole: external-snapshotter-runner-ephemeral-1641 Jan 11 19:38:52.636: INFO: Define cluster role external-snapshotter-runner-ephemeral-1641 Jan 11 19:38:52.727: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-1641 Jan 11 19:38:52.816: INFO: creating *v1.Role: ephemeral-1641/external-snapshotter-leaderelection-ephemeral-1641 Jan 11 19:38:52.906: INFO: creating *v1.RoleBinding: ephemeral-1641/external-snapshotter-leaderelection Jan 11 19:38:52.997: INFO: creating *v1.ServiceAccount: ephemeral-1641/csi-resizer Jan 11 19:38:53.086: INFO: creating *v1.ClusterRole: external-resizer-runner-ephemeral-1641 Jan 11 19:38:53.086: INFO: Define cluster role external-resizer-runner-ephemeral-1641 Jan 11 19:38:53.176: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-1641 Jan 11 19:38:53.266: INFO: creating *v1.Role: ephemeral-1641/external-resizer-cfg-ephemeral-1641 Jan 11 19:38:53.357: INFO: creating *v1.RoleBinding: ephemeral-1641/csi-resizer-role-cfg Jan 11 19:38:53.447: INFO: creating *v1.Service: ephemeral-1641/csi-hostpath-attacher Jan 11 19:38:53.541: INFO: creating *v1.StatefulSet: ephemeral-1641/csi-hostpath-attacher Jan 11 19:38:53.632: INFO: creating *v1beta1.CSIDriver: csi-hostpath-ephemeral-1641 Jan 11 19:38:53.722: INFO: creating *v1.Service: ephemeral-1641/csi-hostpathplugin Jan 11 19:38:53.816: INFO: creating *v1.StatefulSet: ephemeral-1641/csi-hostpathplugin Jan 11 19:38:53.907: INFO: creating *v1.Service: ephemeral-1641/csi-hostpath-provisioner Jan 11 19:38:54.000: INFO: creating *v1.StatefulSet: ephemeral-1641/csi-hostpath-provisioner Jan 11 19:38:54.091: INFO: creating *v1.Service: ephemeral-1641/csi-hostpath-resizer Jan 11 19:38:54.188: INFO: creating *v1.StatefulSet: ephemeral-1641/csi-hostpath-resizer Jan 11 19:38:54.278: INFO: creating *v1.Service: ephemeral-1641/csi-snapshotter Jan 11 19:38:54.371: INFO: creating *v1.StatefulSet: ephemeral-1641/csi-snapshotter Jan 11 19:38:54.461: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-1641 STEP: checking the requested inline volume exists in the pod running on node {Name:ip-10-250-27-25.ec2.internal Selector:map[] Affinity:nil} STEP: writing data in one pod and checking for it in the second Jan 11 19:39:15.186: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=ephemeral-1641 inline-volume-tester-vzj2s -- /bin/sh -c touch /mnt/test-0/hello-world' Jan 11 19:39:16.832: INFO: stderr: "" Jan 11 19:39:16.832: INFO: stdout: "" Jan 11 19:39:16.832: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=ephemeral-1641 inline-volume-tester2-p7jn5 -- /bin/sh -c [ ! 
-f /mnt/test-0/hello-world ]' Jan 11 19:39:18.215: INFO: stderr: "" Jan 11 19:39:18.215: INFO: stdout: "" Jan 11 19:39:18.447: INFO: Pod inline-volume-tester2-p7jn5 has the following logs: STEP: Deleting pod inline-volume-tester2-p7jn5 in namespace ephemeral-1641 Jan 11 19:39:50.830: INFO: Pod inline-volume-tester-vzj2s has the following logs: /dev/nvme0n1p9 on /mnt/test-0 type ext4 (rw,seclabel,relatime) STEP: Deleting pod inline-volume-tester-vzj2s in namespace ephemeral-1641 STEP: uninstalling csi-hostpath driver Jan 11 19:40:23.102: INFO: deleting *v1.ServiceAccount: ephemeral-1641/csi-attacher Jan 11 19:40:23.193: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-1641 Jan 11 19:40:23.284: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-1641 Jan 11 19:40:23.376: INFO: deleting *v1.Role: ephemeral-1641/external-attacher-cfg-ephemeral-1641 Jan 11 19:40:23.468: INFO: deleting *v1.RoleBinding: ephemeral-1641/csi-attacher-role-cfg Jan 11 19:40:23.559: INFO: deleting *v1.ServiceAccount: ephemeral-1641/csi-provisioner Jan 11 19:40:23.650: INFO: deleting *v1.ClusterRole: external-provisioner-runner-ephemeral-1641 Jan 11 19:40:23.741: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-1641 Jan 11 19:40:23.832: INFO: deleting *v1.Role: ephemeral-1641/external-provisioner-cfg-ephemeral-1641 Jan 11 19:40:23.922: INFO: deleting *v1.RoleBinding: ephemeral-1641/csi-provisioner-role-cfg Jan 11 19:40:24.014: INFO: deleting *v1.ServiceAccount: ephemeral-1641/csi-snapshotter Jan 11 19:40:24.105: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-ephemeral-1641 Jan 11 19:40:24.196: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-1641 Jan 11 19:40:24.287: INFO: deleting *v1.Role: ephemeral-1641/external-snapshotter-leaderelection-ephemeral-1641 Jan 11 19:40:24.378: INFO: deleting *v1.RoleBinding: ephemeral-1641/external-snapshotter-leaderelection Jan 11 19:40:24.469: INFO: deleting *v1.ServiceAccount: ephemeral-1641/csi-resizer Jan 11 19:40:24.560: INFO: deleting *v1.ClusterRole: external-resizer-runner-ephemeral-1641 Jan 11 19:40:24.651: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-1641 Jan 11 19:40:24.742: INFO: deleting *v1.Role: ephemeral-1641/external-resizer-cfg-ephemeral-1641 Jan 11 19:40:24.833: INFO: deleting *v1.RoleBinding: ephemeral-1641/csi-resizer-role-cfg Jan 11 19:40:24.924: INFO: deleting *v1.Service: ephemeral-1641/csi-hostpath-attacher Jan 11 19:40:25.019: INFO: deleting *v1.StatefulSet: ephemeral-1641/csi-hostpath-attacher Jan 11 19:40:25.110: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-ephemeral-1641 Jan 11 19:40:25.203: INFO: deleting *v1.Service: ephemeral-1641/csi-hostpathplugin Jan 11 19:40:25.301: INFO: deleting *v1.StatefulSet: ephemeral-1641/csi-hostpathplugin Jan 11 19:40:25.392: INFO: deleting *v1.Service: ephemeral-1641/csi-hostpath-provisioner Jan 11 19:40:25.487: INFO: deleting *v1.StatefulSet: ephemeral-1641/csi-hostpath-provisioner Jan 11 19:40:25.580: INFO: deleting *v1.Service: ephemeral-1641/csi-hostpath-resizer Jan 11 19:40:25.676: INFO: deleting *v1.StatefulSet: ephemeral-1641/csi-hostpath-resizer Jan 11 19:40:25.767: INFO: deleting *v1.Service: ephemeral-1641/csi-snapshotter Jan 11 19:40:25.861: INFO: deleting *v1.StatefulSet: ephemeral-1641/csi-snapshotter Jan 11 19:40:25.952: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-1641 [AfterEach] [Testpattern: inline ephemeral CSI volume] ephemeral 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:26.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ephemeral-1641" for this suite. Jan 11 19:40:38.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:41.726: INFO: namespace ephemeral-1641 deletion completed in 15.592955661s • [SLOW TEST:111.213 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support two pods which share the same volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:140 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:16.074: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-6324 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:40:19.298: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6324 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ff5ba419-bdcc-4a60-b8b2-1a39ca8a2651-backend && mount --bind /tmp/local-volume-test-ff5ba419-bdcc-4a60-b8b2-1a39ca8a2651-backend /tmp/local-volume-test-ff5ba419-bdcc-4a60-b8b2-1a39ca8a2651-backend && ln -s /tmp/local-volume-test-ff5ba419-bdcc-4a60-b8b2-1a39ca8a2651-backend /tmp/local-volume-test-ff5ba419-bdcc-4a60-b8b2-1a39ca8a2651' Jan 11 19:40:20.667: INFO: stderr: "" Jan 11 19:40:20.667: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:40:20.667: INFO: Creating a PV followed by a PVC Jan 11 19:40:20.846: INFO: Waiting for PV local-pv6vw58 to bind to PVC pvc-bgjcm Jan 11 19:40:20.846: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-bgjcm] to have phase Bound Jan 11 19:40:20.937: INFO: PersistentVolumeClaim pvc-bgjcm 
found and phase=Bound (90.638066ms) Jan 11 19:40:20.937: INFO: Waiting up to 3m0s for PersistentVolume local-pv6vw58 to have phase Bound Jan 11 19:40:21.027: INFO: PersistentVolume local-pv6vw58 found and phase=Bound (89.965104ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jan 11 19:40:23.566: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-07b19580-411e-4595-875c-60f96e7367f5 --namespace=persistent-local-volumes-test-6324 -- stat -c %g /mnt/volume1' Jan 11 19:40:24.910: INFO: stderr: "" Jan 11 19:40:24.910: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod security-context-07b19580-411e-4595-875c-60f96e7367f5 in namespace persistent-local-volumes-test-6324 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:40:25.001: INFO: Deleting PersistentVolumeClaim "pvc-bgjcm" Jan 11 19:40:25.091: INFO: Deleting PersistentVolume "local-pv6vw58" STEP: Removing the test directory Jan 11 19:40:25.181: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6324 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-ff5ba419-bdcc-4a60-b8b2-1a39ca8a2651 && umount /tmp/local-volume-test-ff5ba419-bdcc-4a60-b8b2-1a39ca8a2651-backend && rm -r /tmp/local-volume-test-ff5ba419-bdcc-4a60-b8b2-1a39ca8a2651-backend' Jan 11 19:40:26.827: INFO: stderr: "" Jan 11 19:40:26.827: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:26.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6324" for this suite. 
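Editor's note: the dir-link-bindmounted volume type above is prepared with ordinary shell commands on the node (the test wraps them in nsenter via a hostexec pod), and the fsGroup assertion is a single stat call inside the consuming pod. A condensed sketch of both steps; the directory names are placeholders for the generated /tmp/local-volume-test-... paths from this run.

    # On the node: create a backing dir, bind-mount it onto itself, expose it through a symlink.
    BACKEND=/tmp/local-volume-demo-backend     # placeholder for the generated backend path
    LINK=/tmp/local-volume-demo                # the symlink the local PV points at
    mkdir "$BACKEND" && mount --bind "$BACKEND" "$BACKEND" && ln -s "$BACKEND" "$LINK"

    # In the pod: the group owner of the mount point must equal the pod's fsGroup (1234 in this run).
    kubectl -n persistent-local-volumes-test-6324 exec security-context-07b19580-411e-4595-875c-60f96e7367f5 -- \
      stat -c %g /mnt/volume1

    # Teardown mirrors the logged cleanup: drop the symlink, unmount, remove the backing dir.
    rm "$LINK" && umount "$BACKEND" && rm -r "$BACKEND"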
Jan 11 19:40:39.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:42.577: INFO: namespace persistent-local-volumes-test-6324 deletion completed in 15.56842443s • [SLOW TEST:26.502 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:32.776: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename dns STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-58 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-58.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-58.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-58.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-58.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-58.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-58.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 19:40:36.660: INFO: DNS probes using dns-58/dns-test-1342ef4b-bf06-4e90-bb08-c1c6163e5ade succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:36.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-58" for this suite. Jan 11 19:40:43.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:46.431: INFO: namespace dns-58 deletion completed in 9.585997096s • [SLOW TEST:13.655 seconds] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:03.215: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1428 STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if deleteOptions.OrphanDependents is nil /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:434 STEP: create the rc STEP: delete the rc STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0111 19:40:39.301704 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 11 19:40:39.301: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:39.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1428" for this suite. Jan 11 19:40:45.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:49.065: INFO: namespace gc-1428 deletion completed in 9.673387875s • [SLOW TEST:45.850 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if deleteOptions.OrphanDependents is nil /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:434 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:28.107: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3650 STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-upd-b4801f01-a51d-4101-8f5f-e34ca52791a1 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:31.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3650" for this suite. 
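Editor's note: the ConfigMap spec above only logs its high-level steps, so here is a minimal hand-written equivalent of what it exercises: a ConfigMap carrying both a text key (data) and a binary key (binaryData), which a configMap volume then surfaces as files. The object name binary-demo and the sample bytes are placeholders, not objects from this run.

    # data holds UTF-8 strings; binaryData holds arbitrary bytes, base64-encoded in the manifest.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: binary-demo
    data:
      text-key: "hello"
    binaryData:
      bin-key: "3q2+7w=="     # base64 for the raw bytes 0xDE 0xAD 0xBE 0xEF
    EOF
    # Mounted as a configMap volume, both keys appear as files and bin-key contains
    # the decoded bytes verbatim, which is what the spec above asserts.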
Jan 11 19:40:46.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:40:49.437: INFO: namespace configmap-3650 deletion completed in 17.660548936s • [SLOW TEST:21.330 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:49.073: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5217 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-2a24ec83-c7b2-46aa-8561-f23f0c978711 STEP: Creating a pod to test consume secrets Jan 11 19:40:49.890: INFO: Waiting up to 5m0s for pod "pod-secrets-40b623e5-5e7e-4be5-a09b-b25d1f6e942a" in namespace "secrets-5217" to be "success or failure" Jan 11 19:40:49.980: INFO: Pod "pod-secrets-40b623e5-5e7e-4be5-a09b-b25d1f6e942a": Phase="Pending", Reason="", readiness=false. Elapsed: 89.356585ms Jan 11 19:40:52.070: INFO: Pod "pod-secrets-40b623e5-5e7e-4be5-a09b-b25d1f6e942a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179243916s STEP: Saw pod success Jan 11 19:40:52.070: INFO: Pod "pod-secrets-40b623e5-5e7e-4be5-a09b-b25d1f6e942a" satisfied condition "success or failure" Jan 11 19:40:52.159: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-secrets-40b623e5-5e7e-4be5-a09b-b25d1f6e942a container secret-env-test: STEP: delete the pod Jan 11 19:40:52.349: INFO: Waiting for pod pod-secrets-40b623e5-5e7e-4be5-a09b-b25d1f6e942a to disappear Jan 11 19:40:52.438: INFO: Pod pod-secrets-40b623e5-5e7e-4be5-a09b-b25d1f6e942a no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:52.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5217" for this suite. 
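Editor's note: the Secrets spec above boils down to a secretKeyRef in the pod's env section. The sketch below shows the same wiring with placeholder names (secret-demo, env-demo); it is not the generated objects from this run.

    kubectl create secret generic secret-demo --from-literal=password=s3cr3t
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "echo $SECRET_PASSWORD"]   # prints the injected value once
        env:
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: secret-demo
              key: password
    EOF
    kubectl logs env-demo      # prints "s3cr3t" once the pod has completed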
Jan 11 19:40:58.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:02.212: INFO: namespace secrets-5217 deletion completed in 9.683313723s • [SLOW TEST:13.139 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:49.444: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7115 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl copy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1233 STEP: creating the pod Jan 11 19:40:50.081: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-7115' Jan 11 19:40:51.025: INFO: stderr: "" Jan 11 19:40:51.025: INFO: stdout: "pod/busybox1 created\n" Jan 11 19:40:51.025: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [busybox1] Jan 11 19:40:51.025: INFO: Waiting up to 5m0s for pod "busybox1" in namespace "kubectl-7115" to be "running and ready" Jan 11 19:40:51.115: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 89.290697ms Jan 11 19:40:53.204: INFO: Pod "busybox1": Phase="Running", Reason="", readiness=true. Elapsed: 2.179022202s Jan 11 19:40:53.204: INFO: Pod "busybox1" satisfied condition "running and ready" Jan 11 19:40:53.204: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [busybox1] [It] should copy a file from a running Pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1249 STEP: specifying a remote filepath busybox1:/root/foo/bar/foo.bar on the pod Jan 11 19:40:53.205: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config cp busybox1:/root/foo/bar/foo.bar /tmp/copy-foobar612511520 --namespace=kubectl-7115' Jan 11 19:40:54.471: INFO: stderr: "" Jan 11 19:40:54.471: INFO: stdout: "tar: removing leading '/' from member names\n" STEP: verifying that the contents of the remote file busybox1:/root/foo/bar/foo.bar have been copied to a local file /tmp/copy-foobar612511520 [AfterEach] Kubectl copy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1240 STEP: using delete to clean up resources Jan 11 19:40:54.471: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-7115' Jan 11 19:40:54.979: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:40:54.979: INFO: stdout: "pod \"busybox1\" force deleted\n" Jan 11 19:40:54.979: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l app=busybox1 --no-headers --namespace=kubectl-7115' Jan 11 19:40:55.499: INFO: stderr: "No resources found in kubectl-7115 namespace.\n" Jan 11 19:40:55.499: INFO: stdout: "" Jan 11 19:40:55.499: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l app=busybox1 --namespace=kubectl-7115 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 19:40:55.919: INFO: stderr: "" Jan 11 19:40:55.919: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:55.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7115" for this suite. 
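Editor's note: the copy step above is plain `kubectl cp`, which streams a tar archive through `kubectl exec` under the hood; that is why the logged stdout carries tar's "removing leading '/' from member names" message. A standalone sketch with a placeholder local destination path:

    # Copy a single file out of the running pod busybox1; the path after ':' is inside the container.
    kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config -n kubectl-7115 \
      cp busybox1:/root/foo/bar/foo.bar /tmp/foo.bar
    # Note: kubectl cp requires a tar binary inside the target container image.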
Jan 11 19:41:02.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:05.678: INFO: namespace kubectl-7115 deletion completed in 9.668609954s • [SLOW TEST:16.234 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl copy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1230 should copy a file from a running Pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1249 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:33.431: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-1062 STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:263 STEP: deploying csi mock driver Jan 11 19:40:34.263: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1062/csi-attacher Jan 11 19:40:34.353: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1062 Jan 11 19:40:34.353: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1062 Jan 11 19:40:34.443: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1062 Jan 11 19:40:34.534: INFO: creating *v1.Role: csi-mock-volumes-1062/external-attacher-cfg-csi-mock-volumes-1062 Jan 11 19:40:34.624: INFO: creating *v1.RoleBinding: csi-mock-volumes-1062/csi-attacher-role-cfg Jan 11 19:40:34.714: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1062/csi-provisioner Jan 11 19:40:34.804: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1062 Jan 11 19:40:34.804: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1062 Jan 11 19:40:34.894: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1062 Jan 11 19:40:34.984: INFO: creating *v1.Role: csi-mock-volumes-1062/external-provisioner-cfg-csi-mock-volumes-1062 Jan 11 19:40:35.074: INFO: creating *v1.RoleBinding: csi-mock-volumes-1062/csi-provisioner-role-cfg Jan 11 19:40:35.164: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1062/csi-resizer Jan 11 19:40:35.254: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1062 Jan 11 19:40:35.254: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1062 Jan 11 19:40:35.348: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1062 Jan 11 19:40:35.438: INFO: creating *v1.Role: csi-mock-volumes-1062/external-resizer-cfg-csi-mock-volumes-1062 Jan 11 19:40:35.529: INFO: creating *v1.RoleBinding: csi-mock-volumes-1062/csi-resizer-role-cfg Jan 11 19:40:35.619: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1062/csi-mock Jan 11 19:40:35.709: INFO: creating 
*v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1062 Jan 11 19:40:35.799: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1062 Jan 11 19:40:35.889: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1062 Jan 11 19:40:35.979: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1062 Jan 11 19:40:36.069: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1062 Jan 11 19:40:36.159: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1062 Jan 11 19:40:36.249: INFO: creating *v1.StatefulSet: csi-mock-volumes-1062/csi-mockplugin Jan 11 19:40:36.340: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-1062 Jan 11 19:40:36.430: INFO: creating *v1.StatefulSet: csi-mock-volumes-1062/csi-mockplugin-attacher Jan 11 19:40:36.520: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1062" STEP: Creating pod Jan 11 19:40:36.789: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:40:36.881: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-8rb9b] to have phase Bound Jan 11 19:40:36.971: INFO: PersistentVolumeClaim pvc-8rb9b found but phase is Pending instead of Bound. Jan 11 19:40:39.061: INFO: PersistentVolumeClaim pvc-8rb9b found but phase is Pending instead of Bound. Jan 11 19:40:41.151: INFO: PersistentVolumeClaim pvc-8rb9b found and phase=Bound (4.269629496s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-6rjqv Jan 11 19:40:47.870: INFO: Deleting pod "pvc-volume-tester-6rjqv" in namespace "csi-mock-volumes-1062" Jan 11 19:40:47.961: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6rjqv" to be fully deleted STEP: Deleting claim pvc-8rb9b Jan 11 19:40:54.320: INFO: Waiting up to 2m0s for PersistentVolume pvc-e8f141b1-e389-4b8e-b7ba-b8f3033fc423 to get deleted Jan 11 19:40:54.410: INFO: PersistentVolume pvc-e8f141b1-e389-4b8e-b7ba-b8f3033fc423 was removed STEP: Deleting storageclass csi-mock-volumes-1062-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 19:40:54.501: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1062/csi-attacher Jan 11 19:40:54.593: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1062 Jan 11 19:40:54.686: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1062 Jan 11 19:40:54.778: INFO: deleting *v1.Role: csi-mock-volumes-1062/external-attacher-cfg-csi-mock-volumes-1062 Jan 11 19:40:54.869: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1062/csi-attacher-role-cfg Jan 11 19:40:54.960: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1062/csi-provisioner Jan 11 19:40:55.051: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1062 Jan 11 19:40:55.143: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1062 Jan 11 19:40:55.234: INFO: deleting *v1.Role: csi-mock-volumes-1062/external-provisioner-cfg-csi-mock-volumes-1062 Jan 11 19:40:55.326: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1062/csi-provisioner-role-cfg Jan 11 19:40:55.417: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1062/csi-resizer Jan 11 19:40:55.508: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1062 Jan 11 19:40:55.601: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1062 Jan 11 19:40:55.692: INFO: deleting 
*v1.Role: csi-mock-volumes-1062/external-resizer-cfg-csi-mock-volumes-1062 Jan 11 19:40:55.784: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1062/csi-resizer-role-cfg Jan 11 19:40:55.876: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1062/csi-mock Jan 11 19:40:55.968: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1062 Jan 11 19:40:56.059: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1062 Jan 11 19:40:56.150: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1062 Jan 11 19:40:56.242: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1062 Jan 11 19:40:56.333: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1062 Jan 11 19:40:56.425: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1062 Jan 11 19:40:56.517: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1062/csi-mockplugin Jan 11 19:40:56.609: INFO: deleting *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-1062 Jan 11 19:40:56.701: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1062/csi-mockplugin-attacher [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:40:56.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-1062" for this suite. Jan 11 19:41:09.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:12.819: INFO: namespace csi-mock-volumes-1062 deletion completed in 15.844341003s • [SLOW TEST:39.388 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:241 should require VolumeAttach for drivers with attachment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:263 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:46.448: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-107 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl replace /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1704 [It] should update a single-container pod's image [Conformance] 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 11 19:40:47.087: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-107' Jan 11 19:40:47.595: INFO: stderr: "" Jan 11 19:40:47.595: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 11 19:40:52.695: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pod e2e-test-httpd-pod --namespace=kubectl-107 -o json' Jan 11 19:40:53.115: INFO: stderr: "" Jan 11 19:40:53.115: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"100.64.1.249/32\",\n \"kubernetes.io/psp\": \"e2e-test-privileged-psp\"\n },\n \"creationTimestamp\": \"2020-01-11T19:40:47Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-107\",\n \"resourceVersion\": \"51301\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-107/pods/e2e-test-httpd-pod\",\n \"uid\": \"349c0a52-7eb4-4de4-b31b-513a3c315522\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-gxsp9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"ip-10-250-27-25.ec2.internal\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-gxsp9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-gxsp9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T19:40:47Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T19:40:48Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T19:40:48Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T19:40:47Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n 
\"containerStatuses\": [\n {\n \"containerID\": \"docker://b36d2052bc90d2c52e3a148b064d1a40719d01f26fe82cac66f9d4957cadb98d\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T19:40:48Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.250.27.25\",\n \"phase\": \"Running\",\n \"podIP\": \"100.64.1.249\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.1.249\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-11T19:40:47Z\"\n }\n}\n" STEP: replace the image in the pod Jan 11 19:40:53.115: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config replace -f - --namespace=kubectl-107' Jan 11 19:40:53.710: INFO: stderr: "" Jan 11 19:40:53.710: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1709 Jan 11 19:40:53.800: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete pods e2e-test-httpd-pod --namespace=kubectl-107' Jan 11 19:41:03.781: INFO: stderr: "" Jan 11 19:41:03.781: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:03.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-107" for this suite. 
Jan 11 19:41:10.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:13.626: INFO: namespace kubectl-107 deletion completed in 9.752250058s • [SLOW TEST:27.178 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700 should update a single-container pod's image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:02.226: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-4755 STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating multiple subpath from same volumes [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:277 Jan 11 19:41:02.864: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:41:03.046: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4755" in namespace "provisioning-4755" to be "success or failure" Jan 11 19:41:03.136: INFO: Pod "hostpath-symlink-prep-provisioning-4755": Phase="Pending", Reason="", readiness=false. Elapsed: 89.555238ms Jan 11 19:41:05.226: INFO: Pod "hostpath-symlink-prep-provisioning-4755": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.17928633s STEP: Saw pod success Jan 11 19:41:05.226: INFO: Pod "hostpath-symlink-prep-provisioning-4755" satisfied condition "success or failure" Jan 11 19:41:05.226: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4755" in namespace "provisioning-4755" Jan 11 19:41:05.319: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4755" to be fully deleted Jan 11 19:41:05.408: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-5pp9 STEP: Creating a pod to test multi_subpath Jan 11 19:41:05.499: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpathsymlink-5pp9" in namespace "provisioning-4755" to be "success or failure" Jan 11 19:41:05.589: INFO: Pod "pod-subpath-test-hostpathsymlink-5pp9": Phase="Pending", Reason="", readiness=false. Elapsed: 89.364507ms Jan 11 19:41:07.679: INFO: Pod "pod-subpath-test-hostpathsymlink-5pp9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179494101s STEP: Saw pod success Jan 11 19:41:07.679: INFO: Pod "pod-subpath-test-hostpathsymlink-5pp9" satisfied condition "success or failure" Jan 11 19:41:07.768: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpathsymlink-5pp9 container test-container-subpath-hostpathsymlink-5pp9: STEP: delete the pod Jan 11 19:41:07.957: INFO: Waiting for pod pod-subpath-test-hostpathsymlink-5pp9 to disappear Jan 11 19:41:08.047: INFO: Pod pod-subpath-test-hostpathsymlink-5pp9 no longer exists STEP: Deleting pod Jan 11 19:41:08.047: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-5pp9" in namespace "provisioning-4755" Jan 11 19:41:08.226: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4755" in namespace "provisioning-4755" to be "success or failure" Jan 11 19:41:08.315: INFO: Pod "hostpath-symlink-prep-provisioning-4755": Phase="Pending", Reason="", readiness=false. Elapsed: 89.106808ms Jan 11 19:41:10.407: INFO: Pod "hostpath-symlink-prep-provisioning-4755": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.18086226s STEP: Saw pod success Jan 11 19:41:10.407: INFO: Pod "hostpath-symlink-prep-provisioning-4755" satisfied condition "success or failure" Jan 11 19:41:10.407: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4755" in namespace "provisioning-4755" Jan 11 19:41:10.500: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4755" to be fully deleted Jan 11 19:41:10.589: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:10.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-4755" for this suite. 
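Editor's note: the multi-subpath spec above mounts one volume at two different container paths via subPath. A minimal hand-written pod of the same shape; the names and the emptyDir backing volume are placeholders (the test itself uses the hostPathSymlink inline volume).

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: multi-subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "ls /mnt/a /mnt/b"]   # both paths are backed by the same volume
        volumeMounts:
        - name: shared
          mountPath: /mnt/a
          subPath: dir-a        # sub-directory inside the volume, created on demand
        - name: shared
          mountPath: /mnt/b
          subPath: dir-b
      volumes:
      - name: shared
        emptyDir: {}
    EOF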
Jan 11 19:41:16.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:20.309: INFO: namespace provisioning-4755 deletion completed in 9.629394167s • [SLOW TEST:18.083 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support creating multiple subpath from same volumes [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:277 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:12.849: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-905 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:123 Jan 11 19:41:13.582: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-905" to be "success or failure" Jan 11 19:41:13.672: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 90.029756ms Jan 11 19:41:15.762: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180179345s Jan 11 19:41:15.762: INFO: Pod "explicit-nonroot-uid" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:15.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-905" for this suite. 
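The "explicit-nonroot-uid" pod above succeeds because its container is started under an explicit non-zero UID. A rough equivalent, assuming an illustrative pod name, image and UID:

# Sketch: run a container under an explicit non-root UID (name, image and UID are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: explicit-nonroot-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "id -u"]   # prints the UID the container runs as
    securityContext:
      runAsNonRoot: true
      runAsUser: 1234                     # any non-zero UID satisfies the runAsNonRoot check
EOF
kubectl logs explicit-nonroot-uid-demo    # expected output: 1234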
Jan 11 19:41:22.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:25.664: INFO: namespace security-context-test-905 deletion completed in 9.714046993s • [SLOW TEST:12.815 seconds] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 When creating a container with runAsNonRoot /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98 should run with an explicit non-root user ID [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:123 ------------------------------ SSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:15.289: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volumeio STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volumeio-3164 STEP: Waiting for a default service account to be provisioned in namespace [It] should write files of various sizes, verify size, validate content [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:137 STEP: deploying csi-hostpath driver Jan 11 19:40:16.144: INFO: creating *v1.ServiceAccount: volumeio-3164/csi-attacher Jan 11 19:40:16.235: INFO: creating *v1.ClusterRole: external-attacher-runner-volumeio-3164 Jan 11 19:40:16.235: INFO: Define cluster role external-attacher-runner-volumeio-3164 Jan 11 19:40:16.325: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volumeio-3164 Jan 11 19:40:16.416: INFO: creating *v1.Role: volumeio-3164/external-attacher-cfg-volumeio-3164 Jan 11 19:40:16.507: INFO: creating *v1.RoleBinding: volumeio-3164/csi-attacher-role-cfg Jan 11 19:40:16.597: INFO: creating *v1.ServiceAccount: volumeio-3164/csi-provisioner Jan 11 19:40:16.687: INFO: creating *v1.ClusterRole: external-provisioner-runner-volumeio-3164 Jan 11 19:40:16.687: INFO: Define cluster role external-provisioner-runner-volumeio-3164 Jan 11 19:40:16.777: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volumeio-3164 Jan 11 19:40:16.867: INFO: creating *v1.Role: volumeio-3164/external-provisioner-cfg-volumeio-3164 Jan 11 19:40:16.957: INFO: creating *v1.RoleBinding: volumeio-3164/csi-provisioner-role-cfg Jan 11 19:40:17.047: INFO: creating *v1.ServiceAccount: volumeio-3164/csi-snapshotter Jan 11 19:40:17.136: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volumeio-3164 Jan 11 19:40:17.136: INFO: Define cluster role external-snapshotter-runner-volumeio-3164 Jan 11 19:40:17.226: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volumeio-3164 Jan 11 19:40:17.316: INFO: creating *v1.Role: volumeio-3164/external-snapshotter-leaderelection-volumeio-3164 Jan 11 19:40:17.406: INFO: 
creating *v1.RoleBinding: volumeio-3164/external-snapshotter-leaderelection Jan 11 19:40:17.498: INFO: creating *v1.ServiceAccount: volumeio-3164/csi-resizer Jan 11 19:40:17.589: INFO: creating *v1.ClusterRole: external-resizer-runner-volumeio-3164 Jan 11 19:40:17.589: INFO: Define cluster role external-resizer-runner-volumeio-3164 Jan 11 19:40:17.679: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volumeio-3164 Jan 11 19:40:17.769: INFO: creating *v1.Role: volumeio-3164/external-resizer-cfg-volumeio-3164 Jan 11 19:40:17.859: INFO: creating *v1.RoleBinding: volumeio-3164/csi-resizer-role-cfg Jan 11 19:40:17.949: INFO: creating *v1.Service: volumeio-3164/csi-hostpath-attacher Jan 11 19:40:18.043: INFO: creating *v1.StatefulSet: volumeio-3164/csi-hostpath-attacher Jan 11 19:40:18.134: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volumeio-3164 Jan 11 19:40:18.223: INFO: creating *v1.Service: volumeio-3164/csi-hostpathplugin Jan 11 19:40:18.318: INFO: creating *v1.StatefulSet: volumeio-3164/csi-hostpathplugin Jan 11 19:40:18.409: INFO: creating *v1.Service: volumeio-3164/csi-hostpath-provisioner Jan 11 19:40:18.506: INFO: creating *v1.StatefulSet: volumeio-3164/csi-hostpath-provisioner Jan 11 19:40:18.596: INFO: creating *v1.Service: volumeio-3164/csi-hostpath-resizer Jan 11 19:40:18.689: INFO: creating *v1.StatefulSet: volumeio-3164/csi-hostpath-resizer Jan 11 19:40:18.780: INFO: creating *v1.Service: volumeio-3164/csi-snapshotter Jan 11 19:40:18.873: INFO: creating *v1.StatefulSet: volumeio-3164/csi-snapshotter Jan 11 19:40:18.964: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumeio-3164 Jan 11 19:40:19.054: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:40:19.054: INFO: Creating resource for dynamic PV STEP: creating a StorageClass volumeio-3164-csi-hostpath-volumeio-3164-scmr9z7 STEP: creating a claim Jan 11 19:40:19.143: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:40:19.236: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath974nb] to have phase Bound Jan 11 19:40:19.326: INFO: PersistentVolumeClaim csi-hostpath974nb found but phase is Pending instead of Bound. 
Jan 11 19:40:21.416: INFO: PersistentVolumeClaim csi-hostpath974nb found and phase=Bound (2.180229507s) STEP: starting hostpath-io-client STEP: writing 1048576 bytes to test file /opt/csi-hostpath_io_test_volumeio-3164-1048576 Jan 11 19:40:27.869: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-3164 hostpath-io-client -- /bin/sh -c i=0; while [ $i -lt 1 ]; do dd if=/opt/hostpath-volumeio-3164-dd_if bs=1048576 >>/opt/csi-hostpath_io_test_volumeio-3164-1048576 2>/dev/null; let i+=1; done' Jan 11 19:40:29.216: INFO: stderr: "" Jan 11 19:40:29.216: INFO: stdout: "" STEP: verifying file size Jan 11 19:40:29.216: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-3164 hostpath-io-client -- /bin/sh -c stat -c %s /opt/csi-hostpath_io_test_volumeio-3164-1048576' Jan 11 19:40:30.525: INFO: stderr: "" Jan 11 19:40:30.525: INFO: stdout: "1048576\n" STEP: verifying file hash Jan 11 19:40:30.525: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-3164 hostpath-io-client -- /bin/sh -c md5sum /opt/csi-hostpath_io_test_volumeio-3164-1048576 | cut -d' ' -f1' Jan 11 19:40:31.813: INFO: stderr: "" Jan 11 19:40:31.813: INFO: stdout: "5c34c2813223a7ca05a3c2f38c0d1710\n" STEP: writing 104857600 bytes to test file /opt/csi-hostpath_io_test_volumeio-3164-104857600 Jan 11 19:40:31.813: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-3164 hostpath-io-client -- /bin/sh -c i=0; while [ $i -lt 100 ]; do dd if=/opt/hostpath-volumeio-3164-dd_if bs=1048576 >>/opt/csi-hostpath_io_test_volumeio-3164-104857600 2>/dev/null; let i+=1; done' Jan 11 19:40:33.254: INFO: stderr: "" Jan 11 19:40:33.254: INFO: stdout: "" STEP: verifying file size Jan 11 19:40:33.254: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-3164 hostpath-io-client -- /bin/sh -c stat -c %s /opt/csi-hostpath_io_test_volumeio-3164-104857600' Jan 11 19:40:34.634: INFO: stderr: "" Jan 11 19:40:34.634: INFO: stdout: "104857600\n" STEP: verifying file hash Jan 11 19:40:34.635: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-3164 hostpath-io-client -- /bin/sh -c md5sum /opt/csi-hostpath_io_test_volumeio-3164-104857600 | cut -d' ' -f1' Jan 11 19:40:36.157: INFO: stderr: "" Jan 11 19:40:36.158: INFO: stdout: "f2fa202b1ffeedda5f3a58bd1ae81104\n" STEP: deleting test file /opt/csi-hostpath_io_test_volumeio-3164-104857600... 
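The size and hash checks above are plain shell executed inside the client pod: dd appends N copies of a 1 MiB source file, then stat and md5sum verify the result. Condensed from the commands in this run (namespace, pod and paths are the ones generated for this test):

# Write 100 x 1 MiB into the mounted volume, then verify size and content hash.
kubectl exec -n volumeio-3164 hostpath-io-client -- /bin/sh -c 'i=0; while [ $i -lt 100 ]; do dd if=/opt/hostpath-volumeio-3164-dd_if bs=1048576 >> /opt/csi-hostpath_io_test_volumeio-3164-104857600 2>/dev/null; i=$((i+1)); done'

# Expected size: 100 * 1048576 = 104857600 bytes.
kubectl exec -n volumeio-3164 hostpath-io-client -- stat -c %s /opt/csi-hostpath_io_test_volumeio-3164-104857600

# The hash must match the hash of 100 concatenated copies of the source file.
kubectl exec -n volumeio-3164 hostpath-io-client -- /bin/sh -c 'md5sum /opt/csi-hostpath_io_test_volumeio-3164-104857600 | cut -d" " -f1'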
Jan 11 19:40:36.158: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-3164 hostpath-io-client -- /bin/sh -c rm -f /opt/csi-hostpath_io_test_volumeio-3164-104857600' Jan 11 19:40:37.466: INFO: stderr: "" Jan 11 19:40:37.466: INFO: stdout: "" STEP: deleting test file /opt/csi-hostpath_io_test_volumeio-3164-1048576... Jan 11 19:40:37.466: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-3164 hostpath-io-client -- /bin/sh -c rm -f /opt/csi-hostpath_io_test_volumeio-3164-1048576' Jan 11 19:40:38.742: INFO: stderr: "" Jan 11 19:40:38.742: INFO: stdout: "" STEP: deleting test file /opt/hostpath-volumeio-3164-dd_if... Jan 11 19:40:38.742: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-3164 hostpath-io-client -- /bin/sh -c rm -f /opt/hostpath-volumeio-3164-dd_if' Jan 11 19:40:39.973: INFO: stderr: "" Jan 11 19:40:39.973: INFO: stdout: "" STEP: deleting client pod "hostpath-io-client"... Jan 11 19:40:39.973: INFO: Deleting pod "hostpath-io-client" in namespace "volumeio-3164" Jan 11 19:40:40.065: INFO: Wait up to 5m0s for pod "hostpath-io-client" to be fully deleted Jan 11 19:40:48.244: INFO: sleeping a bit so kubelet can unmount and detach the volume STEP: Deleting pvc Jan 11 19:41:08.244: INFO: Deleting PersistentVolumeClaim "csi-hostpath974nb" Jan 11 19:41:08.336: INFO: Waiting up to 5m0s for PersistentVolume pvc-26d9b525-672b-4db6-af8f-adecbe0b8f55 to get deleted Jan 11 19:41:08.425: INFO: PersistentVolume pvc-26d9b525-672b-4db6-af8f-adecbe0b8f55 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:41:08.516: INFO: deleting *v1.ServiceAccount: volumeio-3164/csi-attacher Jan 11 19:41:08.608: INFO: deleting *v1.ClusterRole: external-attacher-runner-volumeio-3164 Jan 11 19:41:08.699: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volumeio-3164 Jan 11 19:41:08.790: INFO: deleting *v1.Role: volumeio-3164/external-attacher-cfg-volumeio-3164 Jan 11 19:41:08.881: INFO: deleting *v1.RoleBinding: volumeio-3164/csi-attacher-role-cfg Jan 11 19:41:08.973: INFO: deleting *v1.ServiceAccount: volumeio-3164/csi-provisioner Jan 11 19:41:09.064: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volumeio-3164 Jan 11 19:41:09.155: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volumeio-3164 Jan 11 19:41:09.246: INFO: deleting *v1.Role: volumeio-3164/external-provisioner-cfg-volumeio-3164 Jan 11 19:41:09.337: INFO: deleting *v1.RoleBinding: volumeio-3164/csi-provisioner-role-cfg Jan 11 19:41:09.430: INFO: deleting *v1.ServiceAccount: volumeio-3164/csi-snapshotter Jan 11 19:41:09.521: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volumeio-3164 Jan 11 19:41:09.612: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volumeio-3164 Jan 11 19:41:09.702: INFO: deleting *v1.Role: volumeio-3164/external-snapshotter-leaderelection-volumeio-3164 Jan 11 19:41:09.793: INFO: deleting *v1.RoleBinding: volumeio-3164/external-snapshotter-leaderelection Jan 11 19:41:09.885: INFO: deleting *v1.ServiceAccount: volumeio-3164/csi-resizer Jan 11 19:41:09.975: INFO: deleting 
*v1.ClusterRole: external-resizer-runner-volumeio-3164 Jan 11 19:41:10.067: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volumeio-3164 Jan 11 19:41:10.158: INFO: deleting *v1.Role: volumeio-3164/external-resizer-cfg-volumeio-3164 Jan 11 19:41:10.249: INFO: deleting *v1.RoleBinding: volumeio-3164/csi-resizer-role-cfg Jan 11 19:41:10.340: INFO: deleting *v1.Service: volumeio-3164/csi-hostpath-attacher Jan 11 19:41:10.436: INFO: deleting *v1.StatefulSet: volumeio-3164/csi-hostpath-attacher Jan 11 19:41:10.527: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volumeio-3164 Jan 11 19:41:10.618: INFO: deleting *v1.Service: volumeio-3164/csi-hostpathplugin Jan 11 19:41:10.714: INFO: deleting *v1.StatefulSet: volumeio-3164/csi-hostpathplugin Jan 11 19:41:10.805: INFO: deleting *v1.Service: volumeio-3164/csi-hostpath-provisioner Jan 11 19:41:10.903: INFO: deleting *v1.StatefulSet: volumeio-3164/csi-hostpath-provisioner Jan 11 19:41:10.993: INFO: deleting *v1.Service: volumeio-3164/csi-hostpath-resizer Jan 11 19:41:11.089: INFO: deleting *v1.StatefulSet: volumeio-3164/csi-hostpath-resizer Jan 11 19:41:11.180: INFO: deleting *v1.Service: volumeio-3164/csi-snapshotter Jan 11 19:41:11.275: INFO: deleting *v1.StatefulSet: volumeio-3164/csi-snapshotter Jan 11 19:41:11.366: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumeio-3164 [AfterEach] [Testpattern: Dynamic PV (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:11.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volumeio-3164" for this suite. Jan 11 19:41:23.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:27.221: INFO: namespace volumeio-3164 deletion completed in 15.673394956s • [SLOW TEST:71.931 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should write files of various sizes, verify size, validate content [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:137 ------------------------------ S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:42.588: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-1753 STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:40:43.225: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 11 19:40:47.518: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1753 create -f -' Jan 11 19:40:48.976: INFO: stderr: "" Jan 11 19:40:48.976: INFO: stdout: "e2e-test-crd-publish-openapi-2049-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 11 19:40:48.976: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1753 delete e2e-test-crd-publish-openapi-2049-crds test-foo' Jan 11 19:41:04.492: INFO: stderr: "" Jan 11 19:41:04.493: INFO: stdout: "e2e-test-crd-publish-openapi-2049-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 11 19:41:04.493: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1753 apply -f -' Jan 11 19:41:05.630: INFO: stderr: "" Jan 11 19:41:05.630: INFO: stdout: "e2e-test-crd-publish-openapi-2049-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 11 19:41:05.630: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1753 delete e2e-test-crd-publish-openapi-2049-crds test-foo' Jan 11 19:41:06.149: INFO: stderr: "" Jan 11 19:41:06.150: INFO: stdout: "e2e-test-crd-publish-openapi-2049-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 11 19:41:06.150: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1753 create -f -' Jan 11 19:41:06.994: INFO: rc: 1 Jan 11 19:41:06.994: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1753 apply -f -' Jan 11 19:41:07.839: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 11 19:41:07.840: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1753 create -f -' Jan 11 19:41:08.697: INFO: rc: 1 Jan 11 19:41:08.697: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1753 apply -f -' Jan 11 19:41:09.557: INFO: rc: 1 STEP: kubectl explain works to explain 
CR properties Jan 11 19:41:09.558: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config explain e2e-test-crd-publish-openapi-2049-crds' Jan 11 19:41:10.414: INFO: stderr: "" Jan 11 19:41:10.414: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2049-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 11 19:41:10.414: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config explain e2e-test-crd-publish-openapi-2049-crds.metadata' Jan 11 19:41:11.265: INFO: stderr: "" Jan 11 19:41:11.265: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2049-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. 
This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 11 19:41:11.266: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config explain e2e-test-crd-publish-openapi-2049-crds.spec' Jan 11 19:41:12.117: INFO: stderr: "" Jan 11 19:41:12.117: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2049-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 11 19:41:12.117: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config explain e2e-test-crd-publish-openapi-2049-crds.spec.bars' Jan 11 19:41:12.973: INFO: stderr: "" Jan 11 19:41:12.973: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2049-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 11 19:41:12.974: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config explain e2e-test-crd-publish-openapi-2049-crds.spec.bars2' Jan 11 19:41:13.472: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:18.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1753" for this suite. 
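Once the CRD's structural schema is published, kubectl explain can drill into any field path, and client-side validation (kubectl create/apply) is checked against the same schema. The explain calls from this run, for reference (the CRD name is the randomly generated one above):

# Drill into the published schema of the test CRD.
kubectl explain e2e-test-crd-publish-openapi-2049-crds
kubectl explain e2e-test-crd-publish-openapi-2049-crds.spec
kubectl explain e2e-test-crd-publish-openapi-2049-crds.spec.bars

# A field path that is not in the schema fails (the rc: 1 in the log above).
kubectl explain e2e-test-crd-publish-openapi-2049-crds.spec.bars2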
Jan 11 19:41:25.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:28.606: INFO: namespace crd-publish-openapi-1753 deletion completed in 9.656105745s • [SLOW TEST:46.018 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:05.687: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-1365 STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:41:06.327: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 11 19:41:11.400: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1365 create -f -' Jan 11 19:41:12.830: INFO: stderr: "" Jan 11 19:41:12.830: INFO: stdout: "e2e-test-crd-publish-openapi-6249-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 11 19:41:12.830: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1365 delete e2e-test-crd-publish-openapi-6249-crds test-cr' Jan 11 19:41:13.343: INFO: stderr: "" Jan 11 19:41:13.343: INFO: stdout: "e2e-test-crd-publish-openapi-6249-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 11 19:41:13.343: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1365 apply -f -' Jan 11 19:41:14.484: INFO: stderr: "" Jan 11 19:41:14.484: INFO: stdout: "e2e-test-crd-publish-openapi-6249-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 11 19:41:14.484: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1365 delete e2e-test-crd-publish-openapi-6249-crds test-cr' Jan 11 19:41:15.039: INFO: stderr: "" Jan 11 19:41:15.039: INFO: stdout: 
"e2e-test-crd-publish-openapi-6249-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 11 19:41:15.039: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config explain e2e-test-crd-publish-openapi-6249-crds' Jan 11 19:41:15.897: INFO: stderr: "" Jan 11 19:41:15.897: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6249-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:21.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1365" for this suite. Jan 11 19:41:27.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:31.164: INFO: namespace crd-publish-openapi-1365 deletion completed in 9.651231754s • [SLOW TEST:25.477 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:27.224: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7435 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should be able to update NodePorts with two same port numbers but different protocols /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:948 STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-7435 Jan 11 19:41:27.957: INFO: service port TCP: 80 STEP: changing the TCP service to type=NodePort and add a UDP port Jan 11 19:41:28.141: INFO: new service allocates NodePort 30723 for Port tcp-port Jan 11 19:41:28.141: INFO: new service allocates NodePort 30691 for Port udp-port Jan 11 19:41:28.141: INFO: Cleaning up the updating NodePorts test service [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:28.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7435" for this suite. 
Jan 11 19:41:34.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:38.006: INFO: namespace services-7435 deletion completed in 9.674080579s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:10.782 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to update NodePorts with two same port numbers but different protocols /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:948 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:31.169: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-7608 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:41:31.897: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b9654c29-cfe1-4f63-a895-292c5150307b" in namespace "security-context-test-7608" to be "success or failure" Jan 11 19:41:31.986: INFO: Pod "busybox-readonly-false-b9654c29-cfe1-4f63-a895-292c5150307b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.235473ms Jan 11 19:41:34.075: INFO: Pod "busybox-readonly-false-b9654c29-cfe1-4f63-a895-292c5150307b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178914104s Jan 11 19:41:34.076: INFO: Pod "busybox-readonly-false-b9654c29-cfe1-4f63-a895-292c5150307b" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:34.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7608" for this suite. 
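With readOnlyRootFilesystem set to false the container keeps a writable root filesystem, which is what the busybox-readonly-false pod above verifies. A rough equivalent, assuming illustrative names and image:

# Sketch: container with an explicitly writable root filesystem (name and image are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/write-test && echo rootfs-is-writable"]
    securityContext:
      readOnlyRootFilesystem: false   # the write above must succeed for the pod to exit 0
EOF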
Jan 11 19:41:40.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:43.727: INFO: namespace security-context-test-7608 deletion completed in 9.560668947s • [SLOW TEST:12.558 seconds] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:20.361: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9377 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should create/apply a CR with unknown fields for CRD with no validation schema /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:899 STEP: create CRD with no validation schema Jan 11 19:41:20.996: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature STEP: successfully create CR Jan 11 19:41:31.265: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9377 create --validate=true -f -' Jan 11 19:41:32.712: INFO: stderr: "" Jan 11 19:41:32.712: INFO: stdout: "e2e-test-kubectl-4410-crd.kubectl.example.com/test-cr created\n" Jan 11 19:41:32.712: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9377 delete e2e-test-kubectl-4410-crds test-cr' Jan 11 19:41:33.227: INFO: stderr: "" Jan 11 19:41:33.227: INFO: stdout: "e2e-test-kubectl-4410-crd.kubectl.example.com \"test-cr\" deleted\n" STEP: successfully apply CR Jan 11 19:41:33.228: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9377 apply --validate=true -f -' Jan 11 19:41:34.356: INFO: stderr: "" Jan 11 19:41:34.356: INFO: stdout: "e2e-test-kubectl-4410-crd.kubectl.example.com/test-cr created\n" Jan 11 19:41:34.356: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9377 delete e2e-test-kubectl-4410-crds test-cr' Jan 11 19:41:34.870: INFO: stderr: "" Jan 11 19:41:34.870: INFO: stdout: "e2e-test-kubectl-4410-crd.kubectl.example.com \"test-cr\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:35.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9377" for this suite. Jan 11 19:41:41.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:44.707: INFO: namespace kubectl-9377 deletion completed in 9.567884675s • [SLOW TEST:24.346 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl client-side validation /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:898 should create/apply a CR with unknown fields for CRD with no validation schema /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:899 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:25.674: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9957 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:41:28.767: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9957 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-c3164a60-aa97-4718-8871-0137ce3322d0 && mount --bind /tmp/local-volume-test-c3164a60-aa97-4718-8871-0137ce3322d0 /tmp/local-volume-test-c3164a60-aa97-4718-8871-0137ce3322d0' Jan 11 19:41:30.101: INFO: stderr: "" Jan 11 19:41:30.101: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:41:30.101: INFO: Creating a PV followed by a PVC Jan 11 19:41:30.282: INFO: Waiting for PV local-pv6hmlh to bind to PVC pvc-hl798 Jan 11 19:41:30.282: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-hl798] to have phase Bound Jan 11 19:41:30.372: INFO: PersistentVolumeClaim pvc-hl798 found and phase=Bound (89.775663ms) Jan 11 19:41:30.372: INFO: Waiting up to 3m0s 
for PersistentVolume local-pv6hmlh to have phase Bound Jan 11 19:41:30.462: INFO: PersistentVolume local-pv6hmlh found and phase=Bound (89.734397ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jan 11 19:41:33.003: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-4302fa5d-23a8-415b-b609-880aa702a714 --namespace=persistent-local-volumes-test-9957 -- stat -c %g /mnt/volume1' Jan 11 19:41:34.318: INFO: stderr: "" Jan 11 19:41:34.318: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jan 11 19:41:36.679: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-736b3583-4487-4ccc-b276-e4d858e4630e --namespace=persistent-local-volumes-test-9957 -- stat -c %g /mnt/volume1' Jan 11 19:41:37.961: INFO: stderr: "" Jan 11 19:41:37.961: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod security-context-4302fa5d-23a8-415b-b609-880aa702a714 in namespace persistent-local-volumes-test-9957 STEP: Deleting second pod STEP: Deleting pod security-context-736b3583-4487-4ccc-b276-e4d858e4630e in namespace persistent-local-volumes-test-9957 [AfterEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:41:38.144: INFO: Deleting PersistentVolumeClaim "pvc-hl798" Jan 11 19:41:38.235: INFO: Deleting PersistentVolume "local-pv6hmlh" STEP: Removing the test directory Jan 11 19:41:38.326: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9957 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-c3164a60-aa97-4718-8871-0137ce3322d0 && rm -r /tmp/local-volume-test-c3164a60-aa97-4718-8871-0137ce3322d0' Jan 11 19:41:39.644: INFO: stderr: "" Jan 11 19:41:39.644: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:39.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9957" for this suite. 
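For the dir-bindmounted volume type the backing directory is prepared on the node through the hostexec pod (mkdir plus a bind mount of the directory onto itself) and removed the same way. Stripped of the kubectl/nsenter wrapping, the node-side commands from this run are:

# Node-side setup/teardown for the dir-bindmounted local volume (directory name is generated per test).
DIR=/tmp/local-volume-test-c3164a60-aa97-4718-8871-0137ce3322d0

# Setup: create the directory and bind-mount it onto itself.
mkdir "$DIR" && mount --bind "$DIR" "$DIR"

# ... a local PV/PVC pair is created on top of $DIR and each pod checks the group with: stat -c %g /mnt/volume1

# Teardown: unmount and remove the directory.
umount "$DIR" && rm -r "$DIR"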
Jan 11 19:41:46.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:49.423: INFO: namespace persistent-local-volumes-test-9957 deletion completed in 9.596161762s • [SLOW TEST:23.749 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:43.733: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume-provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-439 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:259 [It] should provision storage with different parameters /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:265 Jan 11 19:41:44.548: INFO: Skipping "SSD PD on GCE/GKE": cloud providers is not [gce gke] Jan 11 19:41:44.548: INFO: Skipping "HDD PD on GCE/GKE": cloud providers is not [gce gke] Jan 11 19:41:44.548: INFO: Skipping "gp2 EBS on AWS": cloud providers is not [aws] Jan 11 19:41:44.548: INFO: Skipping "io1 EBS on AWS": cloud providers is not [aws] Jan 11 19:41:44.548: INFO: Skipping "sc1 EBS on AWS": cloud providers is not [aws] Jan 11 19:41:44.548: INFO: Skipping "st1 EBS on AWS": cloud providers is not [aws] Jan 11 19:41:44.548: INFO: Skipping "encrypted EBS on AWS": cloud providers is not [aws] Jan 11 19:41:44.548: INFO: Skipping "generic Cinder volume on OpenStack": cloud providers is not [openstack] Jan 11 19:41:44.548: INFO: Skipping "Cinder volume with empty volume type and zone on OpenStack": cloud providers is not [openstack] Jan 11 19:41:44.548: INFO: Skipping "generic vSphere volume": cloud providers is not [vsphere] Jan 11 19:41:44.548: INFO: Skipping "Azure disk volume with empty sku and location": cloud providers is not [azure] [AfterEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:44.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-439" for this suite. 
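Every entry in the provisioning matrix is skipped here because the configured cloud provider matches none of them. For illustration only (not taken from this run), the skipped "gp2 EBS on AWS" case exercises a StorageClass/PVC pair roughly like this:

# Sketch: StorageClass and claim of the kind the skipped AWS gp2 case would provision (names are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: e2e-aws-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: e2e-aws-gp2-claim
spec:
  storageClassName: e2e-aws-gp2
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF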
Jan 11 19:41:50.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:41:54.197: INFO: namespace volume-provisioning-439 deletion completed in 9.558998256s • [SLOW TEST:10.465 seconds] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:264 should provision storage with different parameters /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:265 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:49.435: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubelet-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9202 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:54.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9202" for this suite. 
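The kubelet case above only needs to observe the container's terminated state once its always-failing command exits. A hand-run equivalent, with a placeholder pod and namespace name:

  kubectl get pod bb-always-fails --namespace=my-ns -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'

The test asserts that this field is populated (typically "Error") after the busybox command fails.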
Jan 11 19:42:00.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:04.023: INFO: namespace kubelet-test-9202 deletion completed in 9.581544368s • [SLOW TEST:14.588 seconds] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:13.640: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename dns STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-5564 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5564.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5564.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 19:41:17.006: INFO: DNS probes using dns-test-04dd225b-b30b-48d9-b572-292c3276efad succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5564.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5564.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 19:41:19.824: INFO: File wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:19.918: INFO: File jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 11 19:41:19.918: INFO: Lookups using dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a failed for: [wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local] Jan 11 19:41:25.012: INFO: File wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:25.107: INFO: File jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:25.107: INFO: Lookups using dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a failed for: [wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local] Jan 11 19:41:30.012: INFO: File wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:30.109: INFO: File jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:30.109: INFO: Lookups using dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a failed for: [wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local] Jan 11 19:41:35.011: INFO: File wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:35.104: INFO: File jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:35.104: INFO: Lookups using dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a failed for: [wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local] Jan 11 19:41:40.013: INFO: File wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:40.106: INFO: File jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:40.106: INFO: Lookups using dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a failed for: [wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local] Jan 11 19:41:45.011: INFO: File wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 11 19:41:45.105: INFO: File jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local from pod dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 11 19:41:45.105: INFO: Lookups using dns-5564/dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a failed for: [wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local] Jan 11 19:41:50.105: INFO: DNS probes using dns-test-1c9770ba-060b-4008-b08b-4f3d1b3eb97a succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5564.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5564.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5564.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5564.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 19:41:55.111: INFO: DNS probes using dns-test-05e9fc8e-50af-4c82-9cd0-29116bd70138 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:55.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5564" for this suite. Jan 11 19:42:01.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:05.011: INFO: namespace dns-5564 deletion completed in 9.620421674s • [SLOW TEST:51.371 seconds] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:38.022: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-3766 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:41:41.117: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3766 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-df7afda0-0a38-424d-bbd4-5bcc9d635cf9' Jan 11 19:41:42.374: INFO: stderr: "" Jan 11 19:41:42.374: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:41:42.374: INFO: Creating a PV followed by a PVC Jan 11 19:41:42.554: INFO: Waiting for PV local-pvp4598 to bind to PVC pvc-gztf8 Jan 11 19:41:42.554: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-gztf8] to have phase Bound Jan 11 19:41:42.644: INFO: PersistentVolumeClaim pvc-gztf8 found and phase=Bound (89.690838ms) Jan 11 19:41:42.644: INFO: Waiting up to 3m0s for PersistentVolume local-pvp4598 to have phase Bound Jan 11 19:41:42.733: INFO: PersistentVolume local-pvp4598 found and phase=Bound (89.542122ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jan 11 19:41:45.273: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-ef892f2c-64e9-44f9-be4f-d7e90c0481db --namespace=persistent-local-volumes-test-3766 -- stat -c %g /mnt/volume1' Jan 11 19:41:46.579: INFO: stderr: "" Jan 11 19:41:46.579: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jan 11 19:41:48.940: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-a1ad5e08-87d1-417f-9c4b-6395135e2d12 --namespace=persistent-local-volumes-test-3766 -- stat -c %g /mnt/volume1' Jan 11 19:41:50.201: INFO: stderr: "" Jan 11 19:41:50.201: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod security-context-ef892f2c-64e9-44f9-be4f-d7e90c0481db in namespace persistent-local-volumes-test-3766 STEP: Deleting second pod STEP: Deleting pod security-context-a1ad5e08-87d1-417f-9c4b-6395135e2d12 in namespace persistent-local-volumes-test-3766 [AfterEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:41:50.384: INFO: Deleting PersistentVolumeClaim "pvc-gztf8" Jan 11 19:41:50.474: INFO: Deleting PersistentVolume "local-pvp4598" STEP: Removing the test directory Jan 11 19:41:50.565: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3766 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-df7afda0-0a38-424d-bbd4-5bcc9d635cf9' Jan 11 19:41:52.032: INFO: stderr: "" Jan 11 19:41:52.032: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:52.124: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3766" for this suite. Jan 11 19:42:04.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:07.804: INFO: namespace persistent-local-volumes-test-3766 deletion completed in 15.589940749s • [SLOW TEST:29.782 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:28.613: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-7326 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should function for node-Service: udp /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:132 STEP: Performing setup for networking test in namespace nettest-7326 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 19:41:29.327: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: Getting node addresses Jan 11 19:41:54.768: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 19:41:54.949: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:41:54.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-7326" for this suite. 
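The node-Service UDP check is skipped because the framework could not count enough schedulable test nodes (it reports -1 here). A quick way to confirm how many schedulable, untainted worker nodes the cluster really has, against the same kubeconfig, is:

  kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config get nodes
  kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe nodes | grep -i taints

Two or more Ready, schedulable nodes are needed before this granular service check can run.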
Jan 11 19:42:07.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:10.606: INFO: namespace nettest-7326 deletion completed in 15.565771846s S [SKIPPING] [41.992 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should function for node-Service: udp [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:132 Requires at least 2 nodes (not -1) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:44.713: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-657 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:41:47.801: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-657 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-364e6522-9af1-4134-8f1c-7e5354ae1c9c-backend && mount --bind /tmp/local-volume-test-364e6522-9af1-4134-8f1c-7e5354ae1c9c-backend /tmp/local-volume-test-364e6522-9af1-4134-8f1c-7e5354ae1c9c-backend && ln -s /tmp/local-volume-test-364e6522-9af1-4134-8f1c-7e5354ae1c9c-backend /tmp/local-volume-test-364e6522-9af1-4134-8f1c-7e5354ae1c9c' Jan 11 19:41:49.121: INFO: stderr: "" Jan 11 19:41:49.121: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:41:49.121: INFO: Creating a PV followed by a PVC Jan 11 19:41:49.300: INFO: Waiting for PV local-pv8hgx4 to bind to PVC pvc-pqk2l Jan 11 19:41:49.300: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-pqk2l] to have phase Bound Jan 11 19:41:49.389: INFO: PersistentVolumeClaim pvc-pqk2l found and phase=Bound (89.005019ms) Jan 11 19:41:49.389: INFO: Waiting up to 3m0s for PersistentVolume local-pv8hgx4 to have phase Bound Jan 11 19:41:49.478: INFO: PersistentVolume local-pv8hgx4 found and phase=Bound (88.895506ms) [BeforeEach] Set fsGroup for local volume 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jan 11 19:41:54.019: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-909dbee6-b125-42fd-8424-f308c40bfa49 --namespace=persistent-local-volumes-test-657 -- stat -c %g /mnt/volume1' Jan 11 19:41:55.358: INFO: stderr: "" Jan 11 19:41:55.358: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jan 11 19:41:57.716: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-ed756b38-d8a9-476e-b86c-0d899544658a --namespace=persistent-local-volumes-test-657 -- stat -c %g /mnt/volume1' Jan 11 19:41:59.029: INFO: stderr: "" Jan 11 19:41:59.029: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod security-context-909dbee6-b125-42fd-8424-f308c40bfa49 in namespace persistent-local-volumes-test-657 STEP: Deleting second pod STEP: Deleting pod security-context-ed756b38-d8a9-476e-b86c-0d899544658a in namespace persistent-local-volumes-test-657 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:41:59.215: INFO: Deleting PersistentVolumeClaim "pvc-pqk2l" Jan 11 19:41:59.305: INFO: Deleting PersistentVolume "local-pv8hgx4" STEP: Removing the test directory Jan 11 19:41:59.395: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-657 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-364e6522-9af1-4134-8f1c-7e5354ae1c9c && umount /tmp/local-volume-test-364e6522-9af1-4134-8f1c-7e5354ae1c9c-backend && rm -r /tmp/local-volume-test-364e6522-9af1-4134-8f1c-7e5354ae1c9c-backend' Jan 11 19:42:01.054: INFO: stderr: "" Jan 11 19:42:01.054: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:01.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-657" for this suite. 
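The dir-link-bindmounted volume type is prepared on the node exactly as the nsenter commands above show: a backing directory is bind-mounted onto itself and then exposed through a symlink, and teardown reverses the three steps. As a standalone sketch with a placeholder path:

  mkdir /tmp/local-vol-backend
  mount --bind /tmp/local-vol-backend /tmp/local-vol-backend
  ln -s /tmp/local-vol-backend /tmp/local-vol
  # teardown
  rm /tmp/local-vol && umount /tmp/local-vol-backend && rm -r /tmp/local-vol-backend

The local PersistentVolume then points at the symlink path (/tmp/local-vol in this sketch).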
Jan 11 19:42:07.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:10.798: INFO: namespace persistent-local-volumes-test-657 deletion completed in 9.56259598s • [SLOW TEST:26.086 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:07.806: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-212 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with secret that has name projected-secret-test-ed9e272e-2e54-4e10-a9e0-48bd6f7c718a STEP: Creating a pod to test consume secrets Jan 11 19:42:08.633: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01221051-600a-4acc-bc2b-bc369d447265" in namespace "projected-212" to be "success or failure" Jan 11 19:42:08.722: INFO: Pod "pod-projected-secrets-01221051-600a-4acc-bc2b-bc369d447265": Phase="Pending", Reason="", readiness=false. Elapsed: 89.858801ms Jan 11 19:42:10.813: INFO: Pod "pod-projected-secrets-01221051-600a-4acc-bc2b-bc369d447265": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180045971s STEP: Saw pod success Jan 11 19:42:10.813: INFO: Pod "pod-projected-secrets-01221051-600a-4acc-bc2b-bc369d447265" satisfied condition "success or failure" Jan 11 19:42:10.902: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-secrets-01221051-600a-4acc-bc2b-bc369d447265 container projected-secret-volume-test: STEP: delete the pod Jan 11 19:42:11.092: INFO: Waiting for pod pod-projected-secrets-01221051-600a-4acc-bc2b-bc369d447265 to disappear Jan 11 19:42:11.181: INFO: Pod pod-projected-secrets-01221051-600a-4acc-bc2b-bc369d447265 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:11.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-212" for this suite. 
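The projected-secret case verifies that the secret file lands in the pod with the requested mode and group. Assuming a secret and a pod that projects it at /etc/projected (names and mount path are placeholders, not from this run), the same checks can be made by hand:

  kubectl create secret generic my-secret --from-literal=data-1=value-1 --namespace=my-ns
  kubectl exec my-pod --namespace=my-ns -- stat -c '%a %g' /etc/projected/data-1

The printed mode should match defaultMode from the projected volume source, and the group should match the pod's fsGroup.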
Jan 11 19:42:19.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:22.864: INFO: namespace projected-212 deletion completed in 11.591886433s • [SLOW TEST:15.059 seconds] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:10.615: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2366 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Starting the proxy Jan 11 19:42:11.251: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config proxy --unix-socket=/tmp/kubectl-proxy-unix212783694/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:11.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2366" for this suite. 
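The proxy test only confirms that kubectl proxy can listen on a Unix domain socket instead of a TCP port and that /api/ is reachable through it. A hand-run equivalent with a placeholder socket path:

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

The curl call should return the available versions of the core API group.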
Jan 11 19:42:19.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:22.966: INFO: namespace kubectl-2366 deletion completed in 11.568182016s • [SLOW TEST:12.352 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Proxy server /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1782 should support --unix-socket=/path [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:41:54.209: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename port-forwarding STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-1368 STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474 STEP: Creating the target pod STEP: Running 'kubectl port-forward' Jan 11 19:42:01.120: INFO: starting port-forward command and streaming output Jan 11 19:42:01.120: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config port-forward --namespace=port-forwarding-1368 pfpod :80' Jan 11 19:42:01.120: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Reading data from the local port STEP: Waiting for the target pod to stop running Jan 11 19:42:03.748: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-1368" to be "container terminated" Jan 11 19:42:03.837: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 89.082055ms Jan 11 19:42:05.926: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.178699089s Jan 11 19:42:05.927: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs STEP: Closing the connection to the local port [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:06.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-1368" for this suite. 
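The port-forward case drives `kubectl port-forward` against the test pod, connects to the forwarded local port, writes data and disconnects. Reproducing the client side by hand, with a placeholder namespace and assuming the pod (pfpod) listens on port 80:

  kubectl port-forward --namespace=my-ns pfpod :80
  # kubectl prints the chosen local port, e.g. "Forwarding from 127.0.0.1:PORT -> 80"
  printf 'some data' | nc 127.0.0.1 PORT

The pod's logs are then inspected to confirm the forwarded connection and payload were seen, which is what the "Verifying logs" step above records.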
Jan 11 19:42:20.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:23.676: INFO: namespace port-forwarding-1368 deletion completed in 17.559988462s • [SLOW TEST:29.468 seconds] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on localhost /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463 that expects NO client request /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:473 should support a client that connects, sends DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:04.042: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1218 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:16.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1218" for this suite. Jan 11 19:42:22.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:25.996: INFO: namespace resourcequota-1218 deletion completed in 9.587028528s • [SLOW TEST:21.954 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:05.022: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-3108 STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0111 19:42:17.519540 8609 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:42:17.519: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:17.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3108" for this suite. 
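The garbage-collector case above hinges on ownerReferences: pods owned by both replication controllers must survive when only one owner is deleted. The ownership the collector evaluates can be inspected directly, with a placeholder pod name:

  kubectl get pod simpletest-pod --namespace=my-ns -o jsonpath='{.metadata.ownerReferences[*].name}'

A pod that lists both simpletest-rc-to-be-deleted and simpletest-rc-to-stay keeps the second, still-valid owner and is therefore not collected.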
Jan 11 19:42:23.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:27.192: INFO: namespace gc-3108 deletion completed in 9.581555991s • [SLOW TEST:22.170 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:22.985: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pv-protection STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-protection-4661 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Jan 11 19:42:23.949: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Jan 11 19:42:24.127: INFO: Waiting up to 30s for PersistentVolume hostpath-f5jzp to have phase Available Jan 11 19:42:24.216: INFO: PersistentVolume hostpath-f5jzp found and phase=Available (89.034004ms) STEP: Checking that PV Protection finalizer is set [It] Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 STEP: Deleting the PV Jan 11 19:42:24.396: INFO: Waiting up to 3m0s for PersistentVolume hostpath-f5jzp to get deleted Jan 11 19:42:24.485: INFO: PersistentVolume hostpath-f5jzp was removed [AfterEach] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:24.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-4661" for this suite. Jan 11 19:42:30.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:34.151: INFO: namespace pv-protection-4661 deletion completed in 9.575360276s [AfterEach] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Jan 11 19:42:34.151: INFO: AfterEach: Cleaning up test resources. 
Jan 11 19:42:34.151: INFO: pvc is nil Jan 11 19:42:34.151: INFO: Deleting PersistentVolume "hostpath-f5jzp" • [SLOW TEST:11.255 seconds] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 ------------------------------ SSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:22.868: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename init-container STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-5108 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating the pod Jan 11 19:42:23.852: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:26.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5108" for this suite. 
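For the RestartNever init-container case, the assertion is that the failing init container leaves the pod in phase Failed without the app containers ever starting. The relevant status fields can be read with placeholder names:

  kubectl get pod init-fail-pod --namespace=my-ns -o jsonpath='{.status.phase}'
  kubectl get pod init-fail-pod --namespace=my-ns -o jsonpath='{.status.initContainerStatuses[0].state.terminated.reason}'
  kubectl get pod init-fail-pod --namespace=my-ns -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'

Expected: phase Failed, a terminated init container, and app containers that never left the waiting state.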
Jan 11 19:42:32.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:35.934: INFO: namespace init-container-5108 deletion completed in 9.587139873s • [SLOW TEST:13.067 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:27.195: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-4208 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to unmount after the subpath directory is deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:425 Jan 11 19:42:28.051: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:42:28.141: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpath-lj6v Jan 11 19:42:30.414: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-4208 pod-subpath-test-hostpath-lj6v --container test-container-volume-hostpath-lj6v -- /bin/sh -c rm -r /test-volume/provisioning-4208' Jan 11 19:42:31.761: INFO: stderr: "" Jan 11 19:42:31.761: INFO: stdout: "" STEP: Deleting pod pod-subpath-test-hostpath-lj6v Jan 11 19:42:31.761: INFO: Deleting pod "pod-subpath-test-hostpath-lj6v" in namespace "provisioning-4208" Jan 11 19:42:31.852: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-lj6v" to be fully deleted STEP: Deleting pod Jan 11 19:42:44.032: INFO: Deleting pod "pod-subpath-test-hostpath-lj6v" in namespace "provisioning-4208" Jan 11 19:42:44.122: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:44.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-4208" for this suite. 
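The hostPath subPath case deletes the sub-directory backing the mount from inside the pod (the `rm -r /test-volume/provisioning-4208` exec above) and then checks that the pod can still be deleted cleanly, i.e. that the kubelet can unmount the now-missing subpath. With placeholder pod, container and directory names, the two steps are:

  kubectl exec pod-subpath-test --namespace=my-ns --container volume-container -- rm -r /test-volume/SUBPATH_DIR
  kubectl delete pod pod-subpath-test --namespace=my-ns

The delete must finish within the timeout, which is what "Wait up to 5m0s for pod ... to be fully deleted" records above.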
Jan 11 19:42:50.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:42:53.795: INFO: namespace provisioning-4208 deletion completed in 9.582895879s • [SLOW TEST:26.600 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should be able to unmount after the subpath directory is deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:425 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:35.936: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-7743 STEP: Waiting for a default service account to be provisioned in namespace [It] should support cascading deletion of custom resources /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:866 Jan 11 19:42:36.650: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:42:37.400: INFO: created owner resource "ownervxlb5" Jan 11 19:42:37.490: INFO: created dependent resource "dependent9jttx" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:42:52.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7743" for this suite. 
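Cascading deletion of custom resources works the same way as for built-in kinds: the dependent CR carries an ownerReference to the owner CR, so deleting the owner lets the garbage collector remove the dependent. With the CRD kinds left as placeholders (the test generates them randomly):

  kubectl delete OWNER_KIND ownervxlb5
  kubectl get DEPENDENT_KIND dependent9jttx   # expected to return NotFound once collection finishes

The orphan-deletion variant later in this run checks the opposite: with the orphan propagation policy, the dependent must still exist after the owner is gone.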
Jan 11 19:42:59.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:43:02.667: INFO: namespace gc-7743 deletion completed in 9.634070527s • [SLOW TEST:26.731 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should support cascading deletion of custom resources /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:866 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:43:02.693: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-3441 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-e175e3e1-efc0-4e99-bda4-4034d9c92c8c" Jan 11 19:43:05.807: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3441 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e175e3e1-efc0-4e99-bda4-4034d9c92c8c && dd if=/dev/zero of=/tmp/local-volume-test-e175e3e1-efc0-4e99-bda4-4034d9c92c8c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e175e3e1-efc0-4e99-bda4-4034d9c92c8c/file' Jan 11 19:43:07.144: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0191457 s, 1.1 GB/s\n" Jan 11 19:43:07.144: INFO: stdout: "" Jan 11 19:43:07.145: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3441 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e175e3e1-efc0-4e99-bda4-4034d9c92c8c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:43:08.496: INFO: stderr: "" Jan 11 19:43:08.496: INFO: stdout: "/dev/loop0\n" STEP: Creating local PVCs and PVs Jan 11 19:43:08.496: INFO: Creating a PV followed by a PVC Jan 11 19:43:08.676: INFO: Waiting for PV local-pvbn8c8 to bind to PVC pvc-jcqn8 Jan 11 19:43:08.676: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-jcqn8] to have 
phase Bound Jan 11 19:43:08.765: INFO: PersistentVolumeClaim pvc-jcqn8 found and phase=Bound (89.286494ms) Jan 11 19:43:08.765: INFO: Waiting up to 3m0s for PersistentVolume local-pvbn8c8 to have phase Bound Jan 11 19:43:08.854: INFO: PersistentVolume local-pvbn8c8 found and phase=Bound (89.417362ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:43:11.483: INFO: pod "security-context-1585fbe5-8a7c-4d3d-99a9-d631684e9ccb" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:43:11.483: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3441 security-context-1585fbe5-8a7c-4d3d-99a9-d631684e9ccb -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:43:12.779: INFO: stderr: "" Jan 11 19:43:12.779: INFO: stdout: "" Jan 11 19:43:12.779: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jan 11 19:43:12.779: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3441 security-context-1585fbe5-8a7c-4d3d-99a9-d631684e9ccb -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:43:14.070: INFO: stderr: "" Jan 11 19:43:14.070: INFO: stdout: "test-file-content\n" Jan 11 19:43:14.070: INFO: podRWCmdExec out: "test-file-content\n" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-1585fbe5-8a7c-4d3d-99a9-d631684e9ccb in namespace persistent-local-volumes-test-3441 [AfterEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:43:14.160: INFO: Deleting PersistentVolumeClaim "pvc-jcqn8" Jan 11 19:43:14.251: INFO: Deleting PersistentVolume "local-pvbn8c8" Jan 11 19:43:14.342: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3441 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e175e3e1-efc0-4e99-bda4-4034d9c92c8c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:43:15.701: INFO: stderr: "" Jan 11 19:43:15.701: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-e175e3e1-efc0-4e99-bda4-4034d9c92c8c/file Jan 11 19:43:15.701: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3441 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 19:43:17.006: INFO: stderr: "" Jan 11 19:43:17.006: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-e175e3e1-efc0-4e99-bda4-4034d9c92c8c Jan 11 19:43:17.006: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3441 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e175e3e1-efc0-4e99-bda4-4034d9c92c8c' Jan 11 19:43:18.325: INFO: stderr: "" Jan 11 19:43:18.325: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:43:18.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3441" for this suite. Jan 11 19:43:24.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:43:28.088: INFO: namespace persistent-local-volumes-test-3441 deletion completed in 9.580957105s • [SLOW TEST:25.396 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:26.004: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-94 STEP: Waiting for a default service account to be provisioned in namespace [It] should support orphan deletion of custom resources /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:969 Jan 11 19:42:26.650: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:42:27.400: INFO: created owner resource "ownervphv5" Jan 11 19:42:27.490: INFO: created dependent resource "dependentfmkvr" STEP: wait for the owner to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd 
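As an aside for readers reproducing the orphan-deletion behaviour exercised here: the key ingredient is a delete request on the owner object with propagationPolicy=Orphan, after which dependents keep their ownerReferences but are never garbage collected. A minimal sketch with hypothetical names (the spec above uses randomly named custom resources; "mycrs.example.com", "owner-sample" and "dependent-sample" below are placeholders):

# Delete the owner but orphan its dependents instead of cascading.
# kubectl v1.16 (the client used in this run) spells this --cascade=false;
# kubectl >= 1.20 spells the same thing --cascade=orphan.
kubectl delete mycrs.example.com owner-sample --cascade=false

# Equivalent REST call, setting the propagation policy explicitly:
kubectl proxy --port=8001 &
curl -X DELETE \
  localhost:8001/apis/example.com/v1/namespaces/default/mycrs/owner-sample \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'

# The dependent keeps its ownerReference and must still exist after the
# garbage collector has had time to act (the spec above waits 30 seconds).
kubectl get mycrs.example.com dependent-sample -o jsonpath='{.metadata.ownerReferences}'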
[AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:43:23.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-94" for this suite. Jan 11 19:43:29.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:43:32.713: INFO: namespace gc-94 deletion completed in 9.589650997s • [SLOW TEST:66.709 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should support orphan deletion of custom resources /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:969 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:53.805: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-6381 STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:449 STEP: deploying csi mock driver Jan 11 19:42:54.646: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6381/csi-attacher Jan 11 19:42:54.736: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6381 Jan 11 19:42:54.736: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6381 Jan 11 19:42:54.826: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6381 Jan 11 19:42:54.916: INFO: creating *v1.Role: csi-mock-volumes-6381/external-attacher-cfg-csi-mock-volumes-6381 Jan 11 19:42:55.006: INFO: creating *v1.RoleBinding: csi-mock-volumes-6381/csi-attacher-role-cfg Jan 11 19:42:55.095: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6381/csi-provisioner Jan 11 19:42:55.186: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6381 Jan 11 19:42:55.186: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6381 Jan 11 19:42:55.276: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6381 Jan 11 19:42:55.366: INFO: creating *v1.Role: csi-mock-volumes-6381/external-provisioner-cfg-csi-mock-volumes-6381 Jan 11 19:42:55.456: INFO: creating *v1.RoleBinding: csi-mock-volumes-6381/csi-provisioner-role-cfg Jan 11 19:42:55.546: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6381/csi-resizer Jan 11 19:42:55.636: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6381 Jan 11 19:42:55.636: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6381 Jan 11 19:42:55.726: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6381 Jan 11 19:42:55.815: INFO: creating *v1.Role: csi-mock-volumes-6381/external-resizer-cfg-csi-mock-volumes-6381 Jan 11 
19:42:55.906: INFO: creating *v1.RoleBinding: csi-mock-volumes-6381/csi-resizer-role-cfg Jan 11 19:42:55.995: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6381/csi-mock Jan 11 19:42:56.086: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6381 Jan 11 19:42:56.176: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6381 Jan 11 19:42:56.266: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6381 Jan 11 19:42:56.355: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6381 Jan 11 19:42:56.445: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6381 Jan 11 19:42:56.535: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6381 Jan 11 19:42:56.626: INFO: creating *v1.StatefulSet: csi-mock-volumes-6381/csi-mockplugin Jan 11 19:42:56.716: INFO: creating *v1.StatefulSet: csi-mock-volumes-6381/csi-mockplugin-attacher Jan 11 19:42:56.806: INFO: creating *v1.StatefulSet: csi-mock-volumes-6381/csi-mockplugin-resizer STEP: Creating pod Jan 11 19:42:57.076: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:42:57.167: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-m8dmz] to have phase Bound Jan 11 19:42:57.257: INFO: PersistentVolumeClaim pvc-m8dmz found but phase is Pending instead of Bound. Jan 11 19:42:59.347: INFO: PersistentVolumeClaim pvc-m8dmz found but phase is Pending instead of Bound. Jan 11 19:43:01.437: INFO: PersistentVolumeClaim pvc-m8dmz found and phase=Bound (4.269567284s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-wkvl6 Jan 11 19:43:06.254: INFO: Deleting pod "pvc-volume-tester-wkvl6" in namespace "csi-mock-volumes-6381" Jan 11 19:43:06.345: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wkvl6" to be fully deleted STEP: Deleting claim pvc-m8dmz Jan 11 19:43:14.705: INFO: Waiting up to 2m0s for PersistentVolume pvc-bff16a78-86a3-4438-a1ed-8118c918d275 to get deleted Jan 11 19:43:14.795: INFO: PersistentVolume pvc-bff16a78-86a3-4438-a1ed-8118c918d275 was removed STEP: Deleting storageclass csi-mock-volumes-6381-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 19:43:14.886: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6381/csi-attacher Jan 11 19:43:14.978: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6381 Jan 11 19:43:15.068: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6381 Jan 11 19:43:15.160: INFO: deleting *v1.Role: csi-mock-volumes-6381/external-attacher-cfg-csi-mock-volumes-6381 Jan 11 19:43:15.251: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6381/csi-attacher-role-cfg Jan 11 19:43:15.341: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6381/csi-provisioner Jan 11 19:43:15.433: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6381 Jan 11 19:43:15.524: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6381 Jan 11 19:43:15.615: INFO: deleting *v1.Role: csi-mock-volumes-6381/external-provisioner-cfg-csi-mock-volumes-6381 Jan 11 19:43:15.707: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6381/csi-provisioner-role-cfg Jan 11 19:43:15.798: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6381/csi-resizer Jan 11 19:43:15.889: INFO: deleting 
*v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6381 Jan 11 19:43:15.980: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6381 Jan 11 19:43:16.071: INFO: deleting *v1.Role: csi-mock-volumes-6381/external-resizer-cfg-csi-mock-volumes-6381 Jan 11 19:43:16.163: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6381/csi-resizer-role-cfg Jan 11 19:43:16.254: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6381/csi-mock Jan 11 19:43:16.345: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6381 Jan 11 19:43:16.436: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6381 Jan 11 19:43:16.528: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6381 Jan 11 19:43:16.619: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6381 Jan 11 19:43:16.712: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6381 Jan 11 19:43:16.803: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6381 Jan 11 19:43:16.895: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6381/csi-mockplugin Jan 11 19:43:16.986: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6381/csi-mockplugin-attacher Jan 11 19:43:17.078: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6381/csi-mockplugin-resizer [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:43:17.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-6381" for this suite. Jan 11 19:43:29.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:43:32.847: INFO: namespace csi-mock-volumes-6381 deletion completed in 15.587370485s • [SLOW TEST:39.043 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:420 should expand volume without restarting pod if nodeExpansion=off /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:449 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:43:28.090: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-9803 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
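Before the spec body below, a note on what "poststart exec hook" amounts to: the pod under test declares a lifecycle.postStart exec handler that runs right after its container is created, and the framework then verifies the handler's effect and deletes the pod. A hand-rolled sketch, not the exact manifest the framework generates (pod name, image and the echoed marker are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo                # hypothetical name
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs in the container right after it is created; if it fails,
          # the container is killed and restarted per its restart policy.
          command: ["sh", "-c", "echo poststart-ran > /tmp/poststart"]
EOF

# Verify the hook's side effect, then clean up.
kubectl exec poststart-demo -- cat /tmp/poststart
kubectl delete pod poststart-demo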
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 11 19:43:34.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 19:43:34.160: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 19:43:36.160: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 19:43:36.250: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 19:43:38.160: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 19:43:38.250: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 19:43:40.160: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 19:43:40.250: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 19:43:42.160: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 19:43:42.251: INFO: Pod pod-with-poststart-exec-hook still exists Jan 11 19:43:44.160: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 11 19:43:44.251: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:43:44.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9803" for this suite. Jan 11 19:43:56.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:43:59.926: INFO: namespace container-lifecycle-hook-9803 deletion completed in 15.584131171s • [SLOW TEST:31.836 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when create a pod with lifecycle hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:38:28.950: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9839 STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:499 STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Jan 11 19:43:30.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9839" for this suite. Jan 11 19:43:59.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:02.531: INFO: namespace projected-9839 deletion completed in 31.634443605s • [SLOW TEST:333.581 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:499 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:43:32.719: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename subpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-6443 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod pod-subpath-test-secret-7m4k STEP: Creating a pod to test atomic-volume-subpath Jan 11 19:43:33.634: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7m4k" in namespace "subpath-6443" to be "success or failure" Jan 11 19:43:33.724: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Pending", Reason="", readiness=false. Elapsed: 89.935698ms Jan 11 19:43:35.814: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. Elapsed: 2.179779376s Jan 11 19:43:37.904: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. Elapsed: 4.269649489s Jan 11 19:43:39.994: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. Elapsed: 6.359644973s Jan 11 19:43:42.085: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. Elapsed: 8.450258452s Jan 11 19:43:44.175: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. Elapsed: 10.540654262s Jan 11 19:43:46.265: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. Elapsed: 12.630998576s Jan 11 19:43:48.355: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. Elapsed: 14.720974907s Jan 11 19:43:50.445: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. Elapsed: 16.811171772s Jan 11 19:43:52.539: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. Elapsed: 18.904265218s Jan 11 19:43:54.628: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.9939465s Jan 11 19:43:56.719: INFO: Pod "pod-subpath-test-secret-7m4k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.084267867s STEP: Saw pod success Jan 11 19:43:56.719: INFO: Pod "pod-subpath-test-secret-7m4k" satisfied condition "success or failure" Jan 11 19:43:56.809: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-secret-7m4k container test-container-subpath-secret-7m4k: STEP: delete the pod Jan 11 19:43:57.026: INFO: Waiting for pod pod-subpath-test-secret-7m4k to disappear Jan 11 19:43:57.116: INFO: Pod pod-subpath-test-secret-7m4k no longer exists STEP: Deleting pod pod-subpath-test-secret-7m4k Jan 11 19:43:57.117: INFO: Deleting pod "pod-subpath-test-secret-7m4k" in namespace "subpath-6443" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:43:57.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6443" for this suite. Jan 11 19:44:03.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:06.889: INFO: namespace subpath-6443 deletion completed in 9.590604829s • [SLOW TEST:34.170 seconds] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:43:32.851: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-6240 STEP: Waiting for a default service account to be provisioned in namespace [It] should support readOnly directory specified in the volumeMount /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:347 STEP: deploying csi-hostpath driver Jan 11 19:43:33.678: INFO: creating *v1.ServiceAccount: provisioning-6240/csi-attacher Jan 11 19:43:33.768: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-6240 Jan 11 19:43:33.768: INFO: Define cluster role external-attacher-runner-provisioning-6240 Jan 11 19:43:33.858: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-6240 Jan 11 19:43:33.948: INFO: creating *v1.Role: provisioning-6240/external-attacher-cfg-provisioning-6240 Jan 11 19:43:34.038: INFO: creating *v1.RoleBinding: 
provisioning-6240/csi-attacher-role-cfg Jan 11 19:43:34.128: INFO: creating *v1.ServiceAccount: provisioning-6240/csi-provisioner Jan 11 19:43:34.219: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-6240 Jan 11 19:43:34.219: INFO: Define cluster role external-provisioner-runner-provisioning-6240 Jan 11 19:43:34.308: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-6240 Jan 11 19:43:34.398: INFO: creating *v1.Role: provisioning-6240/external-provisioner-cfg-provisioning-6240 Jan 11 19:43:34.488: INFO: creating *v1.RoleBinding: provisioning-6240/csi-provisioner-role-cfg Jan 11 19:43:34.577: INFO: creating *v1.ServiceAccount: provisioning-6240/csi-snapshotter Jan 11 19:43:34.667: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-6240 Jan 11 19:43:34.667: INFO: Define cluster role external-snapshotter-runner-provisioning-6240 Jan 11 19:43:34.757: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-6240 Jan 11 19:43:34.847: INFO: creating *v1.Role: provisioning-6240/external-snapshotter-leaderelection-provisioning-6240 Jan 11 19:43:34.937: INFO: creating *v1.RoleBinding: provisioning-6240/external-snapshotter-leaderelection Jan 11 19:43:35.027: INFO: creating *v1.ServiceAccount: provisioning-6240/csi-resizer Jan 11 19:43:35.117: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-6240 Jan 11 19:43:35.117: INFO: Define cluster role external-resizer-runner-provisioning-6240 Jan 11 19:43:35.207: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-6240 Jan 11 19:43:35.296: INFO: creating *v1.Role: provisioning-6240/external-resizer-cfg-provisioning-6240 Jan 11 19:43:35.386: INFO: creating *v1.RoleBinding: provisioning-6240/csi-resizer-role-cfg Jan 11 19:43:35.477: INFO: creating *v1.Service: provisioning-6240/csi-hostpath-attacher Jan 11 19:43:35.574: INFO: creating *v1.StatefulSet: provisioning-6240/csi-hostpath-attacher Jan 11 19:43:35.664: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-6240 Jan 11 19:43:35.754: INFO: creating *v1.Service: provisioning-6240/csi-hostpathplugin Jan 11 19:43:35.848: INFO: creating *v1.StatefulSet: provisioning-6240/csi-hostpathplugin Jan 11 19:43:35.939: INFO: creating *v1.Service: provisioning-6240/csi-hostpath-provisioner Jan 11 19:43:36.033: INFO: creating *v1.StatefulSet: provisioning-6240/csi-hostpath-provisioner Jan 11 19:43:36.123: INFO: creating *v1.Service: provisioning-6240/csi-hostpath-resizer Jan 11 19:43:36.217: INFO: creating *v1.StatefulSet: provisioning-6240/csi-hostpath-resizer Jan 11 19:43:36.307: INFO: creating *v1.Service: provisioning-6240/csi-snapshotter Jan 11 19:43:36.401: INFO: creating *v1.StatefulSet: provisioning-6240/csi-snapshotter Jan 11 19:43:36.491: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-6240 Jan 11 19:43:36.581: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:43:36.581: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-6240-csi-hostpath-provisioning-6240-sc9ht95 STEP: creating a claim Jan 11 19:43:36.670: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:43:36.761: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathkv2qt] to have phase Bound Jan 11 19:43:36.851: INFO: PersistentVolumeClaim csi-hostpathkv2qt found but phase is Pending instead of Bound. 
Jan 11 19:43:38.941: INFO: PersistentVolumeClaim csi-hostpathkv2qt found and phase=Bound (2.179170331s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-2jkm STEP: Creating a pod to test subpath Jan 11 19:43:39.212: INFO: Waiting up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm" in namespace "provisioning-6240" to be "success or failure" Jan 11 19:43:39.301: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm": Phase="Pending", Reason="", readiness=false. Elapsed: 89.333618ms Jan 11 19:43:41.391: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179632049s Jan 11 19:43:43.482: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270198821s Jan 11 19:43:45.572: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360446962s Jan 11 19:43:47.663: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.451261513s Jan 11 19:43:49.753: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.541446712s Jan 11 19:43:51.843: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.631507865s STEP: Saw pod success Jan 11 19:43:51.843: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm" satisfied condition "success or failure" Jan 11 19:43:51.933: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-csi-hostpath-dynamicpv-2jkm container test-container-subpath-csi-hostpath-dynamicpv-2jkm: STEP: delete the pod Jan 11 19:43:52.124: INFO: Waiting for pod pod-subpath-test-csi-hostpath-dynamicpv-2jkm to disappear Jan 11 19:43:52.214: INFO: Pod pod-subpath-test-csi-hostpath-dynamicpv-2jkm no longer exists STEP: Deleting pod pod-subpath-test-csi-hostpath-dynamicpv-2jkm Jan 11 19:43:52.214: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm" in namespace "provisioning-6240" STEP: Deleting pod Jan 11 19:43:52.303: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-2jkm" in namespace "provisioning-6240" STEP: Deleting pvc Jan 11 19:43:52.393: INFO: Deleting PersistentVolumeClaim "csi-hostpathkv2qt" Jan 11 19:43:52.484: INFO: Waiting up to 5m0s for PersistentVolume pvc-d20cce58-e817-4841-a0a5-59a6ad229ddf to get deleted Jan 11 19:43:52.574: INFO: PersistentVolume pvc-d20cce58-e817-4841-a0a5-59a6ad229ddf was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:43:52.665: INFO: deleting *v1.ServiceAccount: provisioning-6240/csi-attacher Jan 11 19:43:52.756: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-6240 Jan 11 19:43:52.848: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-6240 Jan 11 19:43:52.938: INFO: deleting *v1.Role: provisioning-6240/external-attacher-cfg-provisioning-6240 Jan 11 19:43:53.030: INFO: deleting *v1.RoleBinding: provisioning-6240/csi-attacher-role-cfg Jan 11 19:43:53.122: INFO: deleting *v1.ServiceAccount: provisioning-6240/csi-provisioner Jan 11 19:43:53.213: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-6240 Jan 11 19:43:53.304: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-6240 Jan 11 19:43:53.395: INFO: deleting *v1.Role: provisioning-6240/external-provisioner-cfg-provisioning-6240 Jan 11 19:43:53.486: INFO: deleting 
*v1.RoleBinding: provisioning-6240/csi-provisioner-role-cfg Jan 11 19:43:53.577: INFO: deleting *v1.ServiceAccount: provisioning-6240/csi-snapshotter Jan 11 19:43:53.668: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-6240 Jan 11 19:43:53.760: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-6240 Jan 11 19:43:53.851: INFO: deleting *v1.Role: provisioning-6240/external-snapshotter-leaderelection-provisioning-6240 Jan 11 19:43:53.945: INFO: deleting *v1.RoleBinding: provisioning-6240/external-snapshotter-leaderelection Jan 11 19:43:54.037: INFO: deleting *v1.ServiceAccount: provisioning-6240/csi-resizer Jan 11 19:43:54.128: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-6240 Jan 11 19:43:54.219: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-6240 Jan 11 19:43:54.311: INFO: deleting *v1.Role: provisioning-6240/external-resizer-cfg-provisioning-6240 Jan 11 19:43:54.402: INFO: deleting *v1.RoleBinding: provisioning-6240/csi-resizer-role-cfg Jan 11 19:43:54.493: INFO: deleting *v1.Service: provisioning-6240/csi-hostpath-attacher Jan 11 19:43:54.590: INFO: deleting *v1.StatefulSet: provisioning-6240/csi-hostpath-attacher Jan 11 19:43:54.681: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-6240 Jan 11 19:43:54.773: INFO: deleting *v1.Service: provisioning-6240/csi-hostpathplugin Jan 11 19:43:54.870: INFO: deleting *v1.StatefulSet: provisioning-6240/csi-hostpathplugin Jan 11 19:43:54.961: INFO: deleting *v1.Service: provisioning-6240/csi-hostpath-provisioner Jan 11 19:43:55.057: INFO: deleting *v1.StatefulSet: provisioning-6240/csi-hostpath-provisioner Jan 11 19:43:55.149: INFO: deleting *v1.Service: provisioning-6240/csi-hostpath-resizer Jan 11 19:43:55.244: INFO: deleting *v1.StatefulSet: provisioning-6240/csi-hostpath-resizer Jan 11 19:43:55.335: INFO: deleting *v1.Service: provisioning-6240/csi-snapshotter Jan 11 19:43:55.432: INFO: deleting *v1.StatefulSet: provisioning-6240/csi-snapshotter Jan 11 19:43:55.524: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-6240 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:43:55.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-6240" for this suite. 
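For reference, the readOnly-plus-subPath case this spec covers can be reproduced with an ordinary manifest: a dynamically provisioned claim, an init container that writes into the volume, and a main container that mounts only a sub-directory of it read-only. Everything below is a sketch; "csi-hostpath-sc" and the object names are assumptions, not the per-namespace StorageClass the framework created above:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: subpath-demo-pvc               # hypothetical
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc    # assumed to exist
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                   # hypothetical
spec:
  initContainers:
  - name: prep                         # seeds the volume with a sub-directory and a file
    image: busybox:1.29
    command: ["sh", "-c", "mkdir -p /vol/dir && echo hello > /vol/dir/file"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  containers:
  - name: reader                       # sees only the sub-directory, read-only
    image: busybox:1.29
    command: ["sh", "-c", "cat /mnt/dir/file && sleep 600"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/dir
      subPath: dir
      readOnly: true
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: subpath-demo-pvc
EOF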
Jan 11 19:44:07.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:11.282: INFO: namespace provisioning-6240 deletion completed in 15.575983882s • [SLOW TEST:38.432 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support readOnly directory specified in the volumeMount /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:347 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:43:59.928: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-821 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name projected-configmap-test-volume-0e843de3-870e-4c67-8ca5-03171e623ab2 STEP: Creating a pod to test consume configMaps Jan 11 19:44:00.748: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c8df9190-8d02-4bdb-85a8-9f5b2f4ef8f4" in namespace "projected-821" to be "success or failure" Jan 11 19:44:00.838: INFO: Pod "pod-projected-configmaps-c8df9190-8d02-4bdb-85a8-9f5b2f4ef8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 89.595347ms Jan 11 19:44:02.928: INFO: Pod "pod-projected-configmaps-c8df9190-8d02-4bdb-85a8-9f5b2f4ef8f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179651102s STEP: Saw pod success Jan 11 19:44:02.928: INFO: Pod "pod-projected-configmaps-c8df9190-8d02-4bdb-85a8-9f5b2f4ef8f4" satisfied condition "success or failure" Jan 11 19:44:03.018: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-configmaps-c8df9190-8d02-4bdb-85a8-9f5b2f4ef8f4 container projected-configmap-volume-test: STEP: delete the pod Jan 11 19:44:03.211: INFO: Waiting for pod pod-projected-configmaps-c8df9190-8d02-4bdb-85a8-9f5b2f4ef8f4 to disappear Jan 11 19:44:03.300: INFO: Pod pod-projected-configmaps-c8df9190-8d02-4bdb-85a8-9f5b2f4ef8f4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:44:03.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-821" for this suite. 
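The non-root projected-ConfigMap spec that just finished boils down to projecting a ConfigMap into a pod whose securityContext sets a non-root UID and reading the projected file back. A minimal stand-alone sketch (ConfigMap name, pod name and UID 1000 are illustrative choices, not what the framework generated):

kubectl create configmap projected-demo-cm --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo                 # hypothetical
spec:
  securityContext:
    runAsUser: 1000                    # non-root, as the [LinuxOnly] case requires
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/data-1 && sleep 600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
EOF

kubectl exec projected-demo -- cat /etc/projected/data-1   # prints "value-1"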
Jan 11 19:44:09.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:12.970: INFO: namespace projected-821 deletion completed in 9.577647162s • [SLOW TEST:13.043 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:23.692: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename statefulset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6005 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-6005 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a new StatefulSet Jan 11 19:42:24.617: INFO: Found 1 stateful pods, waiting for 3 Jan 11 19:42:34.707: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:42:34.707: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:42:34.707: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:42:34.976: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-6005 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 19:42:36.266: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 19:42:36.266: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 19:42:36.266: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 11 19:42:46.816: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 11 19:42:47.086: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec 
--namespace=statefulset-6005 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 19:42:48.362: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 11 19:42:48.362: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 19:42:48.362: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 19:43:08.897: INFO: Waiting for StatefulSet statefulset-6005/ss2 to complete update Jan 11 19:43:08.897: INFO: Waiting for Pod statefulset-6005/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jan 11 19:43:19.077: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-6005 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 19:43:20.330: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 19:43:20.330: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 19:43:20.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 19:43:20.699: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 11 19:43:20.967: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-6005 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 19:43:22.265: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 11 19:43:22.265: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 19:43:22.265: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 19:43:42.802: INFO: Waiting for StatefulSet statefulset-6005/ss2 to complete update Jan 11 19:43:42.802: INFO: Waiting for Pod statefulset-6005/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Jan 11 19:43:52.982: INFO: Deleting all statefulset in ns statefulset-6005 Jan 11 19:43:53.071: INFO: Scaling statefulset ss2 to 0 Jan 11 19:44:23.428: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 19:44:23.518: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:44:23.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6005" for this suite. 
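The rolling-update-and-rollback cycle driven programmatically above can also be exercised with plain kubectl against the same StatefulSet; the commands below are a sketch against this run's objects, and the container name "webserver" is an assumption to verify before use:

# Trigger a new revision by changing the pod template image
# (this mirrors the 2.4.38-alpine -> 2.4.39-alpine update in the log).
kubectl -n statefulset-6005 set image statefulset/ss2 \
  webserver=docker.io/library/httpd:2.4.39-alpine
kubectl -n statefulset-6005 rollout status statefulset/ss2

# Each template change produces a ControllerRevision; these back the rollback.
kubectl -n statefulset-6005 get controllerrevisions

# Roll back to the previous revision, as the spec does before the update finishes.
kubectl -n statefulset-6005 rollout undo statefulset/ss2
kubectl -n statefulset-6005 rollout status statefulset/ss2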
Jan 11 19:44:30.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:33.469: INFO: namespace statefulset-6005 deletion completed in 9.591075951s • [SLOW TEST:129.776 seconds] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:11.298: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5188 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-a1089299-ad72-4143-ae38-3567d702c477" Jan 11 19:44:14.402: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5188 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a1089299-ad72-4143-ae38-3567d702c477" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a1089299-ad72-4143-ae38-3567d702c477" "/tmp/local-volume-test-a1089299-ad72-4143-ae38-3567d702c477"' Jan 11 19:44:15.678: INFO: stderr: "" Jan 11 19:44:15.678: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:44:15.678: INFO: Creating a PV followed by a PVC Jan 11 19:44:15.858: INFO: Waiting for PV local-pvln7vq to bind to PVC pvc-ptvh5 Jan 11 19:44:15.858: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-ptvh5] to have phase Bound Jan 11 19:44:15.947: INFO: PersistentVolumeClaim pvc-ptvh5 found and phase=Bound (89.4922ms) Jan 11 19:44:15.947: INFO: Waiting up to 3m0s for PersistentVolume local-pvln7vq to have phase Bound Jan 11 19:44:16.038: INFO: PersistentVolume local-pvln7vq found and phase=Bound (90.364483ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: 
Creating pod1 STEP: Creating a pod Jan 11 19:44:18.667: INFO: pod "security-context-e77b8b9a-cd00-4c80-830f-50fa62d95ff9" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:44:18.668: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5188 security-context-e77b8b9a-cd00-4c80-830f-50fa62d95ff9 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:44:19.953: INFO: stderr: "" Jan 11 19:44:19.953: INFO: stdout: "" Jan 11 19:44:19.953: INFO: podRWCmdExec out: "" err: Jan 11 19:44:19.953: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5188 security-context-e77b8b9a-cd00-4c80-830f-50fa62d95ff9 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:44:21.276: INFO: stderr: "" Jan 11 19:44:21.276: INFO: stdout: "test-file-content\n" Jan 11 19:44:21.276: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-e77b8b9a-cd00-4c80-830f-50fa62d95ff9 in namespace persistent-local-volumes-test-5188 STEP: Creating pod2 STEP: Creating a pod Jan 11 19:44:23.817: INFO: pod "security-context-90c954b4-ef16-4cc8-b52d-8582dda9b51b" created on Node "ip-10-250-27-25.ec2.internal" STEP: Reading in pod2 Jan 11 19:44:23.817: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5188 security-context-90c954b4-ef16-4cc8-b52d-8582dda9b51b -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:44:25.082: INFO: stderr: "" Jan 11 19:44:25.082: INFO: stdout: "test-file-content\n" Jan 11 19:44:25.082: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod2 STEP: Deleting pod security-context-90c954b4-ef16-4cc8-b52d-8582dda9b51b in namespace persistent-local-volumes-test-5188 [AfterEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:44:25.173: INFO: Deleting PersistentVolumeClaim "pvc-ptvh5" Jan 11 19:44:25.264: INFO: Deleting PersistentVolume "local-pvln7vq" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-a1089299-ad72-4143-ae38-3567d702c477" Jan 11 19:44:25.355: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5188 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a1089299-ad72-4143-ae38-3567d702c477"' Jan 11 19:44:26.621: INFO: stderr: "" Jan 11 19:44:26.621: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:44:26.622: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5188 
hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a1089299-ad72-4143-ae38-3567d702c477' Jan 11 19:44:27.942: INFO: stderr: "" Jan 11 19:44:27.942: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:44:28.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5188" for this suite. Jan 11 19:44:34.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:37.712: INFO: namespace persistent-local-volumes-test-5188 deletion completed in 9.587940391s • [SLOW TEST:26.414 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:12.972: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-7685 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e" Jan 11 19:44:16.066: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e && dd if=/dev/zero of=/tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e/file' Jan 11 19:44:17.359: INFO: stderr: 
"5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0176591 s, 1.2 GB/s\n" Jan 11 19:44:17.359: INFO: stdout: "" Jan 11 19:44:17.359: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:44:18.643: INFO: stderr: "" Jan 11 19:44:18.643: INFO: stdout: "/dev/loop0\n" Jan 11 19:44:18.643: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e && chmod o+rwx /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e' Jan 11 19:44:20.015: INFO: stderr: "mke2fs 1.44.5 (15-Dec-2018)\n" Jan 11 19:44:20.015: INFO: stdout: "Discarding device blocks: 1024/20480\b\b\b\b\b\b\b\b\b\b\b \b\b\b\b\b\b\b\b\b\b\bdone \nCreating filesystem with 20480 1k blocks and 5136 inodes\nFilesystem UUID: d9370340-2de0-429d-b856-6ba306cbc986\nSuperblock backups stored on blocks: \n\t8193\n\nAllocating group tables: 0/3\b\b\b \b\b\bdone \nWriting inode tables: 0/3\b\b\b \b\b\bdone \nCreating journal (1024 blocks): done\nWriting superblocks and filesystem accounting information: 0/3\b\b\b \b\b\bdone\n\n" STEP: Creating local PVCs and PVs Jan 11 19:44:20.015: INFO: Creating a PV followed by a PVC Jan 11 19:44:20.196: INFO: Waiting for PV local-pv2cws7 to bind to PVC pvc-lxdmm Jan 11 19:44:20.196: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-lxdmm] to have phase Bound Jan 11 19:44:20.286: INFO: PersistentVolumeClaim pvc-lxdmm found and phase=Bound (89.830515ms) Jan 11 19:44:20.286: INFO: Waiting up to 3m0s for PersistentVolume local-pv2cws7 to have phase Bound Jan 11 19:44:20.376: INFO: PersistentVolume local-pv2cws7 found and phase=Bound (90.016812ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:44:23.007: INFO: pod "security-context-5d0b2417-da9e-4e26-a186-588f812ea6e9" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:44:23.007: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 security-context-5d0b2417-da9e-4e26-a186-588f812ea6e9 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:44:24.354: INFO: stderr: "" Jan 11 19:44:24.354: INFO: stdout: "" Jan 11 19:44:24.354: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and write from pod1 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jan 11 19:44:24.354: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 security-context-5d0b2417-da9e-4e26-a186-588f812ea6e9 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:44:25.744: INFO: stderr: "" Jan 11 19:44:25.744: INFO: stdout: "test-file-content\n" Jan 11 19:44:25.744: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod1 Jan 11 19:44:25.744: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 security-context-5d0b2417-da9e-4e26-a186-588f812ea6e9 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e > /mnt/volume1/test-file' Jan 11 19:44:27.034: INFO: stderr: "" Jan 11 19:44:27.034: INFO: stdout: "" Jan 11 19:44:27.034: INFO: podRWCmdExec out: "" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-5d0b2417-da9e-4e26-a186-588f812ea6e9 in namespace persistent-local-volumes-test-7685 [AfterEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:44:27.125: INFO: Deleting PersistentVolumeClaim "pvc-lxdmm" Jan 11 19:44:27.215: INFO: Deleting PersistentVolume "local-pv2cws7" Jan 11 19:44:27.307: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e' Jan 11 19:44:28.574: INFO: stderr: "" Jan 11 19:44:28.574: INFO: stdout: "" Jan 11 19:44:28.574: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:44:29.853: INFO: stderr: "" Jan 11 19:44:29.853: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e/file Jan 11 19:44:29.853: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 hostexec-ip-10-250-27-25.ec2.internal -- nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 19:44:31.128: INFO: stderr: "" Jan 11 19:44:31.128: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e Jan 11 19:44:31.128: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7685 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c46e819c-ab16-4c9d-9a8e-ff906831014e' Jan 11 19:44:32.390: INFO: stderr: "" Jan 11 19:44:32.390: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:44:32.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7685" for this suite. Jan 11 19:44:38.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:42.160: INFO: namespace persistent-local-volumes-test-7685 deletion completed in 9.584247084s • [SLOW TEST:29.188 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:33.475: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2678 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should check is all data is printed [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:44:34.111: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config version' Jan 11 19:44:34.588: INFO: stderr: "" Jan 11 19:44:34.588: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.4\", 
GitCommit:\"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba\", GitTreeState:\"clean\", BuildDate:\"2019-12-11T12:47:40Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.4\", GitCommit:\"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba\", GitTreeState:\"clean\", BuildDate:\"2019-12-11T12:37:43Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:44:34.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2678" for this suite. Jan 11 19:44:40.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:44.237: INFO: namespace kubectl-2678 deletion completed in 9.558475433s • [SLOW TEST:10.763 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl version /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1380 should check is all data is printed [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:02.536: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename job STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-4422 STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4422, will wait for the garbage collector to delete the pods Jan 11 19:44:05.639: INFO: Deleting Job.batch foo took: 91.313191ms Jan 11 19:44:05.739: INFO: Terminating Job.batch foo pods took: 100.382249ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:44:43.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4422" for this suite. 
Jan 11 19:44:52.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:55.613: INFO: namespace job-4422 deletion completed in 11.59297985s • [SLOW TEST:53.077 seconds] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:44.245: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1119 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull non-existing image from gcr.io [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:363 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:44:47.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1119" for this suite. 
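The container-runtime spec above only checks that the kubelet surfaces an image-pull failure in the container status. A rough hand-run equivalent, assuming kubectl access; the pod name and the deliberately unresolvable image reference are illustrative:

$ kubectl run imgpull-demo --image=gcr.io/does-not-exist/invalid-image:latest --restart=Never
# The container never starts; its waiting reason cycles between ErrImagePull
# and ImagePullBackOff while the kubelet retries the pull.
$ kubectl get pod imgpull-demo -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
$ kubectl delete pod imgpull-demo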
Jan 11 19:44:53.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:44:57.078: INFO: namespace container-runtime-1119 deletion completed in 9.563124221s • [SLOW TEST:12.833 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 when running a container with a new image /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:252 should not be able to pull non-existing image from gcr.io [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:363 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:37.716: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2002 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303" Jan 11 19:44:40.905: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2002 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303" "/tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303"' Jan 11 19:44:42.214: INFO: stderr: "" Jan 11 19:44:42.214: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:44:42.214: INFO: Creating a PV followed by a PVC Jan 11 19:44:42.394: INFO: Waiting for PV local-pv64llj to bind to PVC pvc-wvhl2 Jan 11 19:44:42.394: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-wvhl2] to have phase Bound Jan 11 19:44:42.483: INFO: PersistentVolumeClaim pvc-wvhl2 found and phase=Bound (89.426437ms) Jan 11 19:44:42.483: INFO: Waiting up to 3m0s for PersistentVolume local-pv64llj to have phase Bound Jan 11 19:44:42.573: INFO: PersistentVolume local-pv64llj found and phase=Bound (89.593137ms) [It] should be able to write from pod1 and read from pod2 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jan 11 19:44:45.203: INFO: pod "security-context-5d0ec471-88f7-4180-b83d-ff3208187c00" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:44:45.203: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2002 security-context-5d0ec471-88f7-4180-b83d-ff3208187c00 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:44:46.497: INFO: stderr: "" Jan 11 19:44:46.497: INFO: stdout: "" Jan 11 19:44:46.497: INFO: podRWCmdExec out: "" err: Jan 11 19:44:46.497: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2002 security-context-5d0ec471-88f7-4180-b83d-ff3208187c00 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:44:47.786: INFO: stderr: "" Jan 11 19:44:47.786: INFO: stdout: "test-file-content\n" Jan 11 19:44:47.786: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jan 11 19:44:50.237: INFO: pod "security-context-3e7870c1-8587-4140-b1f3-51abef3cee81" created on Node "ip-10-250-27-25.ec2.internal" Jan 11 19:44:50.237: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2002 security-context-3e7870c1-8587-4140-b1f3-51abef3cee81 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:44:51.506: INFO: stderr: "" Jan 11 19:44:51.506: INFO: stdout: "test-file-content\n" Jan 11 19:44:51.506: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod2 Jan 11 19:44:51.506: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2002 security-context-3e7870c1-8587-4140-b1f3-51abef3cee81 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303 > /mnt/volume1/test-file' Jan 11 19:44:52.832: INFO: stderr: "" Jan 11 19:44:52.832: INFO: stdout: "" Jan 11 19:44:52.832: INFO: podRWCmdExec out: "" err: STEP: Reading in pod1 Jan 11 19:44:52.832: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2002 security-context-5d0ec471-88f7-4180-b83d-ff3208187c00 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:44:54.109: INFO: stderr: "" Jan 11 19:44:54.109: INFO: stdout: "/tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303\n" Jan 11 19:44:54.109: INFO: podRWCmdExec out: "/tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-5d0ec471-88f7-4180-b83d-ff3208187c00 in namespace persistent-local-volumes-test-2002 STEP: Deleting pod2 STEP: Deleting pod 
security-context-3e7870c1-8587-4140-b1f3-51abef3cee81 in namespace persistent-local-volumes-test-2002 [AfterEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:44:54.291: INFO: Deleting PersistentVolumeClaim "pvc-wvhl2" Jan 11 19:44:54.382: INFO: Deleting PersistentVolume "local-pv64llj" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303" Jan 11 19:44:54.472: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2002 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303"' Jan 11 19:44:55.797: INFO: stderr: "" Jan 11 19:44:55.797: INFO: stdout: "" STEP: Removing the test directory Jan 11 19:44:55.797: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2002 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-03e5e562-ede7-42b3-91ec-35c6cf55a303' Jan 11 19:44:57.130: INFO: stderr: "" Jan 11 19:44:57.130: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:44:57.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2002" for this suite. 
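Both local-volume flavours in this run are provisioned with plain shell executed on the node through the hostexec pod (nsenter into the host's mount namespace). Collapsed into one sketch, using a placeholder directory instead of the generated /tmp/local-volume-test-* paths, the node-side steps amount to:

$ DIR=/tmp/local-volume-demo        # placeholder; the suite generates a random path
# tmpfs variant, as used by the "[Volume type: tmpfs]" specs:
$ mkdir -p "$DIR" && mount -t tmpfs -o size=10m tmpfs-"$DIR" "$DIR"
# or the blockfswithformat variant: a file-backed loop device formatted as ext4:
$ mkdir -p "$DIR" && dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
$ losetup -f "$DIR/file" && LOOP=$(losetup -j "$DIR/file" | cut -d: -f1)
$ mkfs -t ext4 "$LOOP" && mount -t ext4 "$LOOP" "$DIR" && chmod o+rwx "$DIR"
# teardown, mirroring the AfterEach blocks:
$ umount "$DIR"; [ -n "${LOOP:-}" ] && losetup -d "$LOOP"; rm -r "$DIR"

The suite then creates a local PV pointing at that path, a matching PVC, and waits for both to reach phase Bound before starting the client pods, exactly as logged above.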
Jan 11 19:45:03.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:45:06.902: INFO: namespace persistent-local-volumes-test-2002 deletion completed in 9.5899735s • [SLOW TEST:29.185 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:55.614: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-251 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 11 19:44:59.388: INFO: Successfully updated pod "pod-update-activedeadlineseconds-140a289a-3e3c-43ba-a402-168d5ef8d0a9" Jan 11 19:44:59.388: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-140a289a-3e3c-43ba-a402-168d5ef8d0a9" in namespace "pods-251" to be "terminated due to deadline exceeded" Jan 11 19:44:59.481: INFO: Pod "pod-update-activedeadlineseconds-140a289a-3e3c-43ba-a402-168d5ef8d0a9": Phase="Running", Reason="", readiness=true. Elapsed: 92.742702ms Jan 11 19:45:01.571: INFO: Pod "pod-update-activedeadlineseconds-140a289a-3e3c-43ba-a402-168d5ef8d0a9": Phase="Running", Reason="", readiness=true. Elapsed: 2.183044759s Jan 11 19:45:03.661: INFO: Pod "pod-update-activedeadlineseconds-140a289a-3e3c-43ba-a402-168d5ef8d0a9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.272896802s Jan 11 19:45:03.661: INFO: Pod "pod-update-activedeadlineseconds-140a289a-3e3c-43ba-a402-168d5ef8d0a9" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:03.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-251" for this suite. 
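The pods-251 spec works because spec.activeDeadlineSeconds is one of the few pod fields that may be changed on a running pod; lowering it makes the kubelet fail the pod with reason DeadlineExceeded, which is exactly the Phase=Failed transition logged above. A hand-run sketch with illustrative names:

$ kubectl run deadline-demo --image=docker.io/library/busybox:1.29 --restart=Never -- sleep 3600
$ kubectl wait --for=condition=Ready pod/deadline-demo --timeout=2m
# Setting a short deadline on the live pod; a few seconds later the pod reports
# phase Failed with reason DeadlineExceeded.
$ kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
$ kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'
$ kubectl delete pod deadline-demo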
Jan 11 19:45:10.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:45:13.345: INFO: namespace pods-251 deletion completed in 9.593057767s • [SLOW TEST:17.731 seconds] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:42.186: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-2130 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should update nodePort: udp [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:194 STEP: Performing setup for networking test in namespace nettest-2130 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 19:44:42.987: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: Getting node addresses Jan 11 19:45:02.468: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 19:45:02.651: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:02.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-2130" for this suite. 
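The nodePort/udp spec skips itself because it needs at least two schedulable nodes, and the "-1" in the skip message appears to be the framework's uninitialized node count rather than a real total. A quick pre-check before running the multi-node networking specs:

$ kubectl get nodes
# Schedulable nodes only (cordoned nodes show SchedulingDisabled in STATUS):
$ kubectl get nodes --no-headers | grep -vc SchedulingDisabled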
Jan 11 19:45:15.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:45:18.331: INFO: namespace nettest-2130 deletion completed in 15.586912796s S [SKIPPING] [36.144 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should update nodePort: udp [Slow] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:194 Requires at least 2 nodes (not -1) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:57.109: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename deployment STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-5170 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 [It] deployment should support rollover [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:44:57.931: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 11 19:45:00.111: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 11 19:45:02.200: INFO: Creating deployment "test-rollover-deployment" Jan 11 19:45:02.387: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 11 19:45:02.476: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 11 19:45:02.655: INFO: Ensure that both replica sets have 1 created replica Jan 11 19:45:02.833: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 11 19:45:03.013: INFO: Updating deployment test-rollover-deployment Jan 11 19:45:03.013: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 11 19:45:03.102: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 11 19:45:03.281: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 11 19:45:03.460: INFO: all replica sets need to contain the pod-template-hash label Jan 11 19:45:03.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368703, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:45:05.639: INFO: all replica sets need to contain the pod-template-hash label Jan 11 19:45:05.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368703, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:45:07.639: INFO: all replica sets need to contain the pod-template-hash label Jan 11 19:45:07.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368703, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:45:09.639: INFO: all replica sets need to contain the pod-template-hash label Jan 11 19:45:09.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368703, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:45:11.638: INFO: all replica sets need to contain the pod-template-hash label Jan 11 19:45:11.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368703, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:45:13.638: INFO: all replica sets need to contain the pod-template-hash label Jan 11 19:45:13.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368703, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368702, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:45:15.639: INFO: Jan 11 19:45:15.639: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62 Jan 11 19:45:15.908: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5170 /apis/apps/v1/namespaces/deployment-5170/deployments/test-rollover-deployment 136215f2-1d73-4b60-bd2f-4410b53ccb03 53910 2 2020-01-11 19:45:02 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0024ca098 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-11 19:45:02 +0000 UTC,LastTransitionTime:2020-01-11 19:45:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7d7dc6548c" has successfully progressed.,LastUpdateTime:2020-01-11 19:45:13 +0000 UTC,LastTransitionTime:2020-01-11 19:45:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 11 19:45:15.997: INFO: New ReplicaSet "test-rollover-deployment-7d7dc6548c" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7d7dc6548c deployment-5170 /apis/apps/v1/namespaces/deployment-5170/replicasets/test-rollover-deployment-7d7dc6548c 51884d29-922a-469e-9932-23e0db597ca2 53903 2 2020-01-11 19:45:02 +0000 UTC map[name:rollover-pod pod-template-hash:7d7dc6548c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 136215f2-1d73-4b60-bd2f-4410b53ccb03 0xc002819017 0xc002819018}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7d7dc6548c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7d7dc6548c] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002819078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 19:45:15.998: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 11 19:45:15.998: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5170 /apis/apps/v1/namespaces/deployment-5170/replicasets/test-rollover-controller f3490af4-36ac-4786-9f21-e64ae21c8fd1 53909 2 2020-01-11 19:44:57 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 136215f2-1d73-4b60-bd2f-4410b53ccb03 0xc002818f47 0xc002818f48}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002818fa8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 19:45:15.998: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5170 /apis/apps/v1/namespaces/deployment-5170/replicasets/test-rollover-deployment-f6c94f66c d27786af-da5f-409b-a8d6-f76b7777479e 53822 2 2020-01-11 19:45:02 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 136215f2-1d73-4b60-bd2f-4410b53ccb03 0xc0028190e0 0xc0028190e1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002819158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 19:45:16.087: INFO: Pod "test-rollover-deployment-7d7dc6548c-b4zwg" is available: &Pod{ObjectMeta:{test-rollover-deployment-7d7dc6548c-b4zwg test-rollover-deployment-7d7dc6548c- deployment-5170 /api/v1/namespaces/deployment-5170/pods/test-rollover-deployment-7d7dc6548c-b4zwg 0c5ec4ac-db82-48c6-8fec-1f52a9c74fd2 53835 0 2020-01-11 19:45:02 +0000 UTC map[name:rollover-pod pod-template-hash:7d7dc6548c] map[cni.projectcalico.org/podIP:100.64.1.59/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rollover-deployment-7d7dc6548c 51884d29-922a-469e-9932-23e0db597ca2 0xc002819667 0xc002819668}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f72wd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f72wd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f72wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:45:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:45:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:45:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.59,StartTime:2020-01-11 19:45:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:45:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://0eae9e94c454f11b1a8de70896c1251dfe9b8e55e456f2ebb33f34a8e91d4b6d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:16.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5170" for this suite. Jan 11 19:45:22.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:45:25.743: INFO: namespace deployment-5170 deletion completed in 9.564988517s • [SLOW TEST:28.634 seconds] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:45:13.353: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7083 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-ec2af96f-e68f-432a-ae89-90c82c0885b6 STEP: Creating a pod to test consume secrets Jan 11 19:45:14.235: INFO: Waiting up to 5m0s for pod "pod-secrets-401c8dc5-5254-4c61-9917-7b7a3a0a5100" in namespace "secrets-7083" to be "success or failure" Jan 11 19:45:14.325: INFO: Pod "pod-secrets-401c8dc5-5254-4c61-9917-7b7a3a0a5100": Phase="Pending", Reason="", readiness=false. Elapsed: 90.069846ms Jan 11 19:45:16.415: INFO: Pod "pod-secrets-401c8dc5-5254-4c61-9917-7b7a3a0a5100": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180218165s STEP: Saw pod success Jan 11 19:45:16.415: INFO: Pod "pod-secrets-401c8dc5-5254-4c61-9917-7b7a3a0a5100" satisfied condition "success or failure" Jan 11 19:45:16.505: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-secrets-401c8dc5-5254-4c61-9917-7b7a3a0a5100 container secret-volume-test: STEP: delete the pod Jan 11 19:45:16.697: INFO: Waiting for pod pod-secrets-401c8dc5-5254-4c61-9917-7b7a3a0a5100 to disappear Jan 11 19:45:16.787: INFO: Pod pod-secrets-401c8dc5-5254-4c61-9917-7b7a3a0a5100 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:16.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7083" for this suite. Jan 11 19:45:23.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:45:26.466: INFO: namespace secrets-7083 deletion completed in 9.58703092s • [SLOW TEST:13.113 seconds] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:45:06.904: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-8733 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:41 [It] should be mountable /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47 STEP: starting configmap-client STEP: Checking that text file contents are perfect. 
Jan 11 19:45:10.008: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec configmap-client --namespace=volume-8733 -- cat /opt/0/firstfile' Jan 11 19:45:11.282: INFO: stderr: "" Jan 11 19:45:11.282: INFO: stdout: "this is the first file" Jan 11 19:45:11.282: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-8733 configmap-client -- /bin/sh -c test -d /opt/0' Jan 11 19:45:12.567: INFO: stderr: "" Jan 11 19:45:12.568: INFO: stdout: "" Jan 11 19:45:12.568: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-8733 configmap-client -- /bin/sh -c test -b /opt/0' Jan 11 19:45:13.848: INFO: rc: 1 Jan 11 19:45:13.848: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec configmap-client --namespace=volume-8733 -- cat /opt/1/secondfile' Jan 11 19:45:15.179: INFO: stderr: "" Jan 11 19:45:15.179: INFO: stdout: "this is the second file" Jan 11 19:45:15.179: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-8733 configmap-client -- /bin/sh -c test -d /opt/1' Jan 11 19:45:16.454: INFO: stderr: "" Jan 11 19:45:16.454: INFO: stdout: "" Jan 11 19:45:16.454: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-8733 configmap-client -- /bin/sh -c test -b /opt/1' Jan 11 19:45:17.722: INFO: rc: 1 STEP: cleaning the environment after configmap Jan 11 19:45:17.813: INFO: Deleting pod "configmap-client" in namespace "volume-8733" Jan 11 19:45:17.903: INFO: Wait up to 5m0s for pod "configmap-client" to be fully deleted [AfterEach] [sig-storage] Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:24.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-8733" for this suite. 
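The ConfigMap mount check above reduces to kubectl exec plus cat/test against the mounted paths, so it can be reproduced standalone once any pod mounts a ConfigMap. A sketch under assumed names (the "volume-demo" ConfigMap, the demo pod and the /etc/demo mount path are not from this run):

$ kubectl create configmap volume-demo --from-literal=firstfile='this is the first file'
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-client-demo
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo
  volumes:
  - name: cfg
    configMap:
      name: volume-demo
EOF
$ kubectl wait --for=condition=Ready pod/configmap-client-demo --timeout=2m
$ kubectl exec configmap-client-demo -- cat /etc/demo/firstfile    # expect: this is the first file
$ kubectl exec configmap-client-demo -- test -d /etc/demo && echo mounted as a directory
$ kubectl delete pod configmap-client-demo && kubectl delete configmap volume-demo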
Jan 11 19:45:30.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:45:33.760: INFO: namespace volume-8733 deletion completed in 9.58564441s • [SLOW TEST:26.857 seconds] [sig-storage] Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:46 should be mountable /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47 ------------------------------ SSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:45:25.752: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-7559 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211 Jan 11 19:45:26.479: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-cdf4c3d5-b743-4d1f-a530-1ee1568555cf" in namespace "security-context-test-7559" to be "success or failure" Jan 11 19:45:26.568: INFO: Pod "busybox-readonly-true-cdf4c3d5-b743-4d1f-a530-1ee1568555cf": Phase="Pending", Reason="", readiness=false. Elapsed: 88.691623ms Jan 11 19:45:28.657: INFO: Pod "busybox-readonly-true-cdf4c3d5-b743-4d1f-a530-1ee1568555cf": Phase="Failed", Reason="", readiness=false. Elapsed: 2.178306618s Jan 11 19:45:28.657: INFO: Pod "busybox-readonly-true-cdf4c3d5-b743-4d1f-a530-1ee1568555cf" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:28.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7559" for this suite. 
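The security-context spec above passes precisely because the pod fails: with readOnlyRootFilesystem=true the container's attempted write to its root filesystem is rejected, the container exits non-zero, and with restartPolicy Never the pod ends in Phase=Failed, which the test counts as the expected outcome. An illustrative standalone reproduction (names, image and command are assumptions):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "touch /created-on-rootfs"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# touch fails with "Read-only file system"; the pod then reports phase Failed.
$ kubectl get pod readonly-rootfs-demo -o jsonpath='{.status.phase}'
$ kubectl logs readonly-rootfs-demo
$ kubectl delete pod readonly-rootfs-demo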
Jan 11 19:45:35.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:45:38.328: INFO: namespace security-context-test-7559 deletion completed in 9.579858311s • [SLOW TEST:12.576 seconds] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:45:33.768: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-4337 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should support subPath [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:91 STEP: Creating a pod to test hostPath subPath Jan 11 19:45:34.645: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4337" to be "success or failure" Jan 11 19:45:34.735: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.967706ms Jan 11 19:45:36.825: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179662629s STEP: Saw pod success Jan 11 19:45:36.825: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 19:45:36.915: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-2: STEP: delete the pod Jan 11 19:45:37.107: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 19:45:37.197: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:37.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4337" for this suite. 
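The hostPath case exercises subPath: a container can mount only a subdirectory of the host path instead of the whole directory. A minimal sketch of that layout (hypothetical host path, pod name, and image; not the exact pod the suite builds):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Write through the subPath mount, then read the same file via the full mount.
    command: ["/bin/sh", "-c", "echo from-subpath > /sub-volume/file && cat /test-volume/sub/file"]
    volumeMounts:
    - name: host-vol
      mountPath: /test-volume          # the whole host directory
    - name: host-vol
      mountPath: /sub-volume
      subPath: sub                     # only the "sub" subdirectory of the host path
  volumes:
  - name: host-vol
    hostPath:
      path: /tmp/host-path-demo
      type: DirectoryOrCreate
EOF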
Jan 11 19:45:43.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:45:46.879: INFO: namespace hostpath-4337 deletion completed in 9.590777842s • [SLOW TEST:13.111 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should support subPath [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:91 ------------------------------ SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:45:26.473: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-417 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl run default /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1403 [It] should create an rc or deployment from an image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 11 19:45:27.113: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-417' Jan 11 19:45:27.635: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 11 19:45:27.635: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1409 Jan 11 19:45:27.725: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete deployment e2e-test-httpd-deployment --namespace=kubectl-417' Jan 11 19:45:28.242: INFO: stderr: "" Jan 11 19:45:28.242: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:28.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-417" for this suite. 
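The stderr line above is worth noting: in 1.16, `kubectl run --generator=deployment/apps.v1` still creates a Deployment but is already deprecated. A non-deprecated way to perform the same step with the same image (namespace taken from the log) would be:

# Create the Deployment explicitly instead of via the deprecated run generator.
kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine -n kubectl-417

# Verify a pod controlled by the deployment comes up, then clean up, mirroring the test flow.
kubectl get pods -n kubectl-417 -l app=e2e-test-httpd-deployment
kubectl delete deployment e2e-test-httpd-deployment -n kubectl-417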
Jan 11 19:45:56.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:45:59.928: INFO: namespace kubectl-417 deletion completed in 31.594738133s • [SLOW TEST:33.455 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run default /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 should create an rc or deployment from an image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:45:38.334: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-7579 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-1b91384e-759a-432d-ad4a-26d0b9b54f47" Jan 11 19:45:41.425: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7579 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1b91384e-759a-432d-ad4a-26d0b9b54f47 && dd if=/dev/zero of=/tmp/local-volume-test-1b91384e-759a-432d-ad4a-26d0b9b54f47/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-1b91384e-759a-432d-ad4a-26d0b9b54f47/file' Jan 11 19:45:42.706: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0169172 s, 1.2 GB/s\n" Jan 11 19:45:42.706: INFO: stdout: "" Jan 11 19:45:42.707: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7579 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1b91384e-759a-432d-ad4a-26d0b9b54f47/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:45:44.029: INFO: stderr: "" Jan 11 19:45:44.029: INFO: stdout: "/dev/loop0\n" STEP: Creating local PVCs and PVs Jan 11 19:45:44.029: INFO: Creating a PV followed by a PVC Jan 11 19:45:44.208: INFO: Waiting for PV local-pv5wr8p 
to bind to PVC pvc-bkqcx Jan 11 19:45:44.208: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-bkqcx] to have phase Bound Jan 11 19:45:44.297: INFO: PersistentVolumeClaim pvc-bkqcx found and phase=Bound (89.093694ms) Jan 11 19:45:44.297: INFO: Waiting up to 3m0s for PersistentVolume local-pv5wr8p to have phase Bound Jan 11 19:45:44.387: INFO: PersistentVolume local-pv5wr8p found and phase=Bound (89.330118ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:45:47.013: INFO: pod "security-context-32c59899-0cdc-466a-adb1-7ff50e25f0d7" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:45:47.013: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7579 security-context-32c59899-0cdc-466a-adb1-7ff50e25f0d7 -- /bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file' Jan 11 19:45:48.309: INFO: stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000038 seconds, 462.6KB/s\n" Jan 11 19:45:48.309: INFO: stdout: "\n" Jan 11 19:45:48.309: INFO: podRWCmdExec out: "\n" err: [It] should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jan 11 19:45:48.309: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7579 security-context-32c59899-0cdc-466a-adb1-7ff50e25f0d7 -- /bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1' Jan 11 19:45:49.574: INFO: stderr: "" Jan 11 19:45:49.574: INFO: stdout: "test-file-content..................................................................................." Jan 11 19:45:49.574: INFO: podRWCmdExec out: "test-file-content..................................................................................." 
err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-32c59899-0cdc-466a-adb1-7ff50e25f0d7 in namespace persistent-local-volumes-test-7579 [AfterEach] [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:45:49.664: INFO: Deleting PersistentVolumeClaim "pvc-bkqcx" Jan 11 19:45:49.755: INFO: Deleting PersistentVolume "local-pv5wr8p" Jan 11 19:45:49.845: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7579 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1b91384e-759a-432d-ad4a-26d0b9b54f47/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:45:51.164: INFO: stderr: "" Jan 11 19:45:51.164: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-1b91384e-759a-432d-ad4a-26d0b9b54f47/file Jan 11 19:45:51.164: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7579 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 19:45:52.450: INFO: stderr: "" Jan 11 19:45:52.451: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-1b91384e-759a-432d-ad4a-26d0b9b54f47 Jan 11 19:45:52.451: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7579 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1b91384e-759a-432d-ad4a-26d0b9b54f47' Jan 11 19:45:53.704: INFO: stderr: "" Jan 11 19:45:53.704: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:53.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7579" for this suite. 
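The [Volume type: block] variant is wired together entirely through the hostexec pod: a file-backed loop device is created on the node, exposed as a local PersistentVolume with volumeMode: Block, and the consuming pod writes to and reads from the raw device with dd/hexdump. A hand-run sketch of the same setup, with hypothetical names, sizes, and device path (the dd/losetup part runs on the node itself):

# On the node: back a loop device with a ~20 MiB file, as the nsenter commands above do.
mkdir -p /tmp/local-volume-demo
dd if=/dev/zero of=/tmp/local-volume-demo/file bs=4096 count=5120
losetup -f /tmp/local-volume-demo/file
losetup -j /tmp/local-volume-demo/file     # prints the /dev/loopN device that was allocated

# From the client: a local PV for that device plus a matching claim, both with volumeMode: Block.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-block-pv-demo
spec:
  capacity:
    storage: 20Mi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Block
  local:
    path: /dev/loop0                      # whichever device losetup allocated
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["ip-10-250-27-25.ec2.internal"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-block-pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  volumeMode: Block
  resources:
    requests:
      storage: 20Mi
EOF

# A consuming pod then attaches the claim as a raw device via containers[].volumeDevices[].devicePath
# (not volumeMounts), which is why the test writes with dd against /mnt/volume1 instead of a file path.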
Jan 11 19:46:06.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:46:09.445: INFO: namespace persistent-local-volumes-test-7579 deletion completed in 15.558896736s • [SLOW TEST:31.111 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:40:41.734: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-85 STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:411 STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:45:42.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-85" for this suite. 
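This [Slow] case spends roughly five minutes waiting precisely because the pod can never start: the projected volume references a Secret that does not exist and is marked non-optional, so volume setup fails and the containers never run. A minimal reproduction sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-missing-secret-demo
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-vol
      mountPath: /projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: secret-that-does-not-exist
          optional: false          # non-optional: the volume cannot be set up, so the pod never runs
EOF

# The pod stays in ContainerCreating; FailedMount events report the missing secret.
kubectl describe pod projected-missing-secret-demo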
Jan 11 19:46:11.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:46:14.591: INFO: namespace projected-85 deletion completed in 31.597005771s • [SLOW TEST:332.857 seconds] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:411 ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:46:14.593: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename disruption STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-2322 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52 [It] evictions: too few pods, replicaSet, percentage => should not allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 STEP: Waiting for the pdb to be processed STEP: locating a running pod [AfterEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:46:17.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2322" for this suite. 
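The "too few pods, percentage" case checks that the eviction API refuses to evict a pod when doing so would drop a ReplicaSet below a percentage-based PodDisruptionBudget. The suite posts Eviction objects directly; a rough hand-run equivalent on 1.16 (hypothetical names and numbers) uses kubectl drain, which goes through the same eviction API:

# A 3-replica workload protected by a PDB requiring 90% availability: evicting even one pod
# would violate the budget, so eviction requests are rejected (HTTP 429) rather than honored.
kubectl create deployment pdb-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl scale deployment pdb-demo --replicas=3
kubectl create poddisruptionbudget pdb-demo --selector=app=pdb-demo --min-available=90%

# drain respects the budget: it keeps retrying and reports the PDB instead of deleting protected pods.
kubectl drain <node-name> --ignore-daemonsets --delete-local-data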
Jan 11 19:46:30.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:46:33.489: INFO: namespace disruption-2322 deletion completed in 15.596452479s • [SLOW TEST:18.896 seconds] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: too few pods, replicaSet, percentage => should not allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 ------------------------------ SSSS ------------------------------ [BeforeEach] version v1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:46:09.474: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename proxy STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-5995 STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-hnjh4 in namespace proxy-5995 I0111 19:46:11.033037 8614 runners.go:184] Created replication controller with name: proxy-service-hnjh4, namespace: proxy-5995, replica count: 1 I0111 19:46:12.133578 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 19:46:13.133898 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 19:46:14.134205 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 19:46:15.134419 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 19:46:16.134680 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 19:46:17.134934 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 19:46:18.135173 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 19:46:19.135468 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 19:46:20.135769 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0111 19:46:21.136091 8614 runners.go:184] proxy-service-hnjh4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 19:46:21.225: INFO: 
setup took 10.37512553s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 11 19:46:21.328: INFO: (0) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 103.234993ms) Jan 11 19:46:21.329: INFO: (0) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 103.703434ms) Jan 11 19:46:21.329: INFO: (0) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 103.846636ms) Jan 11 19:46:21.329: INFO: (0) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 103.915095ms) Jan 11 19:46:21.332: INFO: (0) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 106.534866ms) Jan 11 19:46:21.332: INFO: (0) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 107.252518ms) Jan 11 19:46:21.337: INFO: (0) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 111.899812ms) Jan 11 19:46:21.339: INFO: (0) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: ... (200; 183.624335ms) Jan 11 19:46:21.411: INFO: (0) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 185.150694ms) Jan 11 19:46:21.503: INFO: (1) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 91.864032ms) Jan 11 19:46:21.504: INFO: (1) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.794796ms) Jan 11 19:46:21.504: INFO: (1) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.789607ms) Jan 11 19:46:21.504: INFO: (1) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.742326ms) Jan 11 19:46:21.504: INFO: (1) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 92.888401ms) Jan 11 19:46:21.504: INFO: (1) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 92.825117ms) Jan 11 19:46:21.504: INFO: (1) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 92.971945ms) Jan 11 19:46:21.504: INFO: (1) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 92.886176ms) Jan 11 19:46:21.505: INFO: (1) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test<... 
(200; 94.646992ms) Jan 11 19:46:21.505: INFO: (1) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 94.891517ms) Jan 11 19:46:21.505: INFO: (1) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 94.696116ms) Jan 11 19:46:21.507: INFO: (1) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 96.32646ms) Jan 11 19:46:21.509: INFO: (1) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 97.833867ms) Jan 11 19:46:21.509: INFO: (1) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 97.792644ms) Jan 11 19:46:21.600: INFO: (2) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 91.66245ms) Jan 11 19:46:21.601: INFO: (2) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.095458ms) Jan 11 19:46:21.601: INFO: (2) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.289661ms) Jan 11 19:46:21.601: INFO: (2) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.126692ms) Jan 11 19:46:21.601: INFO: (2) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 92.219193ms) Jan 11 19:46:21.601: INFO: (2) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 92.296969ms) Jan 11 19:46:21.601: INFO: (2) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: ... (200; 93.635204ms) Jan 11 19:46:21.603: INFO: (2) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 93.733466ms) Jan 11 19:46:21.604: INFO: (2) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 95.212958ms) Jan 11 19:46:21.604: INFO: (2) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 95.167467ms) Jan 11 19:46:21.606: INFO: (2) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 96.705256ms) Jan 11 19:46:21.606: INFO: (2) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 96.846469ms) Jan 11 19:46:21.606: INFO: (2) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 96.836407ms) Jan 11 19:46:21.698: INFO: (3) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 92.366341ms) Jan 11 19:46:21.698: INFO: (3) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.402601ms) Jan 11 19:46:21.698: INFO: (3) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 92.51481ms) Jan 11 19:46:21.698: INFO: (3) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.630876ms) Jan 11 19:46:21.698: INFO: (3) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test<... 
(200; 92.538909ms) Jan 11 19:46:21.698: INFO: (3) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.67474ms) Jan 11 19:46:21.700: INFO: (3) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 94.157963ms) Jan 11 19:46:21.700: INFO: (3) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 94.358499ms) Jan 11 19:46:21.700: INFO: (3) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 94.343196ms) Jan 11 19:46:21.700: INFO: (3) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 94.298572ms) Jan 11 19:46:21.702: INFO: (3) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 95.75176ms) Jan 11 19:46:21.703: INFO: (3) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 97.568727ms) Jan 11 19:46:21.703: INFO: (3) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 97.522056ms) Jan 11 19:46:21.705: INFO: (3) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 99.189565ms) Jan 11 19:46:21.799: INFO: (4) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 93.627774ms) Jan 11 19:46:21.799: INFO: (4) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 93.598971ms) Jan 11 19:46:21.799: INFO: (4) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 93.564139ms) Jan 11 19:46:21.799: INFO: (4) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 93.614193ms) Jan 11 19:46:21.799: INFO: (4) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 93.584076ms) Jan 11 19:46:21.799: INFO: (4) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 93.590071ms) Jan 11 19:46:21.799: INFO: (4) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test (200; 92.833066ms) Jan 11 19:46:21.911: INFO: (5) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test<... (200; 94.528379ms) Jan 11 19:46:21.913: INFO: (5) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 94.670769ms) Jan 11 19:46:21.915: INFO: (5) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 95.959759ms) Jan 11 19:46:21.915: INFO: (5) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 96.13403ms) Jan 11 19:46:21.915: INFO: (5) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 95.974473ms) Jan 11 19:46:21.915: INFO: (5) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 96.005875ms) Jan 11 19:46:22.007: INFO: (6) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 92.498219ms) Jan 11 19:46:22.007: INFO: (6) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... 
(200; 92.517252ms) Jan 11 19:46:22.007: INFO: (6) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.511697ms) Jan 11 19:46:22.007: INFO: (6) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.665941ms) Jan 11 19:46:22.007: INFO: (6) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.680441ms) Jan 11 19:46:22.009: INFO: (6) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 94.015041ms) Jan 11 19:46:22.009: INFO: (6) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 93.965505ms) Jan 11 19:46:22.009: INFO: (6) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 94.218954ms) Jan 11 19:46:22.009: INFO: (6) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 94.025475ms) Jan 11 19:46:22.009: INFO: (6) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test (200; 92.248724ms) Jan 11 19:46:22.209: INFO: (7) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 92.326244ms) Jan 11 19:46:22.209: INFO: (7) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test<... (200; 92.287362ms) Jan 11 19:46:22.213: INFO: (7) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 96.551392ms) Jan 11 19:46:22.213: INFO: (7) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 96.713209ms) Jan 11 19:46:22.213: INFO: (7) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 96.629296ms) Jan 11 19:46:22.213: INFO: (7) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 96.592752ms) Jan 11 19:46:22.214: INFO: (7) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 97.747553ms) Jan 11 19:46:22.214: INFO: (7) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 97.717757ms) Jan 11 19:46:22.216: INFO: (7) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 99.353498ms) Jan 11 19:46:22.217: INFO: (7) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 101.257986ms) Jan 11 19:46:22.311: INFO: (8) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 92.958962ms) Jan 11 19:46:22.311: INFO: (8) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 93.032103ms) Jan 11 19:46:22.311: INFO: (8) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 93.017154ms) Jan 11 19:46:22.311: INFO: (8) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: ... (200; 93.050537ms) Jan 11 19:46:22.311: INFO: (8) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... 
(200; 93.246698ms) Jan 11 19:46:22.313: INFO: (8) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 94.936968ms) Jan 11 19:46:22.313: INFO: (8) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 95.014331ms) Jan 11 19:46:22.313: INFO: (8) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 95.113723ms) Jan 11 19:46:22.313: INFO: (8) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 95.055681ms) Jan 11 19:46:22.314: INFO: (8) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 96.306785ms) Jan 11 19:46:22.314: INFO: (8) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 96.23098ms) Jan 11 19:46:22.316: INFO: (8) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 97.851867ms) Jan 11 19:46:22.317: INFO: (8) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 99.481498ms) Jan 11 19:46:22.409: INFO: (9) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 91.377312ms) Jan 11 19:46:22.410: INFO: (9) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.029708ms) Jan 11 19:46:22.410: INFO: (9) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 92.246908ms) Jan 11 19:46:22.410: INFO: (9) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 92.082989ms) Jan 11 19:46:22.410: INFO: (9) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.156781ms) Jan 11 19:46:22.410: INFO: (9) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.130185ms) Jan 11 19:46:22.410: INFO: (9) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 92.11374ms) Jan 11 19:46:22.410: INFO: (9) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: ... 
(200; 93.43818ms) Jan 11 19:46:22.411: INFO: (9) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 93.576089ms) Jan 11 19:46:22.413: INFO: (9) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 95.124103ms) Jan 11 19:46:22.413: INFO: (9) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 95.199842ms) Jan 11 19:46:22.413: INFO: (9) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 95.10286ms) Jan 11 19:46:22.413: INFO: (9) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 95.126518ms) Jan 11 19:46:22.414: INFO: (9) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 96.829211ms) Jan 11 19:46:22.414: INFO: (9) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 96.699336ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.348852ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.4026ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.437612ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 92.441649ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 92.420173ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.479508ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 92.451124ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: ... (200; 93.01869ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 92.99161ms) Jan 11 19:46:22.507: INFO: (10) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 92.935667ms) Jan 11 19:46:22.508: INFO: (10) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 94.173952ms) Jan 11 19:46:22.510: INFO: (10) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 95.540106ms) Jan 11 19:46:22.510: INFO: (10) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 95.612127ms) Jan 11 19:46:22.510: INFO: (10) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 95.620189ms) Jan 11 19:46:22.510: INFO: (10) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 95.618552ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 94.651175ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 94.776866ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... 
(200; 94.716606ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 94.942095ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 95.117047ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 95.043106ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 95.052702ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 95.093233ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 95.1583ms) Jan 11 19:46:22.605: INFO: (11) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test (200; 93.133095ms) Jan 11 19:46:22.702: INFO: (12) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 93.291569ms) Jan 11 19:46:22.702: INFO: (12) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 93.085879ms) Jan 11 19:46:22.702: INFO: (12) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 93.096739ms) Jan 11 19:46:22.702: INFO: (12) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 93.128243ms) Jan 11 19:46:22.702: INFO: (12) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 93.147329ms) Jan 11 19:46:22.702: INFO: (12) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: ... (200; 92.10494ms) Jan 11 19:46:22.801: INFO: (13) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.185019ms) Jan 11 19:46:22.801: INFO: (13) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 92.16404ms) Jan 11 19:46:22.801: INFO: (13) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.152538ms) Jan 11 19:46:22.801: INFO: (13) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.38019ms) Jan 11 19:46:22.801: INFO: (13) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... 
(200; 92.181084ms) Jan 11 19:46:22.803: INFO: (13) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test (200; 94.124289ms) Jan 11 19:46:22.803: INFO: (13) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 94.133706ms) Jan 11 19:46:22.804: INFO: (13) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 95.614989ms) Jan 11 19:46:22.804: INFO: (13) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 95.533023ms) Jan 11 19:46:22.804: INFO: (13) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 95.631358ms) Jan 11 19:46:22.806: INFO: (13) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 97.006602ms) Jan 11 19:46:22.897: INFO: (14) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 91.242751ms) Jan 11 19:46:22.899: INFO: (14) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.655135ms) Jan 11 19:46:22.899: INFO: (14) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 92.701807ms) Jan 11 19:46:22.899: INFO: (14) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 92.555203ms) Jan 11 19:46:22.899: INFO: (14) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 92.899876ms) Jan 11 19:46:22.899: INFO: (14) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 92.641843ms) Jan 11 19:46:22.899: INFO: (14) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test<... (200; 92.65803ms) Jan 11 19:46:22.899: INFO: (14) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 92.767172ms) Jan 11 19:46:22.899: INFO: (14) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 92.842711ms) Jan 11 19:46:22.899: INFO: (14) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 92.773456ms) Jan 11 19:46:22.901: INFO: (14) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 94.792367ms) Jan 11 19:46:22.901: INFO: (14) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... 
(200; 94.872045ms) Jan 11 19:46:22.901: INFO: (14) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 95.068081ms) Jan 11 19:46:22.903: INFO: (14) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 96.810789ms) Jan 11 19:46:22.903: INFO: (14) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 96.857496ms) Jan 11 19:46:22.996: INFO: (15) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.391279ms) Jan 11 19:46:22.996: INFO: (15) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.646408ms) Jan 11 19:46:22.996: INFO: (15) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.52372ms) Jan 11 19:46:22.996: INFO: (15) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 92.502858ms) Jan 11 19:46:22.996: INFO: (15) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 92.500074ms) Jan 11 19:46:22.996: INFO: (15) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 92.531374ms) Jan 11 19:46:22.996: INFO: (15) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 92.649724ms) Jan 11 19:46:22.996: INFO: (15) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test (200; 92.863256ms) Jan 11 19:46:22.997: INFO: (15) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 94.037744ms) Jan 11 19:46:22.997: INFO: (15) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 94.054631ms) Jan 11 19:46:22.997: INFO: (15) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 94.01911ms) Jan 11 19:46:22.997: INFO: (15) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 94.461839ms) Jan 11 19:46:22.999: INFO: (15) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 95.704999ms) Jan 11 19:46:22.999: INFO: (15) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 95.811498ms) Jan 11 19:46:22.999: INFO: (15) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 95.898305ms) Jan 11 19:46:23.092: INFO: (16) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 92.970611ms) Jan 11 19:46:23.092: INFO: (16) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 93.150938ms) Jan 11 19:46:23.092: INFO: (16) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 93.26456ms) Jan 11 19:46:23.092: INFO: (16) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 93.468126ms) Jan 11 19:46:23.092: INFO: (16) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 93.234532ms) Jan 11 19:46:23.092: INFO: (16) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 93.291008ms) Jan 11 19:46:23.092: INFO: (16) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... 
(200; 93.303229ms) Jan 11 19:46:23.092: INFO: (16) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 93.437646ms) Jan 11 19:46:23.094: INFO: (16) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test (200; 92.223706ms) Jan 11 19:46:23.192: INFO: (17) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 92.822949ms) Jan 11 19:46:23.192: INFO: (17) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 92.912225ms) Jan 11 19:46:23.192: INFO: (17) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: ... (200; 93.012084ms) Jan 11 19:46:23.192: INFO: (17) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 92.943592ms) Jan 11 19:46:23.193: INFO: (17) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 94.133263ms) Jan 11 19:46:23.193: INFO: (17) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 94.241889ms) Jan 11 19:46:23.193: INFO: (17) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 94.288535ms) Jan 11 19:46:23.193: INFO: (17) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 94.21744ms) Jan 11 19:46:23.194: INFO: (17) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 95.514665ms) Jan 11 19:46:23.196: INFO: (17) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 97.259527ms) Jan 11 19:46:23.196: INFO: (17) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 97.467811ms) Jan 11 19:46:23.198: INFO: (17) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 98.786764ms) Jan 11 19:46:23.292: INFO: (18) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 94.53867ms) Jan 11 19:46:23.292: INFO: (18) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 94.607962ms) Jan 11 19:46:23.292: INFO: (18) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 94.508639ms) Jan 11 19:46:23.292: INFO: (18) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 94.542265ms) Jan 11 19:46:23.292: INFO: (18) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 94.559405ms) Jan 11 19:46:23.292: INFO: (18) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:1080/proxy/: test<... (200; 94.660399ms) Jan 11 19:46:23.292: INFO: (18) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 94.618057ms) Jan 11 19:46:23.292: INFO: (18) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: ... 
(200; 94.694009ms) Jan 11 19:46:23.292: INFO: (18) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 94.664276ms) Jan 11 19:46:23.294: INFO: (18) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 96.573471ms) Jan 11 19:46:23.294: INFO: (18) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 96.500439ms) Jan 11 19:46:23.295: INFO: (18) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 96.797492ms) Jan 11 19:46:23.295: INFO: (18) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 96.933838ms) Jan 11 19:46:23.295: INFO: (18) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 97.195415ms) Jan 11 19:46:23.295: INFO: (18) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 97.195993ms) Jan 11 19:46:23.392: INFO: (19) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:1080/proxy/: ... (200; 96.652601ms) Jan 11 19:46:23.392: INFO: (19) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 96.51156ms) Jan 11 19:46:23.392: INFO: (19) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname1/proxy/: tls baz (200; 96.527128ms) Jan 11 19:46:23.392: INFO: (19) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:460/proxy/: tls baz (200; 96.540588ms) Jan 11 19:46:23.392: INFO: (19) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/: test<... (200; 96.54866ms) Jan 11 19:46:23.392: INFO: (19) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/: foo (200; 96.596812ms) Jan 11 19:46:23.392: INFO: (19) /api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj/proxy/: test (200; 96.556423ms) Jan 11 19:46:23.399: INFO: (19) /api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:462/proxy/: tls qux (200; 103.304373ms) Jan 11 19:46:23.399: INFO: (19) /api/v1/namespaces/proxy-5995/services/https:proxy-service-hnjh4:tlsportname2/proxy/: tls qux (200; 103.343923ms) Jan 11 19:46:23.399: INFO: (19) /api/v1/namespaces/proxy-5995/pods/http:proxy-service-hnjh4-6tdwj:162/proxy/: bar (200; 103.458897ms) Jan 11 19:46:23.400: INFO: (19) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname1/proxy/: foo (200; 104.469057ms) Jan 11 19:46:23.400: INFO: (19) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname2/proxy/: bar (200; 104.455123ms) Jan 11 19:46:23.400: INFO: (19) /api/v1/namespaces/proxy-5995/services/http:proxy-service-hnjh4:portname2/proxy/: bar (200; 104.459202ms) Jan 11 19:46:23.400: INFO: (19) /api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/: foo (200; 104.549364ms) STEP: deleting ReplicationController proxy-service-hnjh4 in namespace proxy-5995, will wait for the garbage collector to delete the pods Jan 11 19:46:23.680: INFO: Deleting ReplicationController proxy-service-hnjh4 took: 90.792091ms Jan 11 19:46:23.781: INFO: Terminating ReplicationController proxy-service-hnjh4 pods took: 100.369733ms [AfterEach] version v1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:46:33.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5995" for this suite. 
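Every request in the run above goes through the API server's proxy subresource rather than to the pod or service directly, which is what this conformance test exercises. The same paths can be hit by hand with kubectl's raw client (pod, service, and port names taken from the log; any running pod or service with an HTTP port works the same way):

# Proxy to a specific pod port: /api/v1/namespaces/<ns>/pods/[scheme:]<pod>:<port>/proxy/<path>
kubectl get --raw "/api/v1/namespaces/proxy-5995/pods/proxy-service-hnjh4-6tdwj:160/proxy/"
kubectl get --raw "/api/v1/namespaces/proxy-5995/pods/https:proxy-service-hnjh4-6tdwj:443/proxy/"

# Proxy to a service port by name: /api/v1/namespaces/<ns>/services/[scheme:]<svc>:<portname>/proxy/<path>
kubectl get --raw "/api/v1/namespaces/proxy-5995/services/proxy-service-hnjh4:portname1/proxy/"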
Jan 11 19:46:40.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:46:43.536: INFO: namespace proxy-5995 deletion completed in 9.564424699s • [SLOW TEST:34.062 seconds] [sig-network] Proxy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:44:06.903: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename cronjob STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-453 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:55 [It] should not emit unexpected warnings /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:171 STEP: Creating a cronjob STEP: Ensuring at least two jobs and at least one finished job exists by listing jobs explicitly STEP: Ensuring no unexpected event has happened STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:46:42.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-453" for this suite. 
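The CronJob case runs long (about two and a half minutes of its total) simply because it has to wait for at least two scheduled Jobs, including one finished Job, before asserting that no unexpected Warning events were emitted. A sketch of the same check by hand, with hypothetical names (CronJob is batch/v1beta1 at 1.16):

# A minute-by-minute cronjob, created with the imperative shorthand.
kubectl create cronjob cronjob-demo --image=busybox:1.29 --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo hello'

# After a couple of minutes there should be completed Jobs and no Warning events for the cronjob.
kubectl get jobs
kubectl get events --field-selector type=Warning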
Jan 11 19:46:48.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:46:52.049: INFO: namespace cronjob-453 deletion completed in 9.588626531s • [SLOW TEST:165.146 seconds] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not emit unexpected warnings /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:171 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:45:59.933: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename deployment STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-498 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 [It] iterative rollouts should eventually progress /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:107 Jan 11 19:46:00.574: INFO: Creating deployment "webserver" Jan 11 19:46:00.665: INFO: 00: resuming deployment "webserver" Jan 11 19:46:00.756: INFO: 00: scaling up Jan 11 19:46:00.846: INFO: Updating deployment webserver Jan 11 19:46:01.107: INFO: 01: scaling deployment "webserver" Jan 11 19:46:01.197: INFO: 01: scaling up Jan 11 19:46:01.288: INFO: Updating deployment webserver Jan 11 19:46:01.288: INFO: 02: rolling back a rollout for deployment "webserver" Jan 11 19:46:01.469: INFO: Updating deployment webserver Jan 11 19:46:01.469: INFO: 03: triggering a new rollout for deployment "webserver" Jan 11 19:46:01.563: INFO: 03: scaling up Jan 11 19:46:03.763: INFO: 03: scaling down Jan 11 19:46:03.853: INFO: Updating deployment webserver Jan 11 19:46:04.442: INFO: 04: resuming deployment "webserver" Jan 11 19:46:04.532: INFO: 04: scaling up Jan 11 19:46:06.712: INFO: 04: scaling up Jan 11 19:46:08.712: INFO: 04: scaling down Jan 11 19:46:08.802: INFO: Updating deployment webserver Jan 11 19:46:08.993: INFO: 05: arbitrarily deleting one or more deployment pods for deployment "webserver" Jan 11 19:46:09.084: INFO: 05: deleting deployment pod "webserver-6c4ff4d6cc-654rz" Jan 11 19:46:09.176: INFO: 05: deleting deployment pod "webserver-6c4ff4d6cc-x4wmn" Jan 11 19:46:09.268: INFO: 06: scaling deployment "webserver" Jan 11 19:46:09.447: INFO: Updating deployment webserver Jan 11 19:46:11.785: INFO: 07: arbitrarily deleting one or more deployment pods for deployment "webserver" Jan 11 19:46:11.876: INFO: 07: deleting deployment pod "webserver-6c4ff4d6cc-8nrmp" Jan 11 19:46:11.968: INFO: 07: deleting deployment pod "webserver-6c4ff4d6cc-t84z5" Jan 11 19:46:13.059: INFO: 08: scaling deployment "webserver" Jan 11 19:46:13.239: INFO: Updating deployment webserver Jan 11 19:46:13.239: INFO: 09: scaling deployment "webserver" Jan 11 19:46:13.329: INFO: 09: scaling up Jan 11 19:46:13.419: INFO: Updating deployment webserver Jan 11 19:46:16.395: INFO: 10: arbitrarily 
deleting one or more deployment pods for deployment "webserver" Jan 11 19:46:16.486: INFO: 10: deleting deployment pod "webserver-6c4ff4d6cc-6kzwc" Jan 11 19:46:16.579: INFO: 10: deleting deployment pod "webserver-6c4ff4d6cc-97d5z" Jan 11 19:46:16.673: INFO: 10: deleting deployment pod "webserver-6c4ff4d6cc-kpm7h" Jan 11 19:46:16.767: INFO: 10: deleting deployment pod "webserver-6c4ff4d6cc-mjsz4" Jan 11 19:46:16.859: INFO: 10: deleting deployment pod "webserver-6c4ff4d6cc-nl2sh" Jan 11 19:46:16.952: INFO: 10: deleting deployment pod "webserver-6c4ff4d6cc-stn5j" Jan 11 19:46:21.371: INFO: 11: scaling deployment "webserver" Jan 11 19:46:21.462: INFO: 11: scaling down Jan 11 19:46:21.552: INFO: Updating deployment webserver Jan 11 19:46:26.101: INFO: 12: rolling back a rollout for deployment "webserver" Jan 11 19:46:26.282: INFO: Updating deployment webserver Jan 11 19:46:26.282: INFO: 13: rolling back a rollout for deployment "webserver" Jan 11 19:46:28.642: INFO: Updating deployment webserver Jan 11 19:46:31.546: INFO: 14: rolling back a rollout for deployment "webserver" Jan 11 19:46:33.907: INFO: Updating deployment webserver Jan 11 19:46:33.907: INFO: 15: scaling deployment "webserver" Jan 11 19:46:33.996: INFO: 15: scaling down Jan 11 19:46:34.086: INFO: Updating deployment webserver Jan 11 19:46:39.656: INFO: 16: scaling deployment "webserver" Jan 11 19:46:39.746: INFO: 16: scaling up Jan 11 19:46:39.836: INFO: Updating deployment webserver Jan 11 19:46:39.836: INFO: 17: arbitrarily deleting one or more deployment pods for deployment "webserver" Jan 11 19:46:39.927: INFO: 17: deleting deployment pod "webserver-595b5b9587-9pgpc" Jan 11 19:46:40.021: INFO: 17: deleting deployment pod "webserver-595b5b9587-gbdfd" Jan 11 19:46:40.113: INFO: 18: triggering a new rollout for deployment "webserver" Jan 11 19:46:40.203: INFO: 18: scaling down Jan 11 19:46:40.294: INFO: Updating deployment webserver Jan 11 19:46:40.294: INFO: 19: scaling deployment "webserver" Jan 11 19:46:42.564: INFO: 19: scaling down Jan 11 19:46:42.654: INFO: Updating deployment webserver Jan 11 19:46:42.744: INFO: Waiting for deployment "webserver" to be observed by the controller Jan 11 19:46:42.833: INFO: Waiting for deployment "webserver" status Jan 11 19:46:42.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:19, Replicas:5, UpdatedReplicas:4, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368801, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368801, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368802, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368760, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-64dbff79df\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:46:45.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:19, Replicas:4, UpdatedReplicas:4, ReadyReplicas:3, AvailableReplicas:3, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368801, 
loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368801, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368803, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368760, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-64dbff79df\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:46:47.013: INFO: Checking deployment "webserver" for a complete condition [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62 Jan 11 19:46:47.194: INFO: Deployment "webserver": &Deployment{ObjectMeta:{webserver deployment-498 /apis/apps/v1/namespaces/deployment-498/deployments/webserver 819a26cb-6a44-4702-850f-6dc194f097c3 55189 19 2020-01-11 19:46:00 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:6] [] [] []},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [{A 18 nil}] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0000555f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*2,Paused:false,ProgressDeadlineSeconds:*30,},Status:DeploymentStatus{ObservedGeneration:19,Replicas:4,UpdatedReplicas:4,AvailableReplicas:4,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-11 19:46:41 +0000 UTC,LastTransitionTime:2020-01-11 19:46:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "webserver-64dbff79df" has successfully progressed.,LastUpdateTime:2020-01-11 19:46:45 +0000 UTC,LastTransitionTime:2020-01-11 19:46:00 +0000 UTC,},},ReadyReplicas:4,CollisionCount:nil,},} Jan 11 19:46:47.284: INFO: New ReplicaSet "webserver-64dbff79df" of Deployment "webserver": &ReplicaSet{ObjectMeta:{webserver-64dbff79df deployment-498 /apis/apps/v1/namespaces/deployment-498/replicasets/webserver-64dbff79df ea271763-a745-4b1d-9718-d8e4d1f1fb3a 55188 5 2020-01-11 19:46:40 +0000 UTC map[name:httpd pod-template-hash:64dbff79df] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:6] [{apps/v1 Deployment webserver 819a26cb-6a44-4702-850f-6dc194f097c3 0xc002276320 0xc002276321}] []
[]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 64dbff79df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:64dbff79df] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [{A 18 nil}] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002276378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:4,FullyLabeledReplicas:4,ObservedGeneration:5,ReadyReplicas:4,AvailableReplicas:4,Conditions:[]ReplicaSetCondition{},},} Jan 11 19:46:47.284: INFO: All old ReplicaSets of Deployment "webserver": Jan 11 19:46:47.284: INFO: &ReplicaSet{ObjectMeta:{webserver-595b5b9587 deployment-498 /apis/apps/v1/namespaces/deployment-498/replicasets/webserver-595b5b9587 cd47c28a-8a9e-4780-b32c-20a55714a028 55169 31 2020-01-11 19:46:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:5 deployment.kubernetes.io/revision-history:1,3] [{apps/v1 Deployment webserver 819a26cb-6a44-4702-850f-6dc194f097c3 0xc002276260 0xc002276261}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022762b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:31,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 19:46:47.284: INFO: &ReplicaSet{ObjectMeta:{webserver-6c4ff4d6cc deployment-498 /apis/apps/v1/namespaces/deployment-498/replicasets/webserver-6c4ff4d6cc 03801e64-d378-4988-bbb4-938dc84a2ac8 55017 20 2020-01-11 19:46:03 +0000 UTC map[name:httpd pod-template-hash:6c4ff4d6cc] map[deployment.kubernetes.io/desired-replicas:5 deployment.kubernetes.io/max-replicas:7 deployment.kubernetes.io/revision:4 deployment.kubernetes.io/revision-history:2] [{apps/v1 Deployment webserver 819a26cb-6a44-4702-850f-6dc194f097c3 0xc0022763e0 0xc0022763e1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 
6c4ff4d6cc,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6c4ff4d6cc] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [{A 3 nil}] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002276438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:20,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 19:46:47.376: INFO: Pod "webserver-64dbff79df-2p9z2" is available: &Pod{ObjectMeta:{webserver-64dbff79df-2p9z2 webserver-64dbff79df- deployment-498 /api/v1/namespaces/deployment-498/pods/webserver-64dbff79df-2p9z2 8e9b979d-ee61-424b-804e-41f3b15f00d1 55098 0 2020-01-11 19:46:40 +0000 UTC map[name:httpd pod-template-hash:64dbff79df] map[cni.projectcalico.org/podIP:100.64.0.15/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-64dbff79df ea271763-a745-4b1d-9718-d8e4d1f1fb3a 0xc002276a17 0xc002276a18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6p9kw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6p9kw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:18,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6p9kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:100.64.0.15,StartTime:2020-01-11 19:46:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:46:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a10e8c3e62e0ff35889b8029529238e852c9da9e6f1f7393d227e9d3ebdaffea,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.0.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:46:47.376: INFO: Pod "webserver-64dbff79df-fhrkm" is available: &Pod{ObjectMeta:{webserver-64dbff79df-fhrkm webserver-64dbff79df- deployment-498 /api/v1/namespaces/deployment-498/pods/webserver-64dbff79df-fhrkm 1bc885e7-105e-426b-b496-17f70aa18bd0 55159 0 2020-01-11 19:46:40 +0000 UTC map[name:httpd pod-template-hash:64dbff79df] map[cni.projectcalico.org/podIP:100.64.1.111/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-64dbff79df ea271763-a745-4b1d-9718-d8e4d1f1fb3a 0xc0022770d0 0xc0022770d1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6p9kw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6p9kw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:18,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6p9kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.111,StartTime:2020-01-11 19:46:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:46:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4d19119d4bee7e4775c4f5afa2b5aa1a58d9362cfe8b91ac7901f70061e06640,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.111,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:46:47.376: INFO: Pod "webserver-64dbff79df-qt6s6" is available: &Pod{ObjectMeta:{webserver-64dbff79df-qt6s6 webserver-64dbff79df- deployment-498 /api/v1/namespaces/deployment-498/pods/webserver-64dbff79df-qt6s6 923705d1-4e04-4efd-b5d6-be74bb257d48 55187 0 2020-01-11 19:46:42 +0000 UTC map[name:httpd pod-template-hash:64dbff79df] map[cni.projectcalico.org/podIP:100.64.1.114/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-64dbff79df ea271763-a745-4b1d-9718-d8e4d1f1fb3a 0xc002277247 0xc002277248}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6p9kw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6p9kw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:18,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6p9kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{
Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.114,StartTime:2020-01-11 19:46:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:46:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://90068d50e1f43cbb867c3f00dfff59bd4a60ca6afdbb41cb115864f3b6249091,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:46:47.376: INFO: Pod "webserver-64dbff79df-sdft9" is available: &Pod{ObjectMeta:{webserver-64dbff79df-sdft9 webserver-64dbff79df- deployment-498 /api/v1/namespaces/deployment-498/pods/webserver-64dbff79df-sdft9 b5e07d76-b86d-43ae-9865-a2821e90b307 55144 0 2020-01-11 19:46:40 +0000 UTC map[name:httpd pod-template-hash:64dbff79df] map[cni.projectcalico.org/podIP:100.64.1.112/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-64dbff79df ea271763-a745-4b1d-9718-d8e4d1f1fb3a 0xc0022773c7 0xc0022773c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6p9kw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6p9kw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:18,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6p9kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:46:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.112,StartTime:2020-01-11 19:46:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:46:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://087f6f7599abf4c105f7a799c3b91daa2b5e8ea4d903351bb9f2e1721a3022f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:46:47.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-498" for this suite. Jan 11 19:46:53.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:46:57.056: INFO: namespace deployment-498 deletion completed in 9.588338149s • [SLOW TEST:57.123 seconds] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 iterative rollouts should eventually progress /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:107 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:46:57.062: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1241 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:57 STEP: Creating configMap with name configmap-test-volume-380b2066-c685-412a-8d06-a72160aba486 STEP: Creating a pod to test consume configMaps Jan 11 19:46:57.885: INFO: Waiting up to 5m0s for pod "pod-configmaps-f41b5b95-921c-4844-8568-d5b054e0cb9e" in namespace "configmap-1241" to be "success or failure" Jan 11 19:46:57.975: INFO: Pod "pod-configmaps-f41b5b95-921c-4844-8568-d5b054e0cb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 89.763979ms Jan 11 19:47:00.066: INFO: Pod "pod-configmaps-f41b5b95-921c-4844-8568-d5b054e0cb9e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.18076733s STEP: Saw pod success Jan 11 19:47:00.066: INFO: Pod "pod-configmaps-f41b5b95-921c-4844-8568-d5b054e0cb9e" satisfied condition "success or failure" Jan 11 19:47:00.156: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-f41b5b95-921c-4844-8568-d5b054e0cb9e container configmap-volume-test: STEP: delete the pod Jan 11 19:47:00.347: INFO: Waiting for pod pod-configmaps-f41b5b95-921c-4844-8568-d5b054e0cb9e to disappear Jan 11 19:47:00.437: INFO: Pod pod-configmaps-f41b5b95-921c-4844-8568-d5b054e0cb9e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:00.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1241" for this suite. Jan 11 19:47:06.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:10.113: INFO: namespace configmap-1241 deletion completed in 9.584761782s • [SLOW TEST:13.051 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:57 ------------------------------ SSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:46:43.542: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1869 STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to 
delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:10.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1869" for this suite. Jan 11 19:47:16.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:19.958: INFO: namespace container-runtime-1869 deletion completed in 9.565440338s • [SLOW TEST:36.416 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 when starting a container that exits /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:46:52.058: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9361 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9361 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9361 STEP: creating replication controller externalsvc in namespace services-9361 I0111 19:46:52.978563 8625 runners.go:184] Created replication controller with name: externalsvc, namespace: services-9361, replica count: 2 I0111 19:46:56.079117 8625 runners.go:184] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 11 19:46:56.353: INFO: Creating new exec pod Jan 11 19:46:58.625: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-9361 execpodkzzwv -- /bin/sh -x -c nslookup clusterip-service' Jan 11 19:47:00.080: INFO: stderr: "+ nslookup clusterip-service\n" Jan 11 19:47:00.080: INFO: stdout: 
"Server:\t\t100.104.0.10\nAddress:\t100.104.0.10#53\n\nclusterip-service.services-9361.svc.cluster.local\tcanonical name = externalsvc.services-9361.svc.cluster.local.\nName:\texternalsvc.services-9361.svc.cluster.local\nAddress: 100.109.178.43\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9361, will wait for the garbage collector to delete the pods Jan 11 19:47:00.361: INFO: Deleting ReplicationController externalsvc took: 90.858744ms Jan 11 19:47:00.461: INFO: Terminating ReplicationController externalsvc pods took: 100.272339ms Jan 11 19:47:13.962: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:14.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9361" for this suite. Jan 11 19:47:20.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:23.729: INFO: namespace services-9361 deletion completed in 9.582112358s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:31.671 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:46:33.496: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1603 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should have session affinity work for service with type clusterIP /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1801 STEP: creating service in namespace services-1603 STEP: creating service affinity-clusterip in namespace services-1603 STEP: creating replication controller affinity-clusterip in namespace services-1603 I0111 19:46:34.337575 8611 runners.go:184] Created replication controller with name: affinity-clusterip, namespace: services-1603, replica count: 3 I0111 19:46:37.438132 8611 runners.go:184] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 19:46:37.617: INFO: Creating new exec pod Jan 11 19:46:40.891: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jan 11 19:46:42.418: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 11 19:46:42.418: INFO: stdout: "" Jan 11 19:46:42.418: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c nc -zv -t -w 2 100.105.158.181 80' Jan 11 19:46:43.840: INFO: stderr: "+ nc -zv -t -w 2 100.105.158.181 80\nConnection to 100.105.158.181 80 port [tcp/http] succeeded!\n" Jan 11 19:46:43.840: INFO: stdout: "" Jan 11 19:46:43.840: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:46:45.192: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:46:45.192: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:46:45.192: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:46:47.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:46:48.489: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:46:48.489: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:46:48.489: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:46:49.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:46:50.477: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:46:50.477: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:46:50.477: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:46:51.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:46:52.518: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:46:52.518: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:46:52.518: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:46:53.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:46:54.612: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" 
Jan 11 19:46:54.612: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:46:54.612: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:46:55.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:46:56.545: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:46:56.545: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:46:56.545: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:46:57.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:46:58.548: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:46:58.548: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:46:58.548: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:46:59.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:47:00.507: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:47:00.507: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:47:00.507: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:47:01.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:47:02.509: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:47:02.509: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:47:02.509: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:47:03.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:47:04.520: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:47:04.521: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:47:04.521: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:47:05.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:47:06.493: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:47:06.493: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:47:06.493: INFO: Received 
response from host: affinity-clusterip-tzllm Jan 11 19:47:07.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:47:08.490: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:47:08.490: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:47:08.490: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:47:09.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:47:10.458: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:47:10.459: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:47:10.459: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:47:11.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:47:12.494: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:47:12.494: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:47:12.494: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:47:13.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1603 execpod-affinityvk9gf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.105.158.181:80/' Jan 11 19:47:14.503: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.105.158.181:80/\n" Jan 11 19:47:14.503: INFO: stdout: "affinity-clusterip-tzllm" Jan 11 19:47:14.503: INFO: Received response from host: affinity-clusterip-tzllm Jan 11 19:47:14.503: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-1603, will wait for the garbage collector to delete the pods Jan 11 19:47:14.876: INFO: Deleting ReplicationController affinity-clusterip took: 91.103421ms Jan 11 19:47:14.976: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.353008ms [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:23.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1603" for this suite. 
Jan 11 19:47:30.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:33.656: INFO: namespace services-1603 deletion completed in 9.59035912s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:60.159 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1801 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:10.119: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename port-forwarding STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-6397 STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends NO DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:465 STEP: Creating the target pod STEP: Running 'kubectl port-forward' Jan 11 19:47:17.031: INFO: starting port-forward command and streaming output Jan 11 19:47:17.031: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config port-forward --namespace=port-forwarding-6397 pfpod :80' Jan 11 19:47:17.032: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Closing the connection to the local port STEP: Waiting for the target pod to stop running Jan 11 19:47:18.017: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-6397" to be "container terminated" Jan 11 19:47:18.107: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 90.247747ms Jan 11 19:47:20.197: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.180278248s Jan 11 19:47:20.198: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:20.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-6397" for this suite. 
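The port-forwarding spec above checks that kubectl port-forward tolerates a client that opens the tunnel and closes it without sending any bytes. A rough manual equivalent, assuming a pod named pfpod listening on port 80 and a free local port 8080 (both hypothetical outside this run):

  # Forward local port 8080 to port 80 of the pod, in the background.
  kubectl --namespace=port-forwarding-6397 port-forward pfpod 8080:80 &
  PF_PID=$!
  sleep 2
  # Connect, send no data, and disconnect; nc -z only performs the TCP handshake.
  nc -z 127.0.0.1 8080
  kill "$PF_PID"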
Jan 11 19:47:32.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:35.976: INFO: namespace port-forwarding-6397 deletion completed in 15.587021956s • [SLOW TEST:25.857 seconds] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on localhost /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463 that expects a client request /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:464 should support a client that connects, sends NO DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:465 ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:10.803: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8827 STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:421 STEP: Creating secret with name s-test-opt-create-0ffaf015-9690-4393-8009-73993df55782 STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:11.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8827" for this suite. 
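The projected-secret spec above expects pod creation to fail when a projected volume references a key that is missing from the secret and the source is not optional. A sketch of such a pod with hypothetical names (the secret exists but the requested key does not); the pod is expected to stay unready with mount errors rather than reach Running:

  kubectl create namespace projected-demo
  kubectl create secret generic demo-secret --from-literal=present-key=value -n projected-demo
  kubectl apply -n projected-demo -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-missing-key
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/projected
    volumes:
    - name: secret-vol
      projected:
        sources:
        - secret:
            name: demo-secret
            optional: false
            items:
            - key: missing-key     # deliberately not present in demo-secret
              path: missing-key
  EOF
  # Expect FailedMount events and a pod stuck in ContainerCreating.
  kubectl describe pod projected-missing-key -n projected-demo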
Jan 11 19:47:40.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:43.647: INFO: namespace projected-8827 deletion completed in 31.566422611s • [SLOW TEST:332.844 seconds] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:421 ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:23.733: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-9564 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:36.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9564" for this suite. Jan 11 19:47:42.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:45.691: INFO: namespace resourcequota-9564 deletion completed in 9.585740302s • [SLOW TEST:21.958 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:19.975: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename port-forwarding STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-231 STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends NO DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:443 STEP: Creating the target pod STEP: Running 'kubectl port-forward' Jan 11 19:47:27.024: INFO: starting port-forward command and streaming output Jan 11 19:47:27.024: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config port-forward --namespace=port-forwarding-231 pfpod :80' Jan 11 19:47:27.025: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Closing the connection to the local port STEP: Waiting for the target pod to stop running Jan 11 19:47:28.053: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-231" to be "container terminated" Jan 11 19:47:28.142: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 89.423188ms Jan 11 19:47:30.233: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.179591674s Jan 11 19:47:30.233: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:30.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-231" for this suite. 
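The ResourceQuota spec a little earlier in this run creates a quota, then a Service, and verifies that quota usage is captured and later released. A rough manual equivalent with hypothetical names; the quota controller recalculates .status.used asynchronously, so allow a short delay between steps:

  kubectl create namespace quota-demo
  kubectl create quota test-quota --hard=services=1 -n quota-demo
  sleep 5   # give the controller a moment to populate .status before creating the Service
  kubectl create service clusterip test-svc --tcp=80:80 -n quota-demo
  kubectl get quota test-quota -n quota-demo -o jsonpath='{.status.used.services}'   # expect 1
  kubectl delete service test-svc -n quota-demo
  kubectl get quota test-quota -n quota-demo -o jsonpath='{.status.used.services}'   # expect 0 once usage is recalculated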
Jan 11 19:47:42.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:45.992: INFO: namespace port-forwarding-231 deletion completed in 15.570760064s • [SLOW TEST:26.017 seconds] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on 0.0.0.0 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:441 that expects a client request /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:442 should support a client that connects, sends NO DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:443 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:45:46.883: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename cronjob STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-5671 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:55 [It] should schedule multiple jobs concurrently /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:60 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:09.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5671" for this suite. 
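The CronJob spec above verifies that, with the default concurrencyPolicy of Allow, overlapping schedules produce more than one Job running at the same time. A sketch with hypothetical names; a long-sleeping job on a one-minute schedule overlaps itself after a couple of minutes:

  kubectl create cronjob concurrent-demo --image=busybox \
    --schedule="*/1 * * * *" -- /bin/sh -c "sleep 300"
  # After two or three minutes, .status.active should list two or more Jobs.
  kubectl get cronjob concurrent-demo -o jsonpath='{.status.active[*].name}'
  kubectl get jobs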
Jan 11 19:47:56.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:59.654: INFO: namespace cronjob-5671 deletion completed in 49.588178406s • [SLOW TEST:132.771 seconds] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:60 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:46.011: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename metrics-grabber STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in metrics-grabber-7048 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:36 W0111 19:47:47.835708 8614 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. [It] should grab all metrics from a Kubelet. /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:52 STEP: Proxying to Node through the API server [AfterEach] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:48.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "metrics-grabber-7048" for this suite. Jan 11 19:47:56.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:47:59.789: INFO: namespace metrics-grabber-7048 deletion completed in 11.568486904s • [SLOW TEST:13.778 seconds] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23 should grab all metrics from a Kubelet. 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:52 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:43.649: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-3948 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 11 19:47:44.462: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:53.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3948" for this suite. 
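The Pods spec above submits a pod, observes its creation through a watch, and then deletes it gracefully while confirming the deletion is also observed. A rough manual version using a background watch and hypothetical names:

  kubectl get pods -w &
  WATCH_PID=$!
  kubectl run watched-pod --image=busybox --restart=Never -- sleep 3600
  kubectl wait --for=condition=Ready pod/watched-pod --timeout=2m
  kubectl delete pod watched-pod --grace-period=30
  # The watch output should show the pod appear, become Ready,
  # enter Terminating, and finally be removed.
  kill "$WATCH_PID"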
Jan 11 19:48:02.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:05.641: INFO: namespace pods-3948 deletion completed in 11.656721623s • [SLOW TEST:21.992 seconds] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:35.978: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9378 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9378 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9378 STEP: creating replication controller externalsvc in namespace services-9378 I0111 19:47:36.897509 8632 runners.go:184] Created replication controller with name: externalsvc, namespace: services-9378, replica count: 2 I0111 19:47:39.997973 8632 runners.go:184] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 11 19:47:40.274: INFO: Creating new exec pod Jan 11 19:47:42.546: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-9378 execpodtmgk8 -- /bin/sh -x -c nslookup nodeport-service' Jan 11 19:47:43.821: INFO: stderr: "+ nslookup nodeport-service\n" Jan 11 19:47:43.821: INFO: stdout: "Server:\t\t100.104.0.10\nAddress:\t100.104.0.10#53\n\nnodeport-service.services-9378.svc.cluster.local\tcanonical name = externalsvc.services-9378.svc.cluster.local.\nName:\texternalsvc.services-9378.svc.cluster.local\nAddress: 100.104.132.219\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9378, will wait for the garbage collector to delete the pods Jan 11 19:47:44.103: INFO: Deleting ReplicationController externalsvc took: 91.184174ms Jan 11 19:47:44.603: INFO: Terminating ReplicationController externalsvc pods took: 500.306639ms Jan 11 19:47:53.899: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:53.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9378" for this suite. Jan 11 19:48:02.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:05.818: INFO: namespace services-9378 deletion completed in 11.733799422s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:29.841 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:33.670: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-3857 STEP: Waiting for a default service account to be provisioned in namespace [It] should support file as subpath [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:213 Jan 11 19:47:34.312: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:47:34.403: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpath-cv6h STEP: Creating a pod to test atomic-volume-subpath Jan 11 19:47:34.495: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpath-cv6h" in namespace "provisioning-3857" to be "success or failure" Jan 11 19:47:34.586: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Pending", Reason="", readiness=false. Elapsed: 90.242365ms Jan 11 19:47:36.676: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181009509s Jan 11 19:47:38.767: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Running", Reason="", readiness=true. Elapsed: 4.271514471s Jan 11 19:47:40.857: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Running", Reason="", readiness=true. Elapsed: 6.361722206s Jan 11 19:47:42.947: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Running", Reason="", readiness=true. Elapsed: 8.451995209s Jan 11 19:47:45.038: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Running", Reason="", readiness=true. Elapsed: 10.542200962s Jan 11 19:47:47.128: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.632448094s Jan 11 19:47:49.218: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Running", Reason="", readiness=true. Elapsed: 14.722427628s Jan 11 19:47:51.308: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Running", Reason="", readiness=true. Elapsed: 16.812980347s Jan 11 19:47:53.399: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Running", Reason="", readiness=true. Elapsed: 18.903143957s Jan 11 19:47:55.489: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Running", Reason="", readiness=true. Elapsed: 20.993724269s Jan 11 19:47:57.580: INFO: Pod "pod-subpath-test-hostpath-cv6h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.084701806s STEP: Saw pod success Jan 11 19:47:57.580: INFO: Pod "pod-subpath-test-hostpath-cv6h" satisfied condition "success or failure" Jan 11 19:47:57.670: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpath-cv6h container test-container-subpath-hostpath-cv6h: STEP: delete the pod Jan 11 19:47:57.959: INFO: Waiting for pod pod-subpath-test-hostpath-cv6h to disappear Jan 11 19:47:58.049: INFO: Pod pod-subpath-test-hostpath-cv6h no longer exists STEP: Deleting pod pod-subpath-test-hostpath-cv6h Jan 11 19:47:58.049: INFO: Deleting pod "pod-subpath-test-hostpath-cv6h" in namespace "provisioning-3857" STEP: Deleting pod Jan 11 19:47:58.139: INFO: Deleting pod "pod-subpath-test-hostpath-cv6h" in namespace "provisioning-3857" Jan 11 19:47:58.228: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:47:58.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-3857" for this suite. 
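The subPath spec above mounts a single file from a hostPath volume into a container via volumeMounts[].subPath. A simplified sketch, not the suite's actual atomic-subpath pod, assuming hypothetical names and the node path /tmp/subpath-demo; an init container writes the file so it already exists when the main container's subPath mount is prepared:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-file-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/file.txt"]
      volumeMounts:
      - name: vol
        mountPath: /data
    containers:
    - name: reader
      image: busybox
      command: ["sh", "-c", "cat /subpath-file"]
      volumeMounts:
      - name: vol
        mountPath: /subpath-file
        subPath: file.txt
    volumes:
    - name: vol
      hostPath:
        path: /tmp/subpath-demo
        type: DirectoryOrCreate
  EOF
  kubectl logs subpath-file-demo -c reader   # expect "hello" once the pod has completed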
Jan 11 19:48:04.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:08.060: INFO: namespace provisioning-3857 deletion completed in 9.7401329s • [SLOW TEST:34.390 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support file as subpath [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:213 ------------------------------ SS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:45.695: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-2469 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath with backstepping is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261 Jan 11 19:47:46.352: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:47:46.534: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2469" in namespace "provisioning-2469" to be "success or failure" Jan 11 19:47:46.624: INFO: Pod "hostpath-symlink-prep-provisioning-2469": Phase="Pending", Reason="", readiness=false. Elapsed: 89.48383ms Jan 11 19:47:48.714: INFO: Pod "hostpath-symlink-prep-provisioning-2469": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179785974s STEP: Saw pod success Jan 11 19:47:48.714: INFO: Pod "hostpath-symlink-prep-provisioning-2469" satisfied condition "success or failure" Jan 11 19:47:48.714: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2469" in namespace "provisioning-2469" Jan 11 19:47:48.808: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2469" to be fully deleted Jan 11 19:47:48.898: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-9xhz STEP: Checking for subpath error in container status Jan 11 19:47:53.173: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-9xhz" in namespace "provisioning-2469" Jan 11 19:47:53.264: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpathsymlink-9xhz" to be fully deleted STEP: Deleting pod Jan 11 19:47:59.445: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-9xhz" in namespace "provisioning-2469" Jan 11 19:47:59.625: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2469" in namespace "provisioning-2469" to be "success or failure" Jan 11 19:47:59.715: INFO: Pod "hostpath-symlink-prep-provisioning-2469": Phase="Pending", Reason="", readiness=false. Elapsed: 89.647212ms Jan 11 19:48:01.805: INFO: Pod "hostpath-symlink-prep-provisioning-2469": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.17976691s STEP: Saw pod success Jan 11 19:48:01.805: INFO: Pod "hostpath-symlink-prep-provisioning-2469" satisfied condition "success or failure" Jan 11 19:48:01.805: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2469" in namespace "provisioning-2469" Jan 11 19:48:01.899: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2469" to be fully deleted Jan 11 19:48:01.989: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:01.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-2469" for this suite. 
Jan 11 19:48:08.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:11.756: INFO: namespace provisioning-2469 deletion completed in 9.675516118s • [SLOW TEST:26.061 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if subpath with backstepping is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:59.802: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7894 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward api env vars Jan 11 19:48:00.841: INFO: Waiting up to 5m0s for pod "downward-api-83ec3842-969d-4147-b22a-db2792bf4bad" in namespace "downward-api-7894" to be "success or failure" Jan 11 19:48:00.931: INFO: Pod "downward-api-83ec3842-969d-4147-b22a-db2792bf4bad": Phase="Pending", Reason="", readiness=false. Elapsed: 89.662496ms Jan 11 19:48:03.021: INFO: Pod "downward-api-83ec3842-969d-4147-b22a-db2792bf4bad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179457112s STEP: Saw pod success Jan 11 19:48:03.021: INFO: Pod "downward-api-83ec3842-969d-4147-b22a-db2792bf4bad" satisfied condition "success or failure" Jan 11 19:48:03.110: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downward-api-83ec3842-969d-4147-b22a-db2792bf4bad container dapi-container: STEP: delete the pod Jan 11 19:48:03.299: INFO: Waiting for pod downward-api-83ec3842-969d-4147-b22a-db2792bf4bad to disappear Jan 11 19:48:03.388: INFO: Pod downward-api-83ec3842-969d-4147-b22a-db2792bf4bad no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:03.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7894" for this suite. 
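The Downward API spec above injects the node's IP into a container environment variable via a fieldRef on status.hostIP. A minimal sketch with hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dapi-host-ip
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
      env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
  EOF
  kubectl logs dapi-host-ip   # expect HOST_IP=<node IP>, e.g. a 10.250.x.x address in this run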
Jan 11 19:48:09.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:13.172: INFO: namespace downward-api-7894 deletion completed in 9.692816809s • [SLOW TEST:13.371 seconds] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:85 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:42:34.245: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume-expand STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-expand-8983 STEP: Waiting for a default service account to be provisioned in namespace [It] should not allow expansion of pvcs without AllowVolumeExpansion property /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:139 STEP: deploying csi-hostpath driver Jan 11 19:42:35.070: INFO: creating *v1.ServiceAccount: volume-expand-8983/csi-attacher Jan 11 19:42:35.159: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-expand-8983 Jan 11 19:42:35.159: INFO: Define cluster role external-attacher-runner-volume-expand-8983 Jan 11 19:42:35.249: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-8983 Jan 11 19:42:35.338: INFO: creating *v1.Role: volume-expand-8983/external-attacher-cfg-volume-expand-8983 Jan 11 19:42:35.428: INFO: creating *v1.RoleBinding: volume-expand-8983/csi-attacher-role-cfg Jan 11 19:42:35.518: INFO: creating *v1.ServiceAccount: volume-expand-8983/csi-provisioner Jan 11 19:42:35.607: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-expand-8983 Jan 11 19:42:35.607: INFO: Define cluster role external-provisioner-runner-volume-expand-8983 Jan 11 19:42:35.697: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-8983 Jan 11 19:42:35.786: INFO: creating *v1.Role: volume-expand-8983/external-provisioner-cfg-volume-expand-8983 Jan 11 19:42:35.876: INFO: creating *v1.RoleBinding: volume-expand-8983/csi-provisioner-role-cfg Jan 11 19:42:35.966: INFO: creating *v1.ServiceAccount: volume-expand-8983/csi-snapshotter Jan 11 19:42:36.056: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volume-expand-8983 Jan 11 19:42:36.056: INFO: Define cluster role external-snapshotter-runner-volume-expand-8983 Jan 11 19:42:36.145: INFO: creating *v1.ClusterRoleBinding: 
csi-snapshotter-role-volume-expand-8983 Jan 11 19:42:36.234: INFO: creating *v1.Role: volume-expand-8983/external-snapshotter-leaderelection-volume-expand-8983 Jan 11 19:42:36.324: INFO: creating *v1.RoleBinding: volume-expand-8983/external-snapshotter-leaderelection Jan 11 19:42:36.413: INFO: creating *v1.ServiceAccount: volume-expand-8983/csi-resizer Jan 11 19:42:36.503: INFO: creating *v1.ClusterRole: external-resizer-runner-volume-expand-8983 Jan 11 19:42:36.503: INFO: Define cluster role external-resizer-runner-volume-expand-8983 Jan 11 19:42:36.592: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-8983 Jan 11 19:42:36.681: INFO: creating *v1.Role: volume-expand-8983/external-resizer-cfg-volume-expand-8983 Jan 11 19:42:36.777: INFO: creating *v1.RoleBinding: volume-expand-8983/csi-resizer-role-cfg Jan 11 19:42:36.866: INFO: creating *v1.Service: volume-expand-8983/csi-hostpath-attacher Jan 11 19:42:36.959: INFO: creating *v1.StatefulSet: volume-expand-8983/csi-hostpath-attacher Jan 11 19:42:37.049: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volume-expand-8983 Jan 11 19:42:37.139: INFO: creating *v1.Service: volume-expand-8983/csi-hostpathplugin Jan 11 19:42:37.232: INFO: creating *v1.StatefulSet: volume-expand-8983/csi-hostpathplugin Jan 11 19:42:37.322: INFO: creating *v1.Service: volume-expand-8983/csi-hostpath-provisioner Jan 11 19:42:37.417: INFO: creating *v1.StatefulSet: volume-expand-8983/csi-hostpath-provisioner Jan 11 19:42:37.507: INFO: creating *v1.Service: volume-expand-8983/csi-hostpath-resizer Jan 11 19:42:37.600: INFO: creating *v1.StatefulSet: volume-expand-8983/csi-hostpath-resizer Jan 11 19:42:37.689: INFO: creating *v1.Service: volume-expand-8983/csi-snapshotter Jan 11 19:42:37.783: INFO: creating *v1.StatefulSet: volume-expand-8983/csi-snapshotter Jan 11 19:42:37.874: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-8983 Jan 11 19:42:37.963: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:42:37.963: INFO: Creating resource for dynamic PV STEP: creating a StorageClass volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg STEP: creating a claim Jan 11 19:42:38.143: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath9pqwb] to have phase Bound Jan 11 19:42:38.232: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:40.322: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:42.412: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:44.501: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:46.591: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:48.681: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:50.771: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:52.860: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:54.950: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:57.040: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:42:59.129: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. 
Jan 11 19:43:01.219: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:03.308: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:05.398: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:07.488: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:09.577: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:11.667: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:13.757: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:15.846: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:17.936: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:20.026: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:22.115: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:24.205: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:26.294: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:28.383: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:30.473: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:32.563: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:34.653: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:36.742: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:38.832: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:40.922: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:43.012: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:45.101: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:47.191: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:49.280: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:51.370: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:53.459: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:55.548: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:57.638: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:43:59.729: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:01.819: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:03.908: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. 
Jan 11 19:44:05.997: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:08.088: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:10.177: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:12.267: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:14.357: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:16.446: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:18.536: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:20.626: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:22.716: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:24.805: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:26.895: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:28.984: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:31.074: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:33.164: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:35.253: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:37.343: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:39.433: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:41.523: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:43.612: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:45.702: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:47.792: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:49.881: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:51.971: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:54.061: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:56.150: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:44:58.240: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:00.330: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:02.420: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:04.510: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:06.600: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:08.690: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. 
Jan 11 19:45:10.779: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:12.869: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:14.959: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:17.049: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:19.138: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:21.228: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:23.318: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:25.408: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:27.498: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:29.588: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:31.678: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:33.767: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:35.857: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:37.949: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:40.039: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:42.129: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:44.218: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:46.308: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:48.398: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:50.487: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:52.577: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:54.667: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:56.757: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:45:58.847: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:00.936: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:03.026: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:05.115: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:07.205: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:09.294: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:11.384: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:13.474: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. 
Jan 11 19:46:15.563: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:17.653: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:19.743: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:21.832: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:23.922: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:26.011: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:28.101: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:30.190: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:32.280: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:34.369: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:36.459: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:38.549: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:40.639: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:42.729: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:44.818: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:46.909: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:48.998: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:51.088: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:53.177: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:55.267: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:57.357: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:46:59.446: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:01.536: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:03.626: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:05.715: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:07.805: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:09.895: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:11.985: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:14.075: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:16.164: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:18.254: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. 
Jan 11 19:47:20.345: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:22.434: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:24.523: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:26.613: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:28.703: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:30.792: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:32.881: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:34.971: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:37.061: INFO: PersistentVolumeClaim csi-hostpath9pqwb found but phase is Pending instead of Bound. Jan 11 19:47:39.061: FAIL: Unexpected error: <*errors.errorString | 0xc002e98890>: { s: "PersistentVolumeClaims [csi-hostpath9pqwb] not all in phase Bound within 5m0s", } PersistentVolumeClaims [csi-hostpath9pqwb] not all in phase Bound within 5m0s occurred [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "volume-expand-8983". STEP: Found 39 events. Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:37 +0000 UTC - event for csi-hostpath-attacher: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:37 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet ip-10-250-7-77.ec2.internal} Created: Created container csi-attacher Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:37 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet ip-10-250-7-77.ec2.internal} Pulled: Container image "quay.io/k8scsi/csi-attacher:v1.2.0" already present on machine Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:37 +0000 UTC - event for csi-hostpath-provisioner: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:37 +0000 UTC - event for csi-hostpath-resizer: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:37 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:37 +0000 UTC - event for csi-hostpathplugin-0: {kubelet ip-10-250-7-77.ec2.internal} Pulled: Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0" already present on machine Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:37 +0000 UTC - event for csi-snapshotter: {statefulset-controller } SuccessfulCreate: create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet ip-10-250-7-77.ec2.internal} Started: Started container csi-attacher Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for 
csi-hostpath-provisioner-0: {kubelet ip-10-250-7-77.ec2.internal} Created: Created container csi-provisioner Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet ip-10-250-7-77.ec2.internal} Pulled: Container image "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1" already present on machine Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet ip-10-250-7-77.ec2.internal} Pulling: Pulling image "quay.io/k8scsi/csi-resizer:v0.2.0" Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpath9pqwb: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-hostpath-volume-expand-8983" or manually created by system administrator Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpathplugin-0: {kubelet ip-10-250-7-77.ec2.internal} Pulled: Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpathplugin-0: {kubelet ip-10-250-7-77.ec2.internal} Started: Started container node-driver-registrar Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpathplugin-0: {kubelet ip-10-250-7-77.ec2.internal} Created: Created container node-driver-registrar Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpathplugin-0: {kubelet ip-10-250-7-77.ec2.internal} Started: Started container hostpath Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpathplugin-0: {kubelet ip-10-250-7-77.ec2.internal} Created: Created container hostpath Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-hostpathplugin-0: {kubelet ip-10-250-7-77.ec2.internal} Pulled: Container image "quay.io/k8scsi/hostpathplugin:v1.2.0-rc5" already present on machine Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:38 +0000 UTC - event for csi-snapshotter-0: {kubelet ip-10-250-7-77.ec2.internal} Pulling: Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1" Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet ip-10-250-7-77.ec2.internal} Started: Started container csi-provisioner Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet ip-10-250-7-77.ec2.internal} Started: Started container csi-resizer Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet ip-10-250-7-77.ec2.internal} Created: Created container csi-resizer Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet ip-10-250-7-77.ec2.internal} Pulled: Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.2.0" Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-hostpath9pqwb: {csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg": rpc error: code = Internal desc = failed to create volume 8662e4d4-34aa-11ea-818c-0e3085d0ce99: failed to attach device /csi-data-dir/8662e4d4-34aa-11ea-818c-0e3085d0ce99: exit status 1 Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-hostpath9pqwb: 
{csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } Provisioning: External provisioner is provisioning volume for claim "volume-expand-8983/csi-hostpath9pqwb" Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-hostpathplugin-0: {kubelet ip-10-250-7-77.ec2.internal} Created: Created container liveness-probe Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-hostpathplugin-0: {kubelet ip-10-250-7-77.ec2.internal} Started: Started container liveness-probe Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-snapshotter-0: {kubelet ip-10-250-7-77.ec2.internal} Started: Started container csi-snapshotter Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-snapshotter-0: {kubelet ip-10-250-7-77.ec2.internal} Pulled: Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1" Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:39 +0000 UTC - event for csi-snapshotter-0: {kubelet ip-10-250-7-77.ec2.internal} Created: Created container csi-snapshotter Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:41 +0000 UTC - event for csi-hostpath9pqwb: {csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg": rpc error: code = Internal desc = failed to create volume 874ecc4f-34aa-11ea-818c-0e3085d0ce99: failed to attach device /csi-data-dir/874ecc4f-34aa-11ea-818c-0e3085d0ce99: exit status 1 Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:43 +0000 UTC - event for csi-hostpath9pqwb: {csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg": rpc error: code = Internal desc = failed to create volume 88c5345a-34aa-11ea-818c-0e3085d0ce99: failed to attach device /csi-data-dir/88c5345a-34aa-11ea-818c-0e3085d0ce99: exit status 1 Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:48 +0000 UTC - event for csi-hostpath9pqwb: {csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg": rpc error: code = Internal desc = failed to create volume 8b6c712a-34aa-11ea-818c-0e3085d0ce99: failed to attach device /csi-data-dir/8b6c712a-34aa-11ea-818c-0e3085d0ce99: exit status 1 Jan 11 19:47:39.154: INFO: At 2020-01-11 19:42:56 +0000 UTC - event for csi-hostpath9pqwb: {csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg": rpc error: code = Internal desc = failed to create volume 9076c790-34aa-11ea-818c-0e3085d0ce99: failed to attach device /csi-data-dir/9076c790-34aa-11ea-818c-0e3085d0ce99: exit status 1 Jan 11 19:47:39.154: INFO: At 2020-01-11 19:43:12 +0000 UTC - event for csi-hostpath9pqwb: {csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg": rpc error: code = Internal desc = failed to create volume 9a45750a-34aa-11ea-818c-0e3085d0ce99: failed to attach device 
/csi-data-dir/9a45750a-34aa-11ea-818c-0e3085d0ce99: exit status 1 Jan 11 19:47:39.154: INFO: At 2020-01-11 19:43:45 +0000 UTC - event for csi-hostpath9pqwb: {csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg": rpc error: code = Internal desc = failed to create volume ad9c8793-34aa-11ea-818c-0e3085d0ce99: failed to attach device /csi-data-dir/ad9c8793-34aa-11ea-818c-0e3085d0ce99: exit status 1 Jan 11 19:47:39.154: INFO: At 2020-01-11 19:44:49 +0000 UTC - event for csi-hostpath9pqwb: {csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg": rpc error: code = Internal desc = failed to create volume d40722b8-34aa-11ea-818c-0e3085d0ce99: failed to attach device /csi-data-dir/d40722b8-34aa-11ea-818c-0e3085d0ce99: exit status 1 Jan 11 19:47:39.154: INFO: At 2020-01-11 19:46:58 +0000 UTC - event for csi-hostpath9pqwb: {csi-hostpath-volume-expand-8983_csi-hostpath-provisioner-0_1c102d54-069e-4a3d-b6ad-20aaeb122d01 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-8983-csi-hostpath-volume-expand-8983-sc8jvdg": rpc error: code = Internal desc = failed to create volume 2096d37a-34ab-11ea-818c-0e3085d0ce99: failed to attach device /csi-data-dir/2096d37a-34ab-11ea-818c-0e3085d0ce99: exit status 1 Jan 11 19:47:39.244: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 19:47:39.244: INFO: csi-hostpath-attacher-0 ip-10-250-7-77.ec2.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC }] Jan 11 19:47:39.245: INFO: csi-hostpath-provisioner-0 ip-10-250-7-77.ec2.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC }] Jan 11 19:47:39.245: INFO: csi-hostpath-resizer-0 ip-10-250-7-77.ec2.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC }] Jan 11 19:47:39.245: INFO: csi-hostpathplugin-0 ip-10-250-7-77.ec2.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC }] Jan 11 19:47:39.245: INFO: csi-snapshotter-0 ip-10-250-7-77.ec2.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 
2020-01-11 19:42:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 19:42:37 +0000 UTC }] Jan 11 19:47:39.245: INFO: Jan 11 19:47:39.425: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 19:47:39.514: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 55533 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:47:23 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:47:23 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:47:23 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:47:23 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:47:23 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:47:23 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:47:23 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:47:33 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:47:33 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:47:33 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:47:33 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d 
eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:47:39.515: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 19:47:39.605: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 19:47:39.809: INFO: pod-subpath-test-hostpath-cv6h started at 2020-01-11 19:47:34 +0000 UTC (1+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Init container init-volume-hostpath-cv6h ready: true, restart count 0 Jan 11 19:47:39.809: INFO: Container test-container-subpath-hostpath-cv6h ready: true, restart count 0 Jan 11 19:47:39.809: INFO: externalsvc-9sgx6 started at 2020-01-11 19:47:36 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Container externalsvc ready: true, restart 
count 0 Jan 11 19:47:39.809: INFO: concurrent-1578772020-qzj8j started at 2020-01-11 19:47:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Container c ready: true, restart count 0 Jan 11 19:47:39.809: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:47:39.809: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:47:39.809: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:47:39.809: INFO: concurrent-1578771960-jz9sd started at 2020-01-11 19:46:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Container c ready: true, restart count 0 Jan 11 19:47:39.809: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:47:39.809: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:47:39.809: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:47:39.809: INFO: externalsvc-cf7fp started at 2020-01-11 19:47:36 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Container externalsvc ready: true, restart count 0 Jan 11 19:47:39.809: INFO: pod-configmaps-7e495020-2d6a-446b-8955-0ecbc33a5394 started at 2020-01-11 19:45:19 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:39.809: INFO: Container createcm-volume-test ready: false, restart count 0 W0111 19:47:39.899466 8607 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
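The ProvisioningFailed events collected above show the external csi-hostpath provisioner failing over and over with "failed to attach device ... exit status 1", which is why claim csi-hostpath9pqwb never left Pending. A minimal sketch of pulling the same evidence by hand, assuming access to the kubeconfig the suite uses; the namespace, object and container names below are the ones that appear in this log:

export KUBECONFIG=/tmp/tm/kubeconfig/shoot.config
# Claim status plus the events recorded against it
kubectl -n volume-expand-8983 describe pvc csi-hostpath9pqwb
kubectl -n volume-expand-8983 get events --field-selector involvedObject.name=csi-hostpath9pqwb
# Logs of the external provisioner and of the hostpath driver container that failed to attach the device
kubectl -n volume-expand-8983 logs csi-hostpath-provisioner-0 -c csi-provisioner
kubectl -n volume-expand-8983 logs csi-hostpathplugin-0 -c hostpath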
Jan 11 19:47:40.119: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 19:47:40.119: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 19:47:40.209: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 55544 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:47:02 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:47:02 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:47:02 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:47:02 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:47:02 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:47:02 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:47:02 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:47:34 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:47:34 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:47:34 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:47:34 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 
eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 
eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:47:40.209: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 19:47:40.299: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 19:47:40.416: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:47:40.416: INFO: csi-snapshotter-0 started at 2020-01-11 19:42:37 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 11 19:47:40.416: 
INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:47:40.416: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:47:40.416: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:47:40.416: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 19:47:40.416: INFO: csi-hostpathplugin-0 started at 2020-01-11 19:42:37 +0000 UTC (0+3 container statuses recorded) Jan 11 19:47:40.416: INFO: Container hostpath ready: true, restart count 0 Jan 11 19:47:40.416: INFO: Container liveness-probe ready: true, restart count 0 Jan 11 19:47:40.416: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 11 19:47:40.416: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:47:40.416: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:47:40.416: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:47:40.416: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container coredns ready: true, restart count 0 Jan 11 19:47:40.416: INFO: csi-hostpath-attacher-0 started at 2020-01-11 19:42:37 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container csi-attacher ready: true, restart count 0 Jan 11 19:47:40.416: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:47:40.416: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:47:40.416: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:47:40.416: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:47:40.416: INFO: csi-hostpath-provisioner-0 started at 2020-01-11 19:42:37 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container csi-provisioner ready: true, restart count 0 Jan 11 19:47:40.416: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:47:40.416: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: 
Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:47:40.416: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:47:40.416: INFO: csi-hostpath-resizer-0 started at 2020-01-11 19:42:37 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container csi-resizer ready: true, restart count 0 Jan 11 19:47:40.416: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container coredns ready: true, restart count 0 Jan 11 19:47:40.416: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded) Jan 11 19:47:40.416: INFO: Container calico-typha ready: true, restart count 0 W0111 19:47:40.507216 8607 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:47:40.726: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 19:47:40.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-expand-8983" for this suite. Jan 11 19:48:11.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:14.492: INFO: namespace volume-expand-8983 deletion completed in 33.676020794s • Failure [340.247 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should not allow expansion of pvcs without AllowVolumeExpansion property [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:139 Jan 11 19:47:39.061: Unexpected error: <*errors.errorString | 0xc002e98890>: { s: "PersistentVolumeClaims [csi-hostpath9pqwb] not all in phase Bound within 5m0s", } PersistentVolumeClaims [csi-hostpath9pqwb] not all in phase Bound within 5m0s occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:366 ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:05.644: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6859 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:110 STEP: Creating configMap with name configmap-test-volume-map-c71c5f0a-0954-4ecc-a9fa-c27c10af3a14 STEP: Creating a pod to test consume configMaps Jan 11 19:48:06.463: INFO: Waiting up to 5m0s for pod "pod-configmaps-a6f79073-de92-4312-a81a-c4bc47fd1994" in namespace "configmap-6859" to be "success or failure" Jan 11 19:48:06.552: INFO: Pod "pod-configmaps-a6f79073-de92-4312-a81a-c4bc47fd1994": Phase="Pending", Reason="", readiness=false. Elapsed: 89.299357ms Jan 11 19:48:08.641: INFO: Pod "pod-configmaps-a6f79073-de92-4312-a81a-c4bc47fd1994": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.17865538s STEP: Saw pod success Jan 11 19:48:08.641: INFO: Pod "pod-configmaps-a6f79073-de92-4312-a81a-c4bc47fd1994" satisfied condition "success or failure" Jan 11 19:48:08.731: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-a6f79073-de92-4312-a81a-c4bc47fd1994 container configmap-volume-test: STEP: delete the pod Jan 11 19:48:08.920: INFO: Waiting for pod pod-configmaps-a6f79073-de92-4312-a81a-c4bc47fd1994 to disappear Jan 11 19:48:09.009: INFO: Pod pod-configmaps-a6f79073-de92-4312-a81a-c4bc47fd1994 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:09.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6859" for this suite. Jan 11 19:48:15.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:18.756: INFO: namespace configmap-6859 deletion completed in 9.656372011s • [SLOW TEST:13.113 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:110 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:11.770: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3930 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:107 STEP: Creating a pod to test downward api env vars Jan 11 19:48:12.643: INFO: Waiting up to 5m0s for pod "downward-api-6b6cffb2-f630-4797-8b4a-7224a4c2da6e" in namespace "downward-api-3930" to be "success or failure" Jan 11 19:48:12.733: INFO: Pod "downward-api-6b6cffb2-f630-4797-8b4a-7224a4c2da6e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 89.211587ms Jan 11 19:48:14.823: INFO: Pod "downward-api-6b6cffb2-f630-4797-8b4a-7224a4c2da6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179545501s STEP: Saw pod success Jan 11 19:48:14.823: INFO: Pod "downward-api-6b6cffb2-f630-4797-8b4a-7224a4c2da6e" satisfied condition "success or failure" Jan 11 19:48:14.914: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod downward-api-6b6cffb2-f630-4797-8b4a-7224a4c2da6e container dapi-container: STEP: delete the pod Jan 11 19:48:15.111: INFO: Waiting for pod downward-api-6b6cffb2-f630-4797-8b4a-7224a4c2da6e to disappear Jan 11 19:48:15.201: INFO: Pod downward-api-6b6cffb2-f630-4797-8b4a-7224a4c2da6e no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:15.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3930" for this suite. Jan 11 19:48:21.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:24.878: INFO: namespace downward-api-3930 deletion completed in 9.585335975s • [SLOW TEST:13.108 seconds] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:107 ------------------------------ [BeforeEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:08.064: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename job STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-1955 STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are not locally restarted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:110 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:16.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1955" for this suite. 
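The [sig-node] Downward API spec above passes because a host-network pod can read its host IP and pod IP from environment variables populated via the downward API. A minimal sketch of that mechanism with assumed names (this is not the suite's own manifest); with hostNetwork: true the two values are expected to coincide:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # hypothetical name
spec:
  hostNetwork: true                # the case exercised by the spec above
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29            # image already present on the nodes per the log
    command: ["sh", "-c", "env | grep -E 'HOST_IP|POD_IP'"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-api-demo     # once the pod has run, both variables should print the node IP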
Jan 11 19:48:25.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:28.578: INFO: namespace job-1955 deletion completed in 11.597822822s • [SLOW TEST:20.514 seconds] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are not locally restarted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:110 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:47:59.665: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-119 STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: set up a multi version CRD Jan 11 19:48:00.552: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:26.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-119" for this suite. 
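The CustomResourcePublishOpenAPI spec above marks one CRD version as not served and then checks that its definition disappears from the published OpenAPI document while the other version stays unchanged. A rough sketch of the same steps against a hypothetical two-version CRD; foos.example.com and the definition key below are assumptions, the suite builds its own CRD:

# Stop serving the second version of the CRD
kubectl patch crd foos.example.com --type=json \
  -p='[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
# The unserved version's definition should now be gone from the aggregated OpenAPI spec
kubectl get --raw /openapi/v2 | grep -c '"com.example.v2.Foo"'   # hypothetical definition key, expect 0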
Jan 11 19:48:32.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:35.892: INFO: namespace crd-publish-openapi-119 deletion completed in 9.583797004s • [SLOW TEST:36.227 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:24.879: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3052 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-volume-4f2b03a9-2dd9-4f36-90b3-0cd597b2ec79 STEP: Creating a pod to test consume configMaps Jan 11 19:48:25.710: INFO: Waiting up to 5m0s for pod "pod-configmaps-e73ca8a3-073d-4891-8ffe-6b40f4bc0f4f" in namespace "configmap-3052" to be "success or failure" Jan 11 19:48:25.800: INFO: Pod "pod-configmaps-e73ca8a3-073d-4891-8ffe-6b40f4bc0f4f": Phase="Pending", Reason="", readiness=false. Elapsed: 89.378568ms Jan 11 19:48:27.890: INFO: Pod "pod-configmaps-e73ca8a3-073d-4891-8ffe-6b40f4bc0f4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.17932374s STEP: Saw pod success Jan 11 19:48:27.890: INFO: Pod "pod-configmaps-e73ca8a3-073d-4891-8ffe-6b40f4bc0f4f" satisfied condition "success or failure" Jan 11 19:48:27.979: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-e73ca8a3-073d-4891-8ffe-6b40f4bc0f4f container configmap-volume-test: STEP: delete the pod Jan 11 19:48:28.171: INFO: Waiting for pod pod-configmaps-e73ca8a3-073d-4891-8ffe-6b40f4bc0f4f to disappear Jan 11 19:48:28.260: INFO: Pod pod-configmaps-e73ca8a3-073d-4891-8ffe-6b40f4bc0f4f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:28.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3052" for this suite. 
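The ConfigMap spec above mounts a single ConfigMap into the same pod through two separate volumes. A small sketch of that pattern with made-up names; each volume references the same ConfigMap and gets its own mount path:

kubectl create configmap shared-cm --from-literal=data-1=value-1   # hypothetical ConfigMap
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: cm-vol-1
      mountPath: /etc/configmap-volume-1
    - name: cm-vol-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: cm-vol-1
    configMap:
      name: shared-cm
  - name: cm-vol-2
    configMap:
      name: shared-cm
EOF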
Jan 11 19:48:34.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:37.944: INFO: namespace configmap-3052 deletion completed in 9.592685056s • [SLOW TEST:13.064 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:18.767: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5951 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:48:21.867: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5951 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-bf3175b1-ae9c-44a7-b776-694b6f0d74ae && mount --bind /tmp/local-volume-test-bf3175b1-ae9c-44a7-b776-694b6f0d74ae /tmp/local-volume-test-bf3175b1-ae9c-44a7-b776-694b6f0d74ae' Jan 11 19:48:23.254: INFO: stderr: "" Jan 11 19:48:23.254: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:48:23.254: INFO: Creating a PV followed by a PVC Jan 11 19:48:23.433: INFO: Waiting for PV local-pvjb6b2 to bind to PVC pvc-pjjk6 Jan 11 19:48:23.433: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-pjjk6] to have phase Bound Jan 11 19:48:23.523: INFO: PersistentVolumeClaim pvc-pjjk6 found but phase is Pending instead of Bound. 
Jan 11 19:48:25.613: INFO: PersistentVolumeClaim pvc-pjjk6 found and phase=Bound (2.179734862s) Jan 11 19:48:25.613: INFO: Waiting up to 3m0s for PersistentVolume local-pvjb6b2 to have phase Bound Jan 11 19:48:25.702: INFO: PersistentVolume local-pvjb6b2 found and phase=Bound (88.945354ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jan 11 19:48:28.238: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-4125944e-23fa-4c11-9a3a-cd5810788f4b --namespace=persistent-local-volumes-test-5951 -- stat -c %g /mnt/volume1' Jan 11 19:48:29.576: INFO: stderr: "" Jan 11 19:48:29.576: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod security-context-4125944e-23fa-4c11-9a3a-cd5810788f4b in namespace persistent-local-volumes-test-5951 [AfterEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:48:29.666: INFO: Deleting PersistentVolumeClaim "pvc-pjjk6" Jan 11 19:48:29.756: INFO: Deleting PersistentVolume "local-pvjb6b2" STEP: Removing the test directory Jan 11 19:48:29.847: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5951 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-bf3175b1-ae9c-44a7-b776-694b6f0d74ae && rm -r /tmp/local-volume-test-bf3175b1-ae9c-44a7-b776-694b6f0d74ae' Jan 11 19:48:31.570: INFO: stderr: "" Jan 11 19:48:31.571: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:31.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5951" for this suite. 
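The persistent-local-volumes steps above bind-mount a host directory on ip-10-250-27-25.ec2.internal, publish it as a local PersistentVolume, and then verify via stat -c %g (which printed 1234) that the pod's fsGroup was applied to the mount. The objects involved look roughly like the following sketch; all names, the path and the storage class are placeholders rather than the generated local-pvjb6b2 / pvc-pjjk6 values.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # placeholder
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-volume-test-example   # the bind-mounted directory on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["ip-10-250-27-25.ec2.internal"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-example           # placeholder claim bound to the PV above
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-check               # placeholder
spec:
  securityContext:
    fsGroup: 1234                   # the group id the test reads back with stat -c %g
  containers:
  - name: check
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"]
    volumeMounts:
    - name: vol1
      mountPath: /mnt/volume1
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: local-pvc-example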
Jan 11 19:48:38.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:48:41.321: INFO: namespace persistent-local-volumes-test-5951 deletion completed in 9.568744517s • [SLOW TEST:22.554 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:28.587: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5635 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl rolling-update /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1499 [It] should support rolling-update to same image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 11 19:48:29.229: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5635' Jan 11 19:48:29.658: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 11 19:48:29.658: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller Jan 11 19:48:29.839: INFO: scanned /root for discovery docs: Jan 11 19:48:29.839: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5635' Jan 11 19:48:43.448: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 11 19:48:43.448: INFO: stdout: "Created e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8\nScaling up e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Jan 11 19:48:43.448: INFO: stdout: "Created e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8\nScaling up e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jan 11 19:48:43.449: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5635' Jan 11 19:48:43.898: INFO: stderr: "" Jan 11 19:48:43.898: INFO: stdout: "e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8-s9qjg " Jan 11 19:48:43.898: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8-s9qjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5635' Jan 11 19:48:44.324: INFO: stderr: "" Jan 11 19:48:44.324: INFO: stdout: "true" Jan 11 19:48:44.324: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8-s9qjg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5635' Jan 11 19:48:44.747: INFO: stderr: "" Jan 11 19:48:44.747: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jan 11 19:48:44.747: INFO: e2e-test-httpd-rc-b75673d07799e812c8b09b480dc4f7a8-s9qjg is verified up and running [AfterEach] Kubectl rolling-update /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1505 Jan 11 19:48:44.747: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete rc e2e-test-httpd-rc --namespace=kubectl-5635' Jan 11 19:48:45.281: INFO: stderr: "" Jan 11 19:48:45.281: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:45.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5635" for this suite. Jan 11 19:48:57.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:00.963: INFO: namespace kubectl-5635 deletion completed in 15.589811761s • [SLOW TEST:32.375 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1494 should support rolling-update to same image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:05.831: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8526 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should create and stop a working application [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating all guestbook components Jan 11 19:48:06.471: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jan 11 19:48:06.471: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-8526' Jan 
11 19:48:07.436: INFO: stderr: "" Jan 11 19:48:07.436: INFO: stdout: "service/redis-slave created\n" Jan 11 19:48:07.436: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jan 11 19:48:07.436: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-8526' Jan 11 19:48:08.398: INFO: stderr: "" Jan 11 19:48:08.398: INFO: stdout: "service/redis-master created\n" Jan 11 19:48:08.398: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 11 19:48:08.398: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-8526' Jan 11 19:48:09.009: INFO: stderr: "" Jan 11 19:48:09.009: INFO: stdout: "service/frontend created\n" Jan 11 19:48:09.009: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jan 11 19:48:09.009: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-8526' Jan 11 19:48:09.969: INFO: stderr: "" Jan 11 19:48:09.969: INFO: stdout: "deployment.apps/frontend created\n" Jan 11 19:48:09.969: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: docker.io/library/redis:5.0.5-alpine resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 11 19:48:09.969: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-8526' Jan 11 19:48:10.925: INFO: stderr: "" Jan 11 19:48:10.926: INFO: stdout: "deployment.apps/redis-master created\n" Jan 11 19:48:10.926: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: docker.io/library/redis:5.0.5-alpine # We are only implementing the dns option of: # 
https://github.com/kubernetes/examples/blob/97c7ed0eb6555a4b667d2877f965d392e00abc45/guestbook/redis-slave/run.sh command: [ "redis-server", "--slaveof", "redis-master", "6379" ] resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jan 11 19:48:10.926: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-8526' Jan 11 19:48:11.889: INFO: stderr: "" Jan 11 19:48:11.889: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Jan 11 19:48:11.889: INFO: Waiting for all frontend pods to be Running. Jan 11 19:48:16.990: INFO: Waiting for frontend to serve content. Jan 11 19:48:17.176: INFO: Trying to add a new entry to the guestbook. Jan 11 19:48:17.356: INFO: Verifying that added entry can be retrieved. Jan 11 19:48:17.459: INFO: Failed to get response from guestbook. err: , response: {"data": ""} Jan 11 19:48:22.644: INFO: Failed to get response from guestbook. err: , response: {"data": ""} STEP: using delete to clean up resources Jan 11 19:48:27.824: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-8526' Jan 11 19:48:28.342: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:48:28.342: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jan 11 19:48:28.342: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-8526' Jan 11 19:48:28.862: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:48:28.862: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 11 19:48:28.862: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-8526' Jan 11 19:48:29.377: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:48:29.377: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 11 19:48:29.377: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-8526' Jan 11 19:48:29.888: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:48:29.888: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 11 19:48:29.888: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-8526' Jan 11 19:48:30.407: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:48:30.407: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 11 19:48:30.407: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-8526' Jan 11 19:48:30.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:48:30.924: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:30.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8526" for this suite. 
Jan 11 19:48:59.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:02.609: INFO: namespace kubectl-8526 deletion completed in 31.593529577s • [SLOW TEST:56.778 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:333 should create and stop a working application [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:13.179: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-638 STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing single file [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202 STEP: deploying csi-hostpath driver Jan 11 19:48:14.016: INFO: creating *v1.ServiceAccount: provisioning-638/csi-attacher Jan 11 19:48:14.106: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-638 Jan 11 19:48:14.106: INFO: Define cluster role external-attacher-runner-provisioning-638 Jan 11 19:48:14.200: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-638 Jan 11 19:48:14.289: INFO: creating *v1.Role: provisioning-638/external-attacher-cfg-provisioning-638 Jan 11 19:48:14.378: INFO: creating *v1.RoleBinding: provisioning-638/csi-attacher-role-cfg Jan 11 19:48:14.467: INFO: creating *v1.ServiceAccount: provisioning-638/csi-provisioner Jan 11 19:48:14.557: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-638 Jan 11 19:48:14.557: INFO: Define cluster role external-provisioner-runner-provisioning-638 Jan 11 19:48:14.647: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-638 Jan 11 19:48:14.737: INFO: creating *v1.Role: provisioning-638/external-provisioner-cfg-provisioning-638 Jan 11 19:48:14.826: INFO: creating *v1.RoleBinding: provisioning-638/csi-provisioner-role-cfg Jan 11 19:48:14.916: INFO: creating *v1.ServiceAccount: provisioning-638/csi-snapshotter Jan 11 19:48:15.005: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-638 Jan 11 19:48:15.005: INFO: Define cluster role external-snapshotter-runner-provisioning-638 Jan 11 19:48:15.095: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-638 Jan 11 19:48:15.187: INFO: creating *v1.Role: provisioning-638/external-snapshotter-leaderelection-provisioning-638 Jan 11 19:48:15.277: INFO: creating 
*v1.RoleBinding: provisioning-638/external-snapshotter-leaderelection Jan 11 19:48:15.366: INFO: creating *v1.ServiceAccount: provisioning-638/csi-resizer Jan 11 19:48:15.455: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-638 Jan 11 19:48:15.455: INFO: Define cluster role external-resizer-runner-provisioning-638 Jan 11 19:48:15.545: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-638 Jan 11 19:48:15.634: INFO: creating *v1.Role: provisioning-638/external-resizer-cfg-provisioning-638 Jan 11 19:48:15.724: INFO: creating *v1.RoleBinding: provisioning-638/csi-resizer-role-cfg Jan 11 19:48:15.813: INFO: creating *v1.Service: provisioning-638/csi-hostpath-attacher Jan 11 19:48:15.906: INFO: creating *v1.StatefulSet: provisioning-638/csi-hostpath-attacher Jan 11 19:48:15.996: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-638 Jan 11 19:48:16.085: INFO: creating *v1.Service: provisioning-638/csi-hostpathplugin Jan 11 19:48:16.179: INFO: creating *v1.StatefulSet: provisioning-638/csi-hostpathplugin Jan 11 19:48:16.269: INFO: creating *v1.Service: provisioning-638/csi-hostpath-provisioner Jan 11 19:48:16.362: INFO: creating *v1.StatefulSet: provisioning-638/csi-hostpath-provisioner Jan 11 19:48:16.451: INFO: creating *v1.Service: provisioning-638/csi-hostpath-resizer Jan 11 19:48:16.545: INFO: creating *v1.StatefulSet: provisioning-638/csi-hostpath-resizer Jan 11 19:48:16.635: INFO: creating *v1.Service: provisioning-638/csi-snapshotter Jan 11 19:48:16.728: INFO: creating *v1.StatefulSet: provisioning-638/csi-snapshotter Jan 11 19:48:16.817: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-638 Jan 11 19:48:16.906: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:48:16.906: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-638-csi-hostpath-provisioning-638-sc64rdp STEP: creating a claim Jan 11 19:48:16.996: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:48:17.086: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathlskqs] to have phase Bound Jan 11 19:48:17.175: INFO: PersistentVolumeClaim csi-hostpathlskqs found but phase is Pending instead of Bound. Jan 11 19:48:19.264: INFO: PersistentVolumeClaim csi-hostpathlskqs found and phase=Bound (2.178182245s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-jvzm STEP: Creating a pod to test subpath Jan 11 19:48:19.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm" in namespace "provisioning-638" to be "success or failure" Jan 11 19:48:19.622: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm": Phase="Pending", Reason="", readiness=false. Elapsed: 89.230311ms Jan 11 19:48:21.713: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179421848s Jan 11 19:48:23.802: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268569635s Jan 11 19:48:25.892: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.358583324s Jan 11 19:48:27.981: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447845606s Jan 11 19:48:30.071: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.537511517s STEP: Saw pod success Jan 11 19:48:30.071: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm" satisfied condition "success or failure" Jan 11 19:48:30.160: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-csi-hostpath-dynamicpv-jvzm container test-container-subpath-csi-hostpath-dynamicpv-jvzm: STEP: delete the pod Jan 11 19:48:30.350: INFO: Waiting for pod pod-subpath-test-csi-hostpath-dynamicpv-jvzm to disappear Jan 11 19:48:30.440: INFO: Pod pod-subpath-test-csi-hostpath-dynamicpv-jvzm no longer exists STEP: Deleting pod pod-subpath-test-csi-hostpath-dynamicpv-jvzm Jan 11 19:48:30.440: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm" in namespace "provisioning-638" STEP: Deleting pod Jan 11 19:48:30.528: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-jvzm" in namespace "provisioning-638" STEP: Deleting pvc Jan 11 19:48:30.617: INFO: Deleting PersistentVolumeClaim "csi-hostpathlskqs" Jan 11 19:48:30.708: INFO: Waiting up to 5m0s for PersistentVolume pvc-613bc164-01f8-4b27-837a-1d05cc478e1e to get deleted Jan 11 19:48:30.797: INFO: PersistentVolume pvc-613bc164-01f8-4b27-837a-1d05cc478e1e was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:48:30.887: INFO: deleting *v1.ServiceAccount: provisioning-638/csi-attacher Jan 11 19:48:30.978: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-638 Jan 11 19:48:31.069: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-638 Jan 11 19:48:31.161: INFO: deleting *v1.Role: provisioning-638/external-attacher-cfg-provisioning-638 Jan 11 19:48:31.251: INFO: deleting *v1.RoleBinding: provisioning-638/csi-attacher-role-cfg Jan 11 19:48:31.342: INFO: deleting *v1.ServiceAccount: provisioning-638/csi-provisioner Jan 11 19:48:31.433: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-638 Jan 11 19:48:31.525: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-638 Jan 11 19:48:31.616: INFO: deleting *v1.Role: provisioning-638/external-provisioner-cfg-provisioning-638 Jan 11 19:48:31.707: INFO: deleting *v1.RoleBinding: provisioning-638/csi-provisioner-role-cfg Jan 11 19:48:31.798: INFO: deleting *v1.ServiceAccount: provisioning-638/csi-snapshotter Jan 11 19:48:31.889: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-638 Jan 11 19:48:31.980: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-638 Jan 11 19:48:32.070: INFO: deleting *v1.Role: provisioning-638/external-snapshotter-leaderelection-provisioning-638 Jan 11 19:48:32.161: INFO: deleting *v1.RoleBinding: provisioning-638/external-snapshotter-leaderelection Jan 11 19:48:32.252: INFO: deleting *v1.ServiceAccount: provisioning-638/csi-resizer Jan 11 19:48:32.344: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-638 Jan 11 19:48:32.435: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-638 Jan 11 19:48:32.525: INFO: deleting *v1.Role: provisioning-638/external-resizer-cfg-provisioning-638 Jan 11 19:48:32.615: INFO: deleting *v1.RoleBinding: provisioning-638/csi-resizer-role-cfg Jan 11 19:48:32.706: INFO: deleting *v1.Service: provisioning-638/csi-hostpath-attacher Jan 11 19:48:32.802: INFO: deleting *v1.StatefulSet: provisioning-638/csi-hostpath-attacher Jan 11 19:48:32.893: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-638 Jan 11 19:48:32.984: INFO: deleting *v1.Service: provisioning-638/csi-hostpathplugin Jan 11 
19:48:33.080: INFO: deleting *v1.StatefulSet: provisioning-638/csi-hostpathplugin Jan 11 19:48:33.171: INFO: deleting *v1.Service: provisioning-638/csi-hostpath-provisioner Jan 11 19:48:33.266: INFO: deleting *v1.StatefulSet: provisioning-638/csi-hostpath-provisioner Jan 11 19:48:33.357: INFO: deleting *v1.Service: provisioning-638/csi-hostpath-resizer Jan 11 19:48:33.453: INFO: deleting *v1.StatefulSet: provisioning-638/csi-hostpath-resizer Jan 11 19:48:33.544: INFO: deleting *v1.Service: provisioning-638/csi-snapshotter Jan 11 19:48:33.641: INFO: deleting *v1.StatefulSet: provisioning-638/csi-snapshotter Jan 11 19:48:33.731: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-638 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:33.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled STEP: Destroying namespace "provisioning-638" for this suite. Jan 11 19:49:02.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:05.482: INFO: namespace provisioning-638 deletion completed in 31.569363862s • [SLOW TEST:52.303 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support existing single file [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202 ------------------------------ SSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:41.330: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-1772 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath directory is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:223 Jan 11 19:48:41.968: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:48:42.059: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpath-cqs7 STEP: Checking for subpath error in container status Jan 11 19:48:46.330: INFO: Deleting pod 
"pod-subpath-test-hostpath-cqs7" in namespace "provisioning-1772" Jan 11 19:48:46.420: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-cqs7" to be fully deleted STEP: Deleting pod Jan 11 19:48:58.599: INFO: Deleting pod "pod-subpath-test-hostpath-cqs7" in namespace "provisioning-1772" Jan 11 19:48:58.689: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:58.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-1772" for this suite. Jan 11 19:49:05.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:08.346: INFO: namespace provisioning-1772 deletion completed in 9.566761122s • [SLOW TEST:27.016 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if subpath directory is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:223 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:85 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:14.496: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume-expand STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-expand-7991 STEP: Waiting for a default service account to be provisioned in namespace [It] should not allow expansion of pvcs without AllowVolumeExpansion property /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:139 STEP: deploying csi-hostpath driver Jan 11 19:48:15.539: INFO: creating *v1.ServiceAccount: volume-expand-7991/csi-attacher Jan 11 19:48:15.629: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-expand-7991 Jan 11 19:48:15.629: INFO: Define cluster role external-attacher-runner-volume-expand-7991 Jan 11 19:48:15.719: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-7991 Jan 
11 19:48:15.809: INFO: creating *v1.Role: volume-expand-7991/external-attacher-cfg-volume-expand-7991 Jan 11 19:48:15.898: INFO: creating *v1.RoleBinding: volume-expand-7991/csi-attacher-role-cfg Jan 11 19:48:15.988: INFO: creating *v1.ServiceAccount: volume-expand-7991/csi-provisioner Jan 11 19:48:16.078: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-expand-7991 Jan 11 19:48:16.078: INFO: Define cluster role external-provisioner-runner-volume-expand-7991 Jan 11 19:48:16.168: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-7991 Jan 11 19:48:16.257: INFO: creating *v1.Role: volume-expand-7991/external-provisioner-cfg-volume-expand-7991 Jan 11 19:48:16.347: INFO: creating *v1.RoleBinding: volume-expand-7991/csi-provisioner-role-cfg Jan 11 19:48:16.438: INFO: creating *v1.ServiceAccount: volume-expand-7991/csi-snapshotter Jan 11 19:48:16.527: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volume-expand-7991 Jan 11 19:48:16.527: INFO: Define cluster role external-snapshotter-runner-volume-expand-7991 Jan 11 19:48:16.617: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-7991 Jan 11 19:48:16.706: INFO: creating *v1.Role: volume-expand-7991/external-snapshotter-leaderelection-volume-expand-7991 Jan 11 19:48:16.796: INFO: creating *v1.RoleBinding: volume-expand-7991/external-snapshotter-leaderelection Jan 11 19:48:16.885: INFO: creating *v1.ServiceAccount: volume-expand-7991/csi-resizer Jan 11 19:48:16.974: INFO: creating *v1.ClusterRole: external-resizer-runner-volume-expand-7991 Jan 11 19:48:16.974: INFO: Define cluster role external-resizer-runner-volume-expand-7991 Jan 11 19:48:17.064: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-7991 Jan 11 19:48:17.153: INFO: creating *v1.Role: volume-expand-7991/external-resizer-cfg-volume-expand-7991 Jan 11 19:48:17.243: INFO: creating *v1.RoleBinding: volume-expand-7991/csi-resizer-role-cfg Jan 11 19:48:17.332: INFO: creating *v1.Service: volume-expand-7991/csi-hostpath-attacher Jan 11 19:48:17.426: INFO: creating *v1.StatefulSet: volume-expand-7991/csi-hostpath-attacher Jan 11 19:48:17.516: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volume-expand-7991 Jan 11 19:48:17.605: INFO: creating *v1.Service: volume-expand-7991/csi-hostpathplugin Jan 11 19:48:17.699: INFO: creating *v1.StatefulSet: volume-expand-7991/csi-hostpathplugin Jan 11 19:48:17.788: INFO: creating *v1.Service: volume-expand-7991/csi-hostpath-provisioner Jan 11 19:48:17.881: INFO: creating *v1.StatefulSet: volume-expand-7991/csi-hostpath-provisioner Jan 11 19:48:17.971: INFO: creating *v1.Service: volume-expand-7991/csi-hostpath-resizer Jan 11 19:48:18.065: INFO: creating *v1.StatefulSet: volume-expand-7991/csi-hostpath-resizer Jan 11 19:48:18.155: INFO: creating *v1.Service: volume-expand-7991/csi-snapshotter Jan 11 19:48:18.248: INFO: creating *v1.StatefulSet: volume-expand-7991/csi-snapshotter Jan 11 19:48:18.337: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-7991 Jan 11 19:48:18.427: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:48:18.427: INFO: Creating resource for dynamic PV STEP: creating a StorageClass volume-expand-7991-csi-hostpath-volume-expand-7991-sc8fdsd STEP: creating a claim Jan 11 19:48:18.606: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath24bjm] to have phase Bound Jan 11 19:48:18.695: INFO: PersistentVolumeClaim csi-hostpath24bjm found but phase is Pending instead of Bound. 
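The long run of "forbidden: only dynamically provisioned pvc can be resized..." errors that follows is the expected outcome of this case: the StorageClass created for it omits allowVolumeExpansion, so the API server rejects every attempt to grow the claim from 5Gi to 6Gi. For contrast, a class that would permit the resize looks roughly like the sketch below; the name and provisioner are placeholders for the generated csi-hostpath-volume-expand-... values.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-expandable     # placeholder
provisioner: csi-hostpath           # placeholder for the per-namespace driver name
allowVolumeExpansion: true          # left unset (defaults to false) in the test's class, hence the errors
volumeBindingMode: Immediate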
Jan 11 19:48:20.784: INFO: PersistentVolumeClaim csi-hostpath24bjm found and phase=Bound (2.178473354s) STEP: Expanding non-expandable pvc Jan 11 19:48:20.964: INFO: currentPvcSize {{5368709120 0} {} 5Gi BinarySI}, newSize {{6442450944 0} {} BinarySI} Jan 11 19:48:21.143: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:23.323: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:25.323: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:27.323: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:29.331: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:31.326: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:33.322: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:35.322: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:37.323: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:39.323: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:41.322: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:43.323: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:45.322: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:47.323: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass 
that provisions the pvc must support resize Jan 11 19:48:49.324: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:51.322: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 19:48:51.502: INFO: Error updating pvc csi-hostpath24bjm with persistentvolumeclaims "csi-hostpath24bjm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize STEP: Deleting pvc Jan 11 19:48:51.502: INFO: Deleting PersistentVolumeClaim "csi-hostpath24bjm" Jan 11 19:48:51.593: INFO: Waiting up to 5m0s for PersistentVolume pvc-87cc12ba-3c28-4ee7-a2f8-2a7a52127417 to get deleted Jan 11 19:48:51.682: INFO: PersistentVolume pvc-87cc12ba-3c28-4ee7-a2f8-2a7a52127417 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:48:51.773: INFO: deleting *v1.ServiceAccount: volume-expand-7991/csi-attacher Jan 11 19:48:51.864: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-7991 Jan 11 19:48:51.955: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-7991 Jan 11 19:48:52.046: INFO: deleting *v1.Role: volume-expand-7991/external-attacher-cfg-volume-expand-7991 Jan 11 19:48:52.137: INFO: deleting *v1.RoleBinding: volume-expand-7991/csi-attacher-role-cfg Jan 11 19:48:52.227: INFO: deleting *v1.ServiceAccount: volume-expand-7991/csi-provisioner Jan 11 19:48:52.318: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-expand-7991 Jan 11 19:48:52.408: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-7991 Jan 11 19:48:52.499: INFO: deleting *v1.Role: volume-expand-7991/external-provisioner-cfg-volume-expand-7991 Jan 11 19:48:52.590: INFO: deleting *v1.RoleBinding: volume-expand-7991/csi-provisioner-role-cfg Jan 11 19:48:52.680: INFO: deleting *v1.ServiceAccount: volume-expand-7991/csi-snapshotter Jan 11 19:48:52.771: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volume-expand-7991 Jan 11 19:48:52.862: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-7991 Jan 11 19:48:52.952: INFO: deleting *v1.Role: volume-expand-7991/external-snapshotter-leaderelection-volume-expand-7991 Jan 11 19:48:53.043: INFO: deleting *v1.RoleBinding: volume-expand-7991/external-snapshotter-leaderelection Jan 11 19:48:53.134: INFO: deleting *v1.ServiceAccount: volume-expand-7991/csi-resizer Jan 11 19:48:53.224: INFO: deleting *v1.ClusterRole: external-resizer-runner-volume-expand-7991 Jan 11 19:48:53.315: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-7991 Jan 11 19:48:53.405: INFO: deleting *v1.Role: volume-expand-7991/external-resizer-cfg-volume-expand-7991 Jan 11 19:48:53.496: INFO: deleting *v1.RoleBinding: volume-expand-7991/csi-resizer-role-cfg Jan 11 19:48:53.587: INFO: deleting *v1.Service: volume-expand-7991/csi-hostpath-attacher Jan 11 19:48:53.682: INFO: deleting *v1.StatefulSet: volume-expand-7991/csi-hostpath-attacher Jan 11 19:48:53.773: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volume-expand-7991 Jan 11 19:48:53.864: INFO: deleting *v1.Service: volume-expand-7991/csi-hostpathplugin Jan 11 19:48:53.958: INFO: deleting *v1.StatefulSet: 
volume-expand-7991/csi-hostpathplugin Jan 11 19:48:54.049: INFO: deleting *v1.Service: volume-expand-7991/csi-hostpath-provisioner Jan 11 19:48:54.142: INFO: deleting *v1.StatefulSet: volume-expand-7991/csi-hostpath-provisioner Jan 11 19:48:54.233: INFO: deleting *v1.Service: volume-expand-7991/csi-hostpath-resizer Jan 11 19:48:54.327: INFO: deleting *v1.StatefulSet: volume-expand-7991/csi-hostpath-resizer Jan 11 19:48:54.418: INFO: deleting *v1.Service: volume-expand-7991/csi-snapshotter Jan 11 19:48:54.513: INFO: deleting *v1.StatefulSet: volume-expand-7991/csi-snapshotter Jan 11 19:48:54.604: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-7991 [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:54.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-expand-7991" for this suite. Jan 11 19:49:07.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:10.354: INFO: namespace volume-expand-7991 deletion completed in 15.569427502s • [SLOW TEST:55.858 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (block volmode)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should not allow expansion of pvcs without AllowVolumeExpansion property /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:139 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:35.895: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-795 STEP: Waiting for a default service account to be provisioned in namespace [It] should not require VolumeAttach for drivers without attachment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:263 STEP: deploying csi mock driver Jan 11 19:48:37.037: INFO: creating *v1.ServiceAccount: csi-mock-volumes-795/csi-attacher Jan 11 19:48:37.127: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-795 Jan 11 19:48:37.127: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-795 Jan 11 19:48:37.217: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-795 Jan 11 19:48:37.307: INFO: creating *v1.Role: csi-mock-volumes-795/external-attacher-cfg-csi-mock-volumes-795 Jan 11 19:48:37.397: INFO: creating *v1.RoleBinding: csi-mock-volumes-795/csi-attacher-role-cfg Jan 11 19:48:37.487: INFO: 
creating *v1.ServiceAccount: csi-mock-volumes-795/csi-provisioner Jan 11 19:48:37.578: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-795 Jan 11 19:48:37.578: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-795 Jan 11 19:48:37.668: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-795 Jan 11 19:48:37.758: INFO: creating *v1.Role: csi-mock-volumes-795/external-provisioner-cfg-csi-mock-volumes-795 Jan 11 19:48:37.849: INFO: creating *v1.RoleBinding: csi-mock-volumes-795/csi-provisioner-role-cfg Jan 11 19:48:37.939: INFO: creating *v1.ServiceAccount: csi-mock-volumes-795/csi-resizer Jan 11 19:48:38.029: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-795 Jan 11 19:48:38.029: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-795 Jan 11 19:48:38.119: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-795 Jan 11 19:48:38.209: INFO: creating *v1.Role: csi-mock-volumes-795/external-resizer-cfg-csi-mock-volumes-795 Jan 11 19:48:38.299: INFO: creating *v1.RoleBinding: csi-mock-volumes-795/csi-resizer-role-cfg Jan 11 19:48:38.388: INFO: creating *v1.ServiceAccount: csi-mock-volumes-795/csi-mock Jan 11 19:48:38.478: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-795 Jan 11 19:48:38.568: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-795 Jan 11 19:48:38.658: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-795 Jan 11 19:48:38.748: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-795 Jan 11 19:48:38.838: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-795 Jan 11 19:48:38.928: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-795 Jan 11 19:48:39.019: INFO: creating *v1.StatefulSet: csi-mock-volumes-795/csi-mockplugin Jan 11 19:48:39.109: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-795 Jan 11 19:48:39.199: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-795" STEP: Creating pod Jan 11 19:48:39.467: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:48:39.559: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-wb8dq] to have phase Bound Jan 11 19:48:39.649: INFO: PersistentVolumeClaim pvc-wb8dq found but phase is Pending instead of Bound. 
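The csi-mock-volumes case above ("should not require VolumeAttach for drivers without attachment") hinges on the attachRequired field of the CSIDriver object registered at 19:48:39.109: when it is false, no VolumeAttachment is created for pods using the driver, which is what the steps that follow check. A minimal sketch of such an object; the driver name is a placeholder for the generated csi-mock-csi-mock-volumes-795.

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: csi-mock.example.com        # placeholder driver name
spec:
  attachRequired: false             # no VolumeAttachment objects are needed for this driver
  podInfoOnMount: false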
Jan 11 19:48:41.739: INFO: PersistentVolumeClaim pvc-wb8dq found and phase=Bound (2.179779699s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-hf294 Jan 11 19:48:44.458: INFO: Deleting pod "pvc-volume-tester-hf294" in namespace "csi-mock-volumes-795" Jan 11 19:48:44.549: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hf294" to be fully deleted STEP: Deleting claim pvc-wb8dq Jan 11 19:48:54.908: INFO: Waiting up to 2m0s for PersistentVolume pvc-1eb28530-b2d5-4ca1-b971-bed1f14bc424 to get deleted Jan 11 19:48:54.998: INFO: PersistentVolume pvc-1eb28530-b2d5-4ca1-b971-bed1f14bc424 found and phase=Released (89.186606ms) Jan 11 19:48:57.088: INFO: PersistentVolume pvc-1eb28530-b2d5-4ca1-b971-bed1f14bc424 was removed STEP: Deleting storageclass csi-mock-volumes-795-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 19:48:57.179: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-795/csi-attacher Jan 11 19:48:57.270: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-795 Jan 11 19:48:57.361: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-795 Jan 11 19:48:57.452: INFO: deleting *v1.Role: csi-mock-volumes-795/external-attacher-cfg-csi-mock-volumes-795 Jan 11 19:48:57.543: INFO: deleting *v1.RoleBinding: csi-mock-volumes-795/csi-attacher-role-cfg Jan 11 19:48:57.635: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-795/csi-provisioner Jan 11 19:48:57.726: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-795 Jan 11 19:48:57.817: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-795 Jan 11 19:48:57.909: INFO: deleting *v1.Role: csi-mock-volumes-795/external-provisioner-cfg-csi-mock-volumes-795 Jan 11 19:48:58.000: INFO: deleting *v1.RoleBinding: csi-mock-volumes-795/csi-provisioner-role-cfg Jan 11 19:48:58.091: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-795/csi-resizer Jan 11 19:48:58.182: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-795 Jan 11 19:48:58.273: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-795 Jan 11 19:48:58.365: INFO: deleting *v1.Role: csi-mock-volumes-795/external-resizer-cfg-csi-mock-volumes-795 Jan 11 19:48:58.456: INFO: deleting *v1.RoleBinding: csi-mock-volumes-795/csi-resizer-role-cfg Jan 11 19:48:58.547: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-795/csi-mock Jan 11 19:48:58.638: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-795 Jan 11 19:48:58.730: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-795 Jan 11 19:48:58.822: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-795 Jan 11 19:48:58.913: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-795 Jan 11 19:48:59.004: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-795 Jan 11 19:48:59.095: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-795 Jan 11 19:48:59.187: INFO: deleting *v1.StatefulSet: csi-mock-volumes-795/csi-mockplugin Jan 11 19:48:59.278: INFO: deleting *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-795 [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:48:59.459: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-795" for this suite. Jan 11 19:49:07.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:11.129: INFO: namespace csi-mock-volumes-795 deletion completed in 11.579604435s • [SLOW TEST:35.234 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:241 should not require VolumeAttach for drivers without attachment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:263 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:02.612: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-2029 STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:75 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jan 11 19:49:03.346: INFO: Waiting up to 5m0s for pod "security-context-5122c4a2-eaea-47c9-8940-eb11fb396b72" in namespace "security-context-2029" to be "success or failure" Jan 11 19:49:03.435: INFO: Pod "security-context-5122c4a2-eaea-47c9-8940-eb11fb396b72": Phase="Pending", Reason="", readiness=false. Elapsed: 89.887679ms Jan 11 19:49:05.526: INFO: Pod "security-context-5122c4a2-eaea-47c9-8940-eb11fb396b72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180010058s STEP: Saw pod success Jan 11 19:49:05.526: INFO: Pod "security-context-5122c4a2-eaea-47c9-8940-eb11fb396b72" satisfied condition "success or failure" Jan 11 19:49:05.615: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod security-context-5122c4a2-eaea-47c9-8940-eb11fb396b72 container test-container: STEP: delete the pod Jan 11 19:49:05.805: INFO: Waiting for pod security-context-5122c4a2-eaea-47c9-8940-eb11fb396b72 to disappear Jan 11 19:49:05.894: INFO: Pod security-context-5122c4a2-eaea-47c9-8940-eb11fb396b72 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:05.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-2029" for this suite. 
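As an aside for readers who want to reproduce the pod.Spec.SecurityContext.RunAsUser check above outside the e2e framework: the test boils down to creating a pod whose pod-level security context pins the UID and whose container prints its effective UID. A minimal sketch using the Go API types follows; the pod name, image, and UID value are illustrative choices, not taken from this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // illustrative non-root UID; the e2e suite picks its own value
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: corev1.PodSpec{
			// Pod-level SecurityContext: every container runs as this UID
			// unless it overrides it at the container level.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "id -u"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Creating such a pod and reading the container log should show the pinned UID, which is essentially what the "success or failure" wait above asserts.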
Jan 11 19:49:12.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:15.580: INFO: namespace security-context-2029 deletion completed in 9.593972578s • [SLOW TEST:12.968 seconds] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:75 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:48:37.951: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-2239 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when CSIDriver does not exist /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347 STEP: deploying csi mock driver Jan 11 19:48:38.934: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2239/csi-attacher Jan 11 19:48:39.024: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2239 Jan 11 19:48:39.024: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2239 Jan 11 19:48:39.114: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2239 Jan 11 19:48:39.204: INFO: creating *v1.Role: csi-mock-volumes-2239/external-attacher-cfg-csi-mock-volumes-2239 Jan 11 19:48:39.293: INFO: creating *v1.RoleBinding: csi-mock-volumes-2239/csi-attacher-role-cfg Jan 11 19:48:39.383: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2239/csi-provisioner Jan 11 19:48:39.472: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2239 Jan 11 19:48:39.472: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2239 Jan 11 19:48:39.562: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2239 Jan 11 19:48:39.652: INFO: creating *v1.Role: csi-mock-volumes-2239/external-provisioner-cfg-csi-mock-volumes-2239 Jan 11 19:48:39.742: INFO: creating *v1.RoleBinding: csi-mock-volumes-2239/csi-provisioner-role-cfg Jan 11 19:48:39.832: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2239/csi-resizer Jan 11 19:48:39.922: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2239 Jan 11 19:48:39.922: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2239 Jan 11 19:48:40.011: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2239 Jan 11 19:48:40.102: INFO: creating *v1.Role: csi-mock-volumes-2239/external-resizer-cfg-csi-mock-volumes-2239 Jan 11 19:48:40.191: INFO: creating *v1.RoleBinding: csi-mock-volumes-2239/csi-resizer-role-cfg Jan 11 19:48:40.281: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2239/csi-mock Jan 11 19:48:40.371: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2239 Jan 11 19:48:40.461: INFO: 
creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2239 Jan 11 19:48:40.551: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2239 Jan 11 19:48:40.641: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2239 Jan 11 19:48:40.730: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2239 Jan 11 19:48:40.820: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2239 Jan 11 19:48:40.910: INFO: creating *v1.StatefulSet: csi-mock-volumes-2239/csi-mockplugin Jan 11 19:48:41.001: INFO: creating *v1.StatefulSet: csi-mock-volumes-2239/csi-mockplugin-attacher STEP: Creating pod Jan 11 19:48:41.271: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:48:41.363: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-lpnfx] to have phase Bound Jan 11 19:48:41.452: INFO: PersistentVolumeClaim pvc-lpnfx found but phase is Pending instead of Bound. Jan 11 19:48:43.542: INFO: PersistentVolumeClaim pvc-lpnfx found and phase=Bound (2.179207559s) STEP: Deleting the previously created pod Jan 11 19:48:49.992: INFO: Deleting pod "pvc-volume-tester-5f4w6" in namespace "csi-mock-volumes-2239" Jan 11 19:48:50.083: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5f4w6" to be fully deleted STEP: Checking CSI driver logs Jan 11 19:49:04.384: INFO: CSI driver logs: mock driver started gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2239","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d1288874-9288-4091-8a05-db51f9b0bdbe","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d1288874-9288-4091-8a05-db51f9b0bdbe"}}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2239","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""} gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2239","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2239","max_volumes_per_node":2},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-2239","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d1288874-9288-4091-8a05-db51f9b0bdbe","storage.kubernetes.io/csiProvisionerIdentity":"1578772123296-8081-csi-mock-csi-mock-volumes-2239"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d1288874-9288-4091-8a05-db51f9b0bdbe/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d1288874-9288-4091-8a05-db51f9b0bdbe","storage.kubernetes.io/csiProvisionerIdentity":"1578772123296-8081-csi-mock-csi-mock-volumes-2239"}},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d1288874-9288-4091-8a05-db51f9b0bdbe/globalmount","target_path":"/var/lib/kubelet/pods/295be60a-4f8c-4b10-821e-e23b9c9a0503/volumes/kubernetes.io~csi/pvc-d1288874-9288-4091-8a05-db51f9b0bdbe/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d1288874-9288-4091-8a05-db51f9b0bdbe","storage.kubernetes.io/csiProvisionerIdentity":"1578772123296-8081-csi-mock-csi-mock-volumes-2239"}},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/295be60a-4f8c-4b10-821e-e23b9c9a0503/volumes/kubernetes.io~csi/pvc-d1288874-9288-4091-8a05-db51f9b0bdbe/mount"},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d1288874-9288-4091-8a05-db51f9b0bdbe/globalmount"},"Response":{},"Error":""} gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-2239"},"Response":{},"Error":""} Jan 11 19:49:04.384: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}} STEP: Deleting pod pvc-volume-tester-5f4w6 Jan 11 19:49:04.384: INFO: Deleting pod "pvc-volume-tester-5f4w6" in namespace "csi-mock-volumes-2239" STEP: Deleting claim pvc-lpnfx Jan 11 19:49:04.655: INFO: Waiting up to 2m0s for PersistentVolume pvc-d1288874-9288-4091-8a05-db51f9b0bdbe to get deleted Jan 11 19:49:04.744: INFO: PersistentVolume pvc-d1288874-9288-4091-8a05-db51f9b0bdbe found and phase=Released (89.495329ms) Jan 11 19:49:06.834: INFO: PersistentVolume pvc-d1288874-9288-4091-8a05-db51f9b0bdbe was removed STEP: Deleting storageclass csi-mock-volumes-2239-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 19:49:06.925: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2239/csi-attacher Jan 11 19:49:07.016: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2239 Jan 11 19:49:07.107: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2239 Jan 11 19:49:07.198: INFO: deleting *v1.Role: csi-mock-volumes-2239/external-attacher-cfg-csi-mock-volumes-2239 Jan 11 19:49:07.289: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2239/csi-attacher-role-cfg Jan 11 19:49:07.380: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2239/csi-provisioner Jan 11 19:49:07.472: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2239 Jan 11 19:49:07.563: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2239 Jan 11 19:49:07.654: INFO: deleting *v1.Role: csi-mock-volumes-2239/external-provisioner-cfg-csi-mock-volumes-2239 Jan 11 19:49:07.744: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2239/csi-provisioner-role-cfg Jan 11 19:49:07.835: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2239/csi-resizer Jan 11 19:49:07.926: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2239 Jan 11 19:49:08.018: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2239 Jan 11 19:49:08.108: INFO: deleting *v1.Role: csi-mock-volumes-2239/external-resizer-cfg-csi-mock-volumes-2239 Jan 11 19:49:08.199: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2239/csi-resizer-role-cfg Jan 11 19:49:08.291: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2239/csi-mock Jan 11 19:49:08.382: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2239 Jan 11 19:49:08.473: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2239 Jan 11 19:49:08.564: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2239 Jan 11 19:49:08.654: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2239 Jan 11 19:49:08.745: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2239 Jan 11 19:49:08.836: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2239 Jan 11 19:49:08.927: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2239/csi-mockplugin Jan 11 19:49:09.018: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2239/csi-mockplugin-attacher [AfterEach] [sig-storage] CSI mock volume 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:09.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-2239" for this suite. Jan 11 19:49:15.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:18.778: INFO: namespace csi-mock-volumes-2239 deletion completed in 9.578721387s • [SLOW TEST:40.827 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:297 should not be passed when CSIDriver does not exist /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:05.489: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7234 STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jan 11 19:49:08.790: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec pod-sharedvolume-651d99ef-24bc-4367-8bdc-1b693746d304 -c busybox-main-container --namespace=emptydir-7234 -- cat /usr/share/volumeshare/shareddata.txt' Jan 11 19:49:10.194: INFO: stderr: "" Jan 11 19:49:10.194: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:10.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7234" for this suite. 
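The emptydir-7234 block above exercises a single emptyDir volume mounted by two containers of the same pod. A rough equivalent of such a pod, written with the Go API types, is sketched below; the container names, the busybox image, and the sleep commands are illustrative, while the mount path and message mirror the kubectl exec output in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-demo"},
		Spec: corev1.PodSpec{
			// One emptyDir volume, mounted by both containers at the same path.
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					// Reader: stays up so the file can be read via kubectl exec.
					Name:         "busybox-main-container",
					Image:        "busybox", // illustrative image
					Command:      []string{"sh", "-c", "sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
				},
				{
					// Writer: drops a file into the shared volume, then idles.
					Name:         "busybox-sub-container",
					Image:        "busybox",
					Command:      []string{"sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Once such a pod is running, kubectl exec <pod> -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt prints the message written by the other container, which is what the exec call logged above checks.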
Jan 11 19:49:16.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:19.849: INFO: namespace emptydir-7234 deletion completed in 9.564885899s • [SLOW TEST:14.360 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:08.368: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2705 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 19:49:09.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ff79ffb-6e50-4594-880f-1fa4272855bb" in namespace "projected-2705" to be "success or failure" Jan 11 19:49:09.185: INFO: Pod "downwardapi-volume-2ff79ffb-6e50-4594-880f-1fa4272855bb": Phase="Pending", Reason="", readiness=false. Elapsed: 89.253671ms Jan 11 19:49:11.274: INFO: Pod "downwardapi-volume-2ff79ffb-6e50-4594-880f-1fa4272855bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178509179s STEP: Saw pod success Jan 11 19:49:11.274: INFO: Pod "downwardapi-volume-2ff79ffb-6e50-4594-880f-1fa4272855bb" satisfied condition "success or failure" Jan 11 19:49:11.363: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-2ff79ffb-6e50-4594-880f-1fa4272855bb container client-container: STEP: delete the pod Jan 11 19:49:11.552: INFO: Waiting for pod downwardapi-volume-2ff79ffb-6e50-4594-880f-1fa4272855bb to disappear Jan 11 19:49:11.641: INFO: Pod downwardapi-volume-2ff79ffb-6e50-4594-880f-1fa4272855bb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:11.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2705" for this suite. 
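For the projected downward API test above, the interesting knob is DefaultMode on the projected volume source, which sets the permission bits of every projected file that does not carry its own Mode. A hedged sketch of a comparable pod follows; the 0400 mode, image, mount path, and file name are assumptions for illustration only.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // DefaultMode under test; the e2e suite uses its own value
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// DefaultMode applies to every file projected into the volume
						// unless an individual item overrides it with its own Mode.
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative; the suite uses its own test image
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}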
Jan 11 19:49:18.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:21.296: INFO: namespace projected-2705 deletion completed in 9.564919611s • [SLOW TEST:12.928 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:15.605: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2718 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:17.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2718" for this suite. Jan 11 19:49:23.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:27.075: INFO: namespace resourcequota-2718 deletion completed in 9.592690662s • [SLOW TEST:11.470 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:10.357: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1405 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl label /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1192 STEP: creating the pod Jan 11 19:49:11.051: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-1405' Jan 11 19:49:12.003: INFO: stderr: "" Jan 11 19:49:12.003: INFO: stdout: "pod/pause created\n" Jan 11 19:49:12.003: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 11 19:49:12.004: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1405" to be "running and ready" Jan 11 19:49:12.093: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 89.752811ms Jan 11 19:49:14.183: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.179675621s Jan 11 19:49:14.183: INFO: Pod "pause" satisfied condition "running and ready" Jan 11 19:49:14.183: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: adding the label testing-label with value testing-label-value to a pod Jan 11 19:49:14.183: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config label pods pause testing-label=testing-label-value --namespace=kubectl-1405' Jan 11 19:49:14.700: INFO: stderr: "" Jan 11 19:49:14.700: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 11 19:49:14.700: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pod pause -L testing-label --namespace=kubectl-1405' Jan 11 19:49:15.133: INFO: stderr: "" Jan 11 19:49:15.133: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 11 19:49:15.133: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config label pods pause testing-label- --namespace=kubectl-1405' Jan 11 19:49:15.653: INFO: stderr: "" Jan 11 19:49:15.653: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 11 19:49:15.653: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pod pause -L testing-label --namespace=kubectl-1405' Jan 11 19:49:16.079: INFO: stderr: "" Jan 11 19:49:16.079: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1199 STEP: using delete to clean up resources Jan 11 19:49:16.079: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-1405' Jan 11 19:49:16.595: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 11 19:49:16.595: INFO: stdout: "pod \"pause\" force deleted\n" Jan 11 19:49:16.595: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=pause --no-headers --namespace=kubectl-1405' Jan 11 19:49:17.113: INFO: stderr: "No resources found in kubectl-1405 namespace.\n" Jan 11 19:49:17.113: INFO: stdout: "" Jan 11 19:49:17.113: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=pause --namespace=kubectl-1405 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 19:49:17.535: INFO: stderr: "" Jan 11 19:49:17.535: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:17.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1405" for this suite. Jan 11 19:49:25.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:29.204: INFO: namespace kubectl-1405 deletion completed in 11.578094813s • [SLOW TEST:18.847 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189 should update the label on a resource [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:18.789: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename var-expansion STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2558 STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage][NodeFeature:VolumeSubpathEnvExpansion] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:162 STEP: Creating a pod to test substitution in volume subpath Jan 11 19:49:19.520: INFO: Waiting up to 5m0s for pod "var-expansion-2497a7e4-a325-490f-b10b-3a6835869531" in namespace "var-expansion-2558" to be "success or failure" Jan 11 19:49:19.610: INFO: Pod "var-expansion-2497a7e4-a325-490f-b10b-3a6835869531": Phase="Pending", Reason="", readiness=false. Elapsed: 89.86261ms Jan 11 19:49:21.700: INFO: Pod "var-expansion-2497a7e4-a325-490f-b10b-3a6835869531": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180202853s STEP: Saw pod success Jan 11 19:49:21.700: INFO: Pod "var-expansion-2497a7e4-a325-490f-b10b-3a6835869531" satisfied condition "success or failure" Jan 11 19:49:21.790: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod var-expansion-2497a7e4-a325-490f-b10b-3a6835869531 container dapi-container: STEP: delete the pod Jan 11 19:49:21.980: INFO: Waiting for pod var-expansion-2497a7e4-a325-490f-b10b-3a6835869531 to disappear Jan 11 19:49:22.070: INFO: Pod var-expansion-2497a7e4-a325-490f-b10b-3a6835869531 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:22.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2558" for this suite. Jan 11 19:49:28.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:31.745: INFO: namespace var-expansion-2558 deletion completed in 9.58386645s • [SLOW TEST:12.955 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should allow substituting values in a volume subpath [sig-storage][NodeFeature:VolumeSubpathEnvExpansion] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:162 ------------------------------ SSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:00.979: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5319 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 [It] should be restarted with a local redirect http liveness probe /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:232 STEP: Creating pod liveness-2bf3290c-cf31-4a74-92f4-cd3a8de0a85a in namespace container-probe-5319 Jan 11 19:49:03.892: INFO: Started pod liveness-2bf3290c-cf31-4a74-92f4-cd3a8de0a85a in namespace container-probe-5319 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 19:49:03.982: INFO: Initial restart count of pod liveness-2bf3290c-cf31-4a74-92f4-cd3a8de0a85a is 0 Jan 11 19:49:29.155: INFO: Restart count of pod container-probe-5319/liveness-2bf3290c-cf31-4a74-92f4-cd3a8de0a85a is now 1 (25.172300744s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:29.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5319" for this suite. 
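The container-probe block above shows a liveness probe driving restartCount from 0 to 1. The sketch below illustrates the general mechanism with a plain HTTP GET liveness probe; the real test points the probe at an endpoint that redirects locally before failing, and that detail is omitted here. The image, port, and threshold values are placeholders, and note that on current k8s.io/api the inline probe field is ProbeHandler, whereas the 1.16-era API in this log named it Handler.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name: "liveness",
				// Hypothetical image assumed to serve /healthz on port 8080 and to
				// start failing after a while, so the kubelet restarts the container.
				Image: "registry.example.com/http-liveness-demo:latest",
				LivenessProbe: &corev1.Probe{
					// ProbeHandler on current k8s.io/api; named Handler in the 1.16-era API.
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Once the probed endpoint starts failing, the kubelet kills and restarts the container, which is the restartCount transition the log records.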
Jan 11 19:49:35.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:38.930: INFO: namespace container-probe-5319 deletion completed in 9.590321937s • [SLOW TEST:37.951 seconds] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should be restarted with a local redirect http liveness probe /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:232 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:27.080: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4631 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:46 [It] new files should be created with FSGroup ownership when container is root /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:51 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 11 19:49:27.845: INFO: Waiting up to 5m0s for pod "pod-c6b8bef0-2b24-4c36-a4e9-0eee0514f639" in namespace "emptydir-4631" to be "success or failure" Jan 11 19:49:27.935: INFO: Pod "pod-c6b8bef0-2b24-4c36-a4e9-0eee0514f639": Phase="Pending", Reason="", readiness=false. Elapsed: 89.893944ms Jan 11 19:49:30.025: INFO: Pod "pod-c6b8bef0-2b24-4c36-a4e9-0eee0514f639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180215098s STEP: Saw pod success Jan 11 19:49:30.025: INFO: Pod "pod-c6b8bef0-2b24-4c36-a4e9-0eee0514f639" satisfied condition "success or failure" Jan 11 19:49:30.116: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-c6b8bef0-2b24-4c36-a4e9-0eee0514f639 container test-container: STEP: delete the pod Jan 11 19:49:30.307: INFO: Waiting for pod pod-c6b8bef0-2b24-4c36-a4e9-0eee0514f639 to disappear Jan 11 19:49:30.397: INFO: Pod pod-c6b8bef0-2b24-4c36-a4e9-0eee0514f639 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:30.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4631" for this suite. 
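The emptydir-4631 test relies on two pieces of pod configuration: a pod-level FSGroup and a memory-backed (tmpfs) emptyDir. A minimal sketch combining the two is below; the group ID, image, mount path, and shell command are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(123) // illustrative group ID; the suite chooses its own
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// FSGroup makes the kubelet apply this GID to volumes that support
			// ownership management (emptyDir included), so files created by a
			// root container come out group-owned by it.
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					// Memory-backed emptyDir, i.e. tmpfs, matching the "0644 on tmpfs" step.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "echo hi > /mnt/cache/f && ls -ln /mnt/cache/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/mnt/cache"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}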
Jan 11 19:49:36.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:40.081: INFO: namespace emptydir-4631 deletion completed in 9.592447634s • [SLOW TEST:13.001 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44 new files should be created with FSGroup ownership when container is root /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:51 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:31.752: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-8909 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:49:32.754: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d933ce90-7ada-42ef-a842-9dec989d3ac1", Controller:(*bool)(0xc001f1ad9a), BlockOwnerDeletion:(*bool)(0xc001f1ad9b)}} Jan 11 19:49:32.845: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"876fd40f-0de0-433e-8206-7d93b502bf69", Controller:(*bool)(0xc000786676), BlockOwnerDeletion:(*bool)(0xc000786677)}} Jan 11 19:49:32.936: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2e22d3f0-ed7e-46e4-a8bb-7da7c7962908", Controller:(*bool)(0xc0011d5306), BlockOwnerDeletion:(*bool)(0xc0011d5307)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:38.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8909" for this suite. 
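The gc-8909 block prints three pods whose OwnerReferences form a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2). The sketch below builds the same shape of metadata with the Go API types; the UIDs are placeholders, since in the real test they come from the API server when the pods are created.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownerRefTo builds an OwnerReference pointing at the given pod, mirroring the
// shape printed in the log (Controller and BlockOwnerDeletion both set).
func ownerRefTo(p *corev1.Pod) metav1.OwnerReference {
	controller := true
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               p.Name,
		UID:                p.UID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	newPod := func(name, uid string) *corev1.Pod {
		return &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: name, UID: types.UID(uid)}}
	}
	// Placeholder UIDs; the real ones are assigned by the API server.
	pod1, pod2, pod3 := newPod("pod1", "uid-1"), newPod("pod2", "uid-2"), newPod("pod3", "uid-3")

	// Wire the circle: pod1 owned by pod3, pod2 owned by pod1, pod3 owned by pod2.
	pod1.OwnerReferences = []metav1.OwnerReference{ownerRefTo(pod3)}
	pod2.OwnerReferences = []metav1.OwnerReference{ownerRefTo(pod1)}
	pod3.OwnerReferences = []metav1.OwnerReference{ownerRefTo(pod2)}

	for _, p := range []*corev1.Pod{pod1, pod2, pod3} {
		fmt.Printf("%s owned by %s\n", p.Name, p.OwnerReferences[0].Name)
	}
}

The conformance expectation, roughly, is that such a cycle does not deadlock the garbage collector: once one member is deleted, the remaining objects are still cleaned up, which is why the namespace teardown above proceeds without pods lingering.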
Jan 11 19:49:46.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:49.795: INFO: namespace gc-8909 deletion completed in 11.586871943s • [SLOW TEST:18.044 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:40.096: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-6290 STEP: Waiting for a default service account to be provisioned in namespace [It] should support readOnly directory specified in the volumeMount /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:347 Jan 11 19:49:40.756: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:49:40.847: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpath-t82q STEP: Creating a pod to test subpath Jan 11 19:49:40.940: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpath-t82q" in namespace "provisioning-6290" to be "success or failure" Jan 11 19:49:41.030: INFO: Pod "pod-subpath-test-hostpath-t82q": Phase="Pending", Reason="", readiness=false. Elapsed: 89.86837ms Jan 11 19:49:43.123: INFO: Pod "pod-subpath-test-hostpath-t82q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183250542s Jan 11 19:49:45.214: INFO: Pod "pod-subpath-test-hostpath-t82q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.273752304s STEP: Saw pod success Jan 11 19:49:45.214: INFO: Pod "pod-subpath-test-hostpath-t82q" satisfied condition "success or failure" Jan 11 19:49:45.304: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpath-t82q container test-container-subpath-hostpath-t82q: STEP: delete the pod Jan 11 19:49:45.495: INFO: Waiting for pod pod-subpath-test-hostpath-t82q to disappear Jan 11 19:49:45.585: INFO: Pod pod-subpath-test-hostpath-t82q no longer exists STEP: Deleting pod pod-subpath-test-hostpath-t82q Jan 11 19:49:45.585: INFO: Deleting pod "pod-subpath-test-hostpath-t82q" in namespace "provisioning-6290" STEP: Deleting pod Jan 11 19:49:45.675: INFO: Deleting pod "pod-subpath-test-hostpath-t82q" in namespace "provisioning-6290" Jan 11 19:49:45.764: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:45.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-6290" for this suite. Jan 11 19:49:52.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:55.446: INFO: namespace provisioning-6290 deletion completed in 9.591012544s • [SLOW TEST:15.351 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support readOnly directory specified in the volumeMount /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:347 ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:21.308: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-941 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should function for pod-Service: http /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:105 STEP: Performing setup for networking test in namespace nettest-941 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 19:49:22.027: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: 
Getting node addresses Jan 11 19:49:43.555: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 19:49:43.735: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:43.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-941" for this suite. Jan 11 19:49:56.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:59.398: INFO: namespace nettest-941 deletion completed in 15.571672767s S [SKIPPING] [38.090 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should function for pod-Service: http [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:105 Requires at least 2 nodes (not -1) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:29.209: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2092 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should apply a new configuration to an existing RC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:767 STEP: creating Redis RC Jan 11 19:49:29.848: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-2092' Jan 11 19:49:30.801: INFO: stderr: "" Jan 11 19:49:30.801: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: applying a modified configuration Jan 11 19:49:30.803: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config apply -f - --namespace=kubectl-2092' Jan 11 19:49:31.839: INFO: stderr: "Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\n" Jan 11 19:49:31.839: INFO: stdout: "replicationcontroller/redis-master configured\n" STEP: checking the result [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 
11 19:49:31.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2092" for this suite. Jan 11 19:50:00.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:03.593: INFO: namespace kubectl-2092 deletion completed in 31.573558228s • [SLOW TEST:34.384 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl apply /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:766 should apply a new configuration to an existing RC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:767 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:19.862: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-4625 STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating multiple subpath from same volumes [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:277 STEP: deploying csi-hostpath driver Jan 11 19:49:20.688: INFO: creating *v1.ServiceAccount: provisioning-4625/csi-attacher Jan 11 19:49:20.778: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-4625 Jan 11 19:49:20.778: INFO: Define cluster role external-attacher-runner-provisioning-4625 Jan 11 19:49:20.868: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-4625 Jan 11 19:49:20.957: INFO: creating *v1.Role: provisioning-4625/external-attacher-cfg-provisioning-4625 Jan 11 19:49:21.047: INFO: creating *v1.RoleBinding: provisioning-4625/csi-attacher-role-cfg Jan 11 19:49:21.136: INFO: creating *v1.ServiceAccount: provisioning-4625/csi-provisioner Jan 11 19:49:21.225: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-4625 Jan 11 19:49:21.225: INFO: Define cluster role external-provisioner-runner-provisioning-4625 Jan 11 19:49:21.315: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-4625 Jan 11 19:49:21.404: INFO: creating *v1.Role: provisioning-4625/external-provisioner-cfg-provisioning-4625 Jan 11 19:49:21.494: INFO: creating *v1.RoleBinding: provisioning-4625/csi-provisioner-role-cfg Jan 11 19:49:21.583: INFO: creating *v1.ServiceAccount: provisioning-4625/csi-snapshotter Jan 11 19:49:21.673: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-4625 Jan 11 19:49:21.673: INFO: Define cluster role external-snapshotter-runner-provisioning-4625 Jan 11 19:49:21.763: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-4625 
Jan 11 19:49:21.852: INFO: creating *v1.Role: provisioning-4625/external-snapshotter-leaderelection-provisioning-4625 Jan 11 19:49:21.942: INFO: creating *v1.RoleBinding: provisioning-4625/external-snapshotter-leaderelection Jan 11 19:49:22.031: INFO: creating *v1.ServiceAccount: provisioning-4625/csi-resizer Jan 11 19:49:22.121: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-4625 Jan 11 19:49:22.121: INFO: Define cluster role external-resizer-runner-provisioning-4625 Jan 11 19:49:22.210: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-4625 Jan 11 19:49:22.299: INFO: creating *v1.Role: provisioning-4625/external-resizer-cfg-provisioning-4625 Jan 11 19:49:22.389: INFO: creating *v1.RoleBinding: provisioning-4625/csi-resizer-role-cfg Jan 11 19:49:22.478: INFO: creating *v1.Service: provisioning-4625/csi-hostpath-attacher Jan 11 19:49:22.572: INFO: creating *v1.StatefulSet: provisioning-4625/csi-hostpath-attacher Jan 11 19:49:22.661: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-4625 Jan 11 19:49:22.751: INFO: creating *v1.Service: provisioning-4625/csi-hostpathplugin Jan 11 19:49:22.844: INFO: creating *v1.StatefulSet: provisioning-4625/csi-hostpathplugin Jan 11 19:49:22.934: INFO: creating *v1.Service: provisioning-4625/csi-hostpath-provisioner Jan 11 19:49:23.027: INFO: creating *v1.StatefulSet: provisioning-4625/csi-hostpath-provisioner Jan 11 19:49:23.116: INFO: creating *v1.Service: provisioning-4625/csi-hostpath-resizer Jan 11 19:49:23.210: INFO: creating *v1.StatefulSet: provisioning-4625/csi-hostpath-resizer Jan 11 19:49:23.299: INFO: creating *v1.Service: provisioning-4625/csi-snapshotter Jan 11 19:49:23.393: INFO: creating *v1.StatefulSet: provisioning-4625/csi-snapshotter Jan 11 19:49:23.482: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-4625 Jan 11 19:49:23.572: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:49:23.572: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-4625-csi-hostpath-provisioning-4625-scfxxlm STEP: creating a claim Jan 11 19:49:23.661: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:49:23.751: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathsx22v] to have phase Bound Jan 11 19:49:23.840: INFO: PersistentVolumeClaim csi-hostpathsx22v found but phase is Pending instead of Bound. Jan 11 19:49:25.931: INFO: PersistentVolumeClaim csi-hostpathsx22v found and phase=Bound (2.179646031s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-2r8v STEP: Creating a pod to test multi_subpath Jan 11 19:49:26.199: INFO: Waiting up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v" in namespace "provisioning-4625" to be "success or failure" Jan 11 19:49:26.288: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Pending", Reason="", readiness=false. Elapsed: 88.737221ms Jan 11 19:49:28.378: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178512481s Jan 11 19:49:30.468: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268598018s Jan 11 19:49:32.559: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359701983s Jan 11 19:49:34.649: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.449680316s Jan 11 19:49:36.739: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.539704978s Jan 11 19:49:38.829: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Pending", Reason="", readiness=false. Elapsed: 12.629361516s Jan 11 19:49:40.919: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Pending", Reason="", readiness=false. Elapsed: 14.71924092s Jan 11 19:49:43.008: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Pending", Reason="", readiness=false. Elapsed: 16.809046956s Jan 11 19:49:45.098: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.898482408s STEP: Saw pod success Jan 11 19:49:45.098: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v" satisfied condition "success or failure" Jan 11 19:49:45.187: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-csi-hostpath-dynamicpv-2r8v container test-container-subpath-csi-hostpath-dynamicpv-2r8v: STEP: delete the pod Jan 11 19:49:45.378: INFO: Waiting for pod pod-subpath-test-csi-hostpath-dynamicpv-2r8v to disappear Jan 11 19:49:45.468: INFO: Pod pod-subpath-test-csi-hostpath-dynamicpv-2r8v no longer exists STEP: Deleting pod Jan 11 19:49:45.468: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-2r8v" in namespace "provisioning-4625" STEP: Deleting pvc Jan 11 19:49:45.557: INFO: Deleting PersistentVolumeClaim "csi-hostpathsx22v" Jan 11 19:49:45.647: INFO: Waiting up to 5m0s for PersistentVolume pvc-0a60b7b6-4e2d-44d0-b88f-46c9d87c31c5 to get deleted Jan 11 19:49:45.736: INFO: PersistentVolume pvc-0a60b7b6-4e2d-44d0-b88f-46c9d87c31c5 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:49:45.827: INFO: deleting *v1.ServiceAccount: provisioning-4625/csi-attacher Jan 11 19:49:45.918: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-4625 Jan 11 19:49:46.008: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-4625 Jan 11 19:49:46.099: INFO: deleting *v1.Role: provisioning-4625/external-attacher-cfg-provisioning-4625 Jan 11 19:49:46.189: INFO: deleting *v1.RoleBinding: provisioning-4625/csi-attacher-role-cfg Jan 11 19:49:46.280: INFO: deleting *v1.ServiceAccount: provisioning-4625/csi-provisioner Jan 11 19:49:46.371: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-4625 Jan 11 19:49:46.462: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-4625 Jan 11 19:49:46.553: INFO: deleting *v1.Role: provisioning-4625/external-provisioner-cfg-provisioning-4625 Jan 11 19:49:46.644: INFO: deleting *v1.RoleBinding: provisioning-4625/csi-provisioner-role-cfg Jan 11 19:49:46.735: INFO: deleting *v1.ServiceAccount: provisioning-4625/csi-snapshotter Jan 11 19:49:46.825: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-4625 Jan 11 19:49:46.917: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-4625 Jan 11 19:49:47.007: INFO: deleting *v1.Role: provisioning-4625/external-snapshotter-leaderelection-provisioning-4625 Jan 11 19:49:47.098: INFO: deleting *v1.RoleBinding: provisioning-4625/external-snapshotter-leaderelection Jan 11 19:49:47.188: INFO: deleting *v1.ServiceAccount: provisioning-4625/csi-resizer Jan 11 19:49:47.279: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-4625 Jan 11 19:49:47.370: INFO: deleting *v1.ClusterRoleBinding: 
csi-resizer-role-provisioning-4625 Jan 11 19:49:47.462: INFO: deleting *v1.Role: provisioning-4625/external-resizer-cfg-provisioning-4625 Jan 11 19:49:47.552: INFO: deleting *v1.RoleBinding: provisioning-4625/csi-resizer-role-cfg Jan 11 19:49:47.643: INFO: deleting *v1.Service: provisioning-4625/csi-hostpath-attacher Jan 11 19:49:47.739: INFO: deleting *v1.StatefulSet: provisioning-4625/csi-hostpath-attacher Jan 11 19:49:47.832: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-4625 Jan 11 19:49:47.923: INFO: deleting *v1.Service: provisioning-4625/csi-hostpathplugin Jan 11 19:49:48.020: INFO: deleting *v1.StatefulSet: provisioning-4625/csi-hostpathplugin Jan 11 19:49:48.110: INFO: deleting *v1.Service: provisioning-4625/csi-hostpath-provisioner Jan 11 19:49:48.205: INFO: deleting *v1.StatefulSet: provisioning-4625/csi-hostpath-provisioner Jan 11 19:49:48.296: INFO: deleting *v1.Service: provisioning-4625/csi-hostpath-resizer Jan 11 19:49:48.392: INFO: deleting *v1.StatefulSet: provisioning-4625/csi-hostpath-resizer Jan 11 19:49:48.483: INFO: deleting *v1.Service: provisioning-4625/csi-snapshotter Jan 11 19:49:48.579: INFO: deleting *v1.StatefulSet: provisioning-4625/csi-snapshotter Jan 11 19:49:48.670: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-4625 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:48.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-4625" for this suite. Jan 11 19:50:01.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:04.484: INFO: namespace provisioning-4625 deletion completed in 15.631647345s • [SLOW TEST:44.622 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support creating multiple subpath from same volumes [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:277 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:38.939: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5741 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 19:49:40.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368980, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368980, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368980, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368980, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:49:42.764: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368980, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368980, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368980, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714368980, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 19:49:45.860: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:49:47.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5741" for this suite. 
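The AdmissionWebhook spec above exercises updating and patching a MutatingWebhookConfiguration so that the webhook first stops and then resumes intercepting CREATE operations on ConfigMaps. A minimal sketch of the same toggle with kubectl, assuming a configuration named sample-mutating-webhook (hypothetical) whose first webhook carries a single rule:

    # Drop CREATE from the rule: newly created ConfigMaps are no longer mutated
    kubectl patch mutatingwebhookconfiguration sample-mutating-webhook --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'

    # Put CREATE back: creations are mutated again
    kubectl patch mutatingwebhookconfiguration sample-mutating-webhook --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'

    # Verify the current rules
    kubectl get mutatingwebhookconfiguration sample-mutating-webhook -o jsonpath='{.webhooks[0].rules[0].operations}'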
Jan 11 19:49:53.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:49:56.697: INFO: namespace webhook-5741 deletion completed in 9.602812952s STEP: Destroying namespace "webhook-5741-markers" for this suite. Jan 11 19:50:04.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:08.293: INFO: namespace webhook-5741-markers deletion completed in 11.596020193s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:29.714 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:49.817: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-9504 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath with backstepping is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261 Jan 11 19:49:50.457: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 19:49:50.457: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-wdll STEP: Checking for subpath error in container status Jan 11 19:49:54.733: INFO: Deleting pod "pod-subpath-test-emptydir-wdll" in namespace "provisioning-9504" Jan 11 19:49:54.824: INFO: Wait up to 5m0s for pod "pod-subpath-test-emptydir-wdll" to be fully deleted STEP: Deleting pod Jan 11 19:50:05.003: INFO: Deleting pod "pod-subpath-test-emptydir-wdll" in namespace "provisioning-9504" Jan 11 19:50:05.093: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:50:05.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-9504" for this suite. 
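The emptydir subPath spec above checks that a subPath resolving outside its volume is refused: a literal ".." is already rejected by API validation, and a path that escapes via a symlink is caught by the kubelet, which reports a subpath error in the container status instead of starting the container. A minimal sketch of the symlink case (image, names and paths are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-backstep-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: make-escaping-link            # plants a symlink that points outside the volume
        image: busybox:1.29
        command: ["sh", "-c", "ln -s ../ /mnt/escape"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      containers:
      - name: test
        image: busybox:1.29
        command: ["sh", "-c", "ls /subpath"]
        volumeMounts:
        - name: scratch
          mountPath: /subpath
          subPath: escape                    # resolves outside the emptyDir; the kubelet refuses to mount it
      volumes:
      - name: scratch
        emptyDir: {}
    EOF

    # The main container never starts; the subpath error shows up in the pod's events and container status
    kubectl describe pod subpath-backstep-demo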
Jan 11 19:50:13.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:16.766: INFO: namespace provisioning-9504 deletion completed in 11.581487034s • [SLOW TEST:26.949 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if subpath with backstepping is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:11.149: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename statefulset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-8024 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-8024 [It] should implement legacy replacement when the update strategy is OnDelete /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:473 STEP: Creating a new StatefulSet Jan 11 19:49:12.124: INFO: Found 1 stateful pods, waiting for 3 Jan 11 19:49:22.214: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:49:22.214: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:49:22.214: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Restoring Pods to the current revision Jan 11 19:49:22.851: INFO: Found 1 stateful pods, waiting for 3 Jan 11 19:49:32.941: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:49:32.941: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:49:32.941: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 11 19:49:33.311: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Recreating Pods at the new revision Jan 11 19:49:44.042: INFO: Found 1 stateful pods, waiting for 3 
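The StatefulSet spec above relies on the OnDelete update strategy: editing the pod template records a new controller revision, but running pods keep the old revision until they are deleted and recreated by the controller (the "legacy replacement" behaviour being tested). A minimal sketch of driving that by hand, assuming the log's ss2 StatefulSet and a container named webserver (the container name is an assumption):

    # Disable automatic rolling updates
    kubectl patch statefulset ss2 --namespace=statefulset-8024 \
      -p '{"spec":{"updateStrategy":{"type":"OnDelete","rollingUpdate":null}}}'

    # Record a new revision; running pods are left untouched
    kubectl set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine \
      --namespace=statefulset-8024

    # A pod only moves to the new revision once it is deleted and recreated
    kubectl delete pod ss2-0 --namespace=statefulset-8024
    kubectl get pod ss2-0 --namespace=statefulset-8024 \
      -o jsonpath='{.metadata.labels.controller-revision-hash}'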
Jan 11 19:49:54.132: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:49:54.132: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 19:49:54.132: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Jan 11 19:49:54.311: INFO: Deleting all statefulset in ns statefulset-8024 Jan 11 19:49:54.401: INFO: Scaling statefulset ss2 to 0 Jan 11 19:50:04.761: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 19:50:04.851: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:50:05.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8024" for this suite. Jan 11 19:50:13.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:16.805: INFO: namespace statefulset-8024 deletion completed in 11.591972462s • [SLOW TEST:65.656 seconds] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should implement legacy replacement when the update strategy is OnDelete /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:473 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:03.605: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-3977 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:93 Jan 11 19:50:04.346: INFO: Waiting up to 5m0s for pod "busybox-user-0-b246c8cd-2f2e-465d-94a6-af820ebe3e41" in namespace "security-context-test-3977" to be "success or failure" Jan 11 19:50:04.436: INFO: Pod "busybox-user-0-b246c8cd-2f2e-465d-94a6-af820ebe3e41": Phase="Pending", Reason="", readiness=false. Elapsed: 89.441469ms Jan 11 19:50:06.526: INFO: Pod "busybox-user-0-b246c8cd-2f2e-465d-94a6-af820ebe3e41": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179367713s Jan 11 19:50:06.526: INFO: Pod "busybox-user-0-b246c8cd-2f2e-465d-94a6-af820ebe3e41" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:50:06.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3977" for this suite. Jan 11 19:50:14.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:18.184: INFO: namespace security-context-test-3977 deletion completed in 11.567555048s • [SLOW TEST:14.579 seconds] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 When creating a container with runAsUser /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:44 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:93 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:08.657: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume-provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-4381 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:259 [It] should create persistent volume in the zone specified in allowedTopologies of storageclass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:974 Jan 11 19:50:09.299: INFO: Skipping "AllowedTopologies EBS storage class test": cloud providers is not [aws] Jan 11 19:50:09.299: INFO: Skipping "AllowedTopologies GCE PD storage class test": cloud providers is not [gce gke] [AfterEach] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:50:09.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-4381" for this suite. 
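The Dynamic Provisioning spec above is skipped because this suite is running on neither aws nor gce/gke, but the feature it targets is a StorageClass whose allowedTopologies restricts where volumes may be provisioned. A minimal sketch of such a class for an EBS-style provisioner (zone key and zone name are assumptions appropriate to this Kubernetes version):

    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: topology-pinned
    provisioner: kubernetes.io/aws-ebs
    volumeBindingMode: WaitForFirstConsumer
    allowedTopologies:
    - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
        - us-east-1a
    EOF

    # Volumes bound through this class are provisioned only in the listed zone
    kubectl describe storageclass topology-pinned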
Jan 11 19:50:15.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:18.979: INFO: namespace volume-provisioning-4381 deletion completed in 9.589097405s • [SLOW TEST:10.322 seconds] [sig-storage] Dynamic Provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner allowedTopologies /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:973 should create persistent volume in the zone specified in allowedTopologies of storageclass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:974 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:16.821: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-4420 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should provide unchanging, static URL paths for kubernetes api services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:64 STEP: testing: /healthz STEP: testing: /api STEP: testing: /apis STEP: testing: /metrics STEP: testing: /openapi/v2 STEP: testing: /version [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:50:18.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-4420" for this suite. 
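The Networking spec above confirms that a fixed set of API server URL paths keeps responding. The same endpoints can be probed from a workstation through kubectl's raw API access:

    for path in /healthz /api /apis /metrics /openapi/v2 /version; do
      echo "== ${path}"
      kubectl get --raw "${path}" >/dev/null && echo ok
    done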
Jan 11 19:50:24.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:28.142: INFO: namespace nettest-4420 deletion completed in 9.584323447s • [SLOW TEST:11.321 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide unchanging, static URL paths for kubernetes api services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:64 ------------------------------ SSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:16.819: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-8734 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:50:17.550: INFO: Waiting up to 5m0s for pod "busybox-user-65534-faf66c9e-fa54-4f9b-918e-0be3b3c70673" in namespace "security-context-test-8734" to be "success or failure" Jan 11 19:50:17.640: INFO: Pod "busybox-user-65534-faf66c9e-fa54-4f9b-918e-0be3b3c70673": Phase="Pending", Reason="", readiness=false. Elapsed: 89.735766ms Jan 11 19:50:19.730: INFO: Pod "busybox-user-65534-faf66c9e-fa54-4f9b-918e-0be3b3c70673": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179448134s Jan 11 19:50:19.730: INFO: Pod "busybox-user-65534-faf66c9e-fa54-4f9b-918e-0be3b3c70673" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:50:19.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8734" for this suite. 
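The Security Context spec above starts a container with an explicit non-root UID. A minimal sketch of the container-level knob it exercises (image and names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-user-65534-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "id -u"]
        securityContext:
          runAsUser: 65534          # overrides whatever user the image would run as
    EOF

    kubectl logs busybox-user-65534-demo    # prints 65534 once the pod has completed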
Jan 11 19:50:26.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:29.396: INFO: namespace security-context-test-8734 deletion completed in 9.574747533s • [SLOW TEST:12.576 seconds] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 When creating a container with runAsUser /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:44 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:18.198: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-9884 STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:116 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jan 11 19:50:18.940: INFO: Waiting up to 5m0s for pod "security-context-65fedcd1-c213-492a-bbad-06115fa13e1f" in namespace "security-context-9884" to be "success or failure" Jan 11 19:50:19.030: INFO: Pod "security-context-65fedcd1-c213-492a-bbad-06115fa13e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 89.205734ms Jan 11 19:50:21.119: INFO: Pod "security-context-65fedcd1-c213-492a-bbad-06115fa13e1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178743439s STEP: Saw pod success Jan 11 19:50:21.119: INFO: Pod "security-context-65fedcd1-c213-492a-bbad-06115fa13e1f" satisfied condition "success or failure" Jan 11 19:50:21.209: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod security-context-65fedcd1-c213-492a-bbad-06115fa13e1f container test-container: STEP: delete the pod Jan 11 19:50:21.530: INFO: Waiting for pod security-context-65fedcd1-c213-492a-bbad-06115fa13e1f to disappear Jan 11 19:50:21.619: INFO: Pod security-context-65fedcd1-c213-492a-bbad-06115fa13e1f no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:50:21.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9884" for this suite. 
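The [sig-node] Security Context spec above sets RunAsUser and RunAsGroup at the pod level, which every container inherits unless it declares its own container-level securityContext. A minimal sketch (UID and GID values are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-security-context-demo
    spec:
      restartPolicy: Never
      securityContext:              # pod level: inherited by all containers below
        runAsUser: 1001
        runAsGroup: 2002
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "id"]
    EOF

    kubectl logs pod-security-context-demo  # uid=1001 gid=2002 ...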
Jan 11 19:50:27.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:31.275: INFO: namespace security-context-9884 deletion completed in 9.565519384s • [SLOW TEST:13.077 seconds] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:116 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:45:18.341: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9343 STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:568 STEP: Creating configMap with name cm-test-opt-create-34dd0ff3-74aa-4ce5-9a1c-cb3d82ddc8ff STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:50:19.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9343" for this suite. 
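The ConfigMap spec above mounts a volume whose items reference a key that is absent from the ConfigMap while optional is left false, so the pod never starts and the spec waits out its full timeout (hence the long duration reported below). A minimal sketch of the failing shape and the escape hatch (names are illustrative):

    kubectl create configmap cm-demo --from-literal=present=value

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-missing-key-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/cm/*"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: cm-demo
          optional: false            # the default; a referenced key that is missing blocks the mount
          items:
          - key: does-not-exist
            path: missing.txt
    EOF

    # The pod stays in ContainerCreating with FailedMount events; optional: true would let it start
    kubectl describe pod cm-missing-key-demo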
Jan 11 19:50:47.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:50:51.204: INFO: namespace configmap-9343 deletion completed in 31.584626602s • [SLOW TEST:332.863 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:568 ------------------------------ SS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:51.209: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-9735 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 [It] should run with an image specified user ID /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:145 Jan 11 19:50:52.142: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-9735" to be "success or failure" Jan 11 19:50:52.232: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 89.536904ms Jan 11 19:50:54.322: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179419275s Jan 11 19:50:54.322: INFO: Pod "implicit-nonroot-uid" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:50:54.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9735" for this suite. 
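The spec above (pod implicit-nonroot-uid) leaves runAsUser unset and relies on runAsNonRoot together with a non-root USER baked into the image; the kubelet verifies the image-specified UID is non-zero before starting the container. A minimal sketch (the image reference is a placeholder for any image whose Dockerfile sets a numeric non-root USER):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: implicit-nonroot-uid-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: registry.example/nonroot-image:latest   # placeholder: image built with "USER 1234"
        command: ["sh", "-c", "id -u"]
        securityContext:
          runAsNonRoot: true       # no runAsUser here: the UID comes from the image metadata
    EOF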
Jan 11 19:51:02.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:51:06.098: INFO: namespace security-context-test-9735 deletion completed in 11.587789998s • [SLOW TEST:14.889 seconds] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 When creating a container with runAsNonRoot /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98 should run with an image specified user ID /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:145 ------------------------------ S ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:28.151: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volumemode STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volumemode-2792 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:278 STEP: deploying csi-hostpath driver Jan 11 19:50:28.978: INFO: creating *v1.ServiceAccount: volumemode-2792/csi-attacher Jan 11 19:50:29.068: INFO: creating *v1.ClusterRole: external-attacher-runner-volumemode-2792 Jan 11 19:50:29.068: INFO: Define cluster role external-attacher-runner-volumemode-2792 Jan 11 19:50:29.158: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volumemode-2792 Jan 11 19:50:29.248: INFO: creating *v1.Role: volumemode-2792/external-attacher-cfg-volumemode-2792 Jan 11 19:50:29.338: INFO: creating *v1.RoleBinding: volumemode-2792/csi-attacher-role-cfg Jan 11 19:50:29.428: INFO: creating *v1.ServiceAccount: volumemode-2792/csi-provisioner Jan 11 19:50:29.518: INFO: creating *v1.ClusterRole: external-provisioner-runner-volumemode-2792 Jan 11 19:50:29.518: INFO: Define cluster role external-provisioner-runner-volumemode-2792 Jan 11 19:50:29.608: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volumemode-2792 Jan 11 19:50:29.698: INFO: creating *v1.Role: volumemode-2792/external-provisioner-cfg-volumemode-2792 Jan 11 19:50:29.788: INFO: creating *v1.RoleBinding: volumemode-2792/csi-provisioner-role-cfg Jan 11 19:50:29.878: INFO: creating *v1.ServiceAccount: volumemode-2792/csi-snapshotter Jan 11 19:50:29.968: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volumemode-2792 Jan 11 19:50:29.968: INFO: Define cluster role external-snapshotter-runner-volumemode-2792 Jan 11 19:50:30.058: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volumemode-2792 Jan 11 19:50:30.148: INFO: creating *v1.Role: 
volumemode-2792/external-snapshotter-leaderelection-volumemode-2792 Jan 11 19:50:30.239: INFO: creating *v1.RoleBinding: volumemode-2792/external-snapshotter-leaderelection Jan 11 19:50:30.329: INFO: creating *v1.ServiceAccount: volumemode-2792/csi-resizer Jan 11 19:50:30.419: INFO: creating *v1.ClusterRole: external-resizer-runner-volumemode-2792 Jan 11 19:50:30.419: INFO: Define cluster role external-resizer-runner-volumemode-2792 Jan 11 19:50:30.509: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volumemode-2792 Jan 11 19:50:30.599: INFO: creating *v1.Role: volumemode-2792/external-resizer-cfg-volumemode-2792 Jan 11 19:50:30.689: INFO: creating *v1.RoleBinding: volumemode-2792/csi-resizer-role-cfg Jan 11 19:50:30.779: INFO: creating *v1.Service: volumemode-2792/csi-hostpath-attacher Jan 11 19:50:30.873: INFO: creating *v1.StatefulSet: volumemode-2792/csi-hostpath-attacher Jan 11 19:50:30.964: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volumemode-2792 Jan 11 19:50:31.053: INFO: creating *v1.Service: volumemode-2792/csi-hostpathplugin Jan 11 19:50:31.147: INFO: creating *v1.StatefulSet: volumemode-2792/csi-hostpathplugin Jan 11 19:50:31.238: INFO: creating *v1.Service: volumemode-2792/csi-hostpath-provisioner Jan 11 19:50:31.332: INFO: creating *v1.StatefulSet: volumemode-2792/csi-hostpath-provisioner Jan 11 19:50:31.422: INFO: creating *v1.Service: volumemode-2792/csi-hostpath-resizer Jan 11 19:50:31.516: INFO: creating *v1.StatefulSet: volumemode-2792/csi-hostpath-resizer Jan 11 19:50:31.607: INFO: creating *v1.Service: volumemode-2792/csi-snapshotter Jan 11 19:50:31.700: INFO: creating *v1.StatefulSet: volumemode-2792/csi-snapshotter Jan 11 19:50:31.790: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumemode-2792 Jan 11 19:50:31.880: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:50:31.880: INFO: Creating resource for dynamic PV STEP: creating a StorageClass volumemode-2792-csi-hostpath-volumemode-2792-scz9g7z STEP: creating a claim Jan 11 19:50:32.062: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath7snr7] to have phase Bound Jan 11 19:50:32.151: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:34.240: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:36.331: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:38.421: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:40.510: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:42.600: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:44.689: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:46.779: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:48.869: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:50.959: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:53.049: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:55.138: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. 
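The volumeMode spec in flight above provisions a claim and then consumes it in a pod with the wrong mode, which must fail: a Filesystem claim is only usable through volumeMounts and a Block claim only through volumeDevices. A minimal sketch of one mismatch direction (storage class name and size are assumptions):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: block-claim
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeMode: Block                    # raw block volume
      storageClassName: csi-hostpath-sc    # assumption
      resources:
        requests:
          storage: 1Mi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: volumemode-mismatch-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:                      # wrong for a Block claim; volumeDevices would be correct
        - name: data
          mountPath: /mnt/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: block-claim
    EOF

    # The pod is expected to stay unstarted; the mode mismatch shows up in its events
    kubectl describe pod volumemode-mismatch-demo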
Jan 11 19:50:57.228: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:50:59.318: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:51:01.408: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:51:03.497: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:51:05.588: INFO: PersistentVolumeClaim csi-hostpath7snr7 found but phase is Pending instead of Bound. Jan 11 19:51:07.677: INFO: PersistentVolumeClaim csi-hostpath7snr7 found and phase=Bound (35.615529361s) STEP: Creating pod STEP: Waiting for the pod to fail Jan 11 19:51:10.226: INFO: Deleting pod "security-context-00830e56-c73c-47c8-a894-a3a2b3793649" in namespace "volumemode-2792" Jan 11 19:51:10.316: INFO: Wait up to 5m0s for pod "security-context-00830e56-c73c-47c8-a894-a3a2b3793649" to be fully deleted STEP: Deleting pvc Jan 11 19:51:14.496: INFO: Deleting PersistentVolumeClaim "csi-hostpath7snr7" Jan 11 19:51:14.587: INFO: Waiting up to 5m0s for PersistentVolume pvc-87fd2d31-ac88-4a68-8187-4cbc72781ee4 to get deleted Jan 11 19:51:14.677: INFO: PersistentVolume pvc-87fd2d31-ac88-4a68-8187-4cbc72781ee4 found and phase=Bound (89.784182ms) Jan 11 19:51:19.767: INFO: PersistentVolume pvc-87fd2d31-ac88-4a68-8187-4cbc72781ee4 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:51:19.859: INFO: deleting *v1.ServiceAccount: volumemode-2792/csi-attacher Jan 11 19:51:19.950: INFO: deleting *v1.ClusterRole: external-attacher-runner-volumemode-2792 Jan 11 19:51:20.041: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volumemode-2792 Jan 11 19:51:20.133: INFO: deleting *v1.Role: volumemode-2792/external-attacher-cfg-volumemode-2792 Jan 11 19:51:20.224: INFO: deleting *v1.RoleBinding: volumemode-2792/csi-attacher-role-cfg Jan 11 19:51:20.316: INFO: deleting *v1.ServiceAccount: volumemode-2792/csi-provisioner Jan 11 19:51:20.407: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volumemode-2792 Jan 11 19:51:20.498: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volumemode-2792 Jan 11 19:51:20.590: INFO: deleting *v1.Role: volumemode-2792/external-provisioner-cfg-volumemode-2792 Jan 11 19:51:20.681: INFO: deleting *v1.RoleBinding: volumemode-2792/csi-provisioner-role-cfg Jan 11 19:51:20.772: INFO: deleting *v1.ServiceAccount: volumemode-2792/csi-snapshotter Jan 11 19:51:20.864: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volumemode-2792 Jan 11 19:51:20.955: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volumemode-2792 Jan 11 19:51:21.047: INFO: deleting *v1.Role: volumemode-2792/external-snapshotter-leaderelection-volumemode-2792 Jan 11 19:51:21.139: INFO: deleting *v1.RoleBinding: volumemode-2792/external-snapshotter-leaderelection Jan 11 19:51:21.230: INFO: deleting *v1.ServiceAccount: volumemode-2792/csi-resizer Jan 11 19:51:21.320: INFO: deleting *v1.ClusterRole: external-resizer-runner-volumemode-2792 Jan 11 19:51:21.411: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volumemode-2792 Jan 11 19:51:21.502: INFO: deleting *v1.Role: volumemode-2792/external-resizer-cfg-volumemode-2792 Jan 11 19:51:21.592: INFO: deleting *v1.RoleBinding: volumemode-2792/csi-resizer-role-cfg Jan 11 19:51:21.684: INFO: deleting *v1.Service: volumemode-2792/csi-hostpath-attacher Jan 11 19:51:21.781: INFO: deleting *v1.StatefulSet: volumemode-2792/csi-hostpath-attacher Jan 11 
19:51:21.872: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volumemode-2792 Jan 11 19:51:21.965: INFO: deleting *v1.Service: volumemode-2792/csi-hostpathplugin Jan 11 19:51:22.061: INFO: deleting *v1.StatefulSet: volumemode-2792/csi-hostpathplugin Jan 11 19:51:22.152: INFO: deleting *v1.Service: volumemode-2792/csi-hostpath-provisioner Jan 11 19:51:22.248: INFO: deleting *v1.StatefulSet: volumemode-2792/csi-hostpath-provisioner Jan 11 19:51:22.339: INFO: deleting *v1.Service: volumemode-2792/csi-hostpath-resizer Jan 11 19:51:22.434: INFO: deleting *v1.StatefulSet: volumemode-2792/csi-hostpath-resizer Jan 11 19:51:22.525: INFO: deleting *v1.Service: volumemode-2792/csi-snapshotter Jan 11 19:51:22.621: INFO: deleting *v1.StatefulSet: volumemode-2792/csi-snapshotter Jan 11 19:51:22.713: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumemode-2792 [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:51:22.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volumemode-2792" for this suite. Jan 11 19:51:31.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:51:34.572: INFO: namespace volumemode-2792 deletion completed in 11.677808368s • [SLOW TEST:66.421 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail to use a volume in a pod with mismatched mode [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:278 ------------------------------ SSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:59.400: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-2441 STEP: Waiting for a default service account to be provisioned in namespace [It] should store data /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146 STEP: deploying csi-hostpath driver Jan 11 19:50:00.340: INFO: creating *v1.ServiceAccount: volume-2441/csi-attacher Jan 11 19:50:00.429: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-2441 Jan 11 19:50:00.429: INFO: Define cluster role 
external-attacher-runner-volume-2441 Jan 11 19:50:00.519: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-2441 Jan 11 19:50:00.608: INFO: creating *v1.Role: volume-2441/external-attacher-cfg-volume-2441 Jan 11 19:50:00.698: INFO: creating *v1.RoleBinding: volume-2441/csi-attacher-role-cfg Jan 11 19:50:00.788: INFO: creating *v1.ServiceAccount: volume-2441/csi-provisioner Jan 11 19:50:00.877: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-2441 Jan 11 19:50:00.877: INFO: Define cluster role external-provisioner-runner-volume-2441 Jan 11 19:50:00.967: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-2441 Jan 11 19:50:01.056: INFO: creating *v1.Role: volume-2441/external-provisioner-cfg-volume-2441 Jan 11 19:50:01.146: INFO: creating *v1.RoleBinding: volume-2441/csi-provisioner-role-cfg Jan 11 19:50:01.235: INFO: creating *v1.ServiceAccount: volume-2441/csi-snapshotter Jan 11 19:50:01.325: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volume-2441 Jan 11 19:50:01.325: INFO: Define cluster role external-snapshotter-runner-volume-2441 Jan 11 19:50:01.415: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volume-2441 Jan 11 19:50:01.505: INFO: creating *v1.Role: volume-2441/external-snapshotter-leaderelection-volume-2441 Jan 11 19:50:01.595: INFO: creating *v1.RoleBinding: volume-2441/external-snapshotter-leaderelection Jan 11 19:50:01.685: INFO: creating *v1.ServiceAccount: volume-2441/csi-resizer Jan 11 19:50:01.774: INFO: creating *v1.ClusterRole: external-resizer-runner-volume-2441 Jan 11 19:50:01.774: INFO: Define cluster role external-resizer-runner-volume-2441 Jan 11 19:50:01.864: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volume-2441 Jan 11 19:50:01.953: INFO: creating *v1.Role: volume-2441/external-resizer-cfg-volume-2441 Jan 11 19:50:02.061: INFO: creating *v1.RoleBinding: volume-2441/csi-resizer-role-cfg Jan 11 19:50:02.151: INFO: creating *v1.Service: volume-2441/csi-hostpath-attacher Jan 11 19:50:02.245: INFO: creating *v1.StatefulSet: volume-2441/csi-hostpath-attacher Jan 11 19:50:02.335: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volume-2441 Jan 11 19:50:02.424: INFO: creating *v1.Service: volume-2441/csi-hostpathplugin Jan 11 19:50:02.517: INFO: creating *v1.StatefulSet: volume-2441/csi-hostpathplugin Jan 11 19:50:02.607: INFO: creating *v1.Service: volume-2441/csi-hostpath-provisioner Jan 11 19:50:02.700: INFO: creating *v1.StatefulSet: volume-2441/csi-hostpath-provisioner Jan 11 19:50:02.790: INFO: creating *v1.Service: volume-2441/csi-hostpath-resizer Jan 11 19:50:02.884: INFO: creating *v1.StatefulSet: volume-2441/csi-hostpath-resizer Jan 11 19:50:02.974: INFO: creating *v1.Service: volume-2441/csi-snapshotter Jan 11 19:50:03.066: INFO: creating *v1.StatefulSet: volume-2441/csi-snapshotter Jan 11 19:50:03.157: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-2441 Jan 11 19:50:03.246: INFO: Test running for native CSI Driver, not checking metrics Jan 11 19:50:03.246: INFO: Creating resource for dynamic PV STEP: creating a StorageClass volume-2441-csi-hostpath-volume-2441-sc5dxbs STEP: creating a claim Jan 11 19:50:03.336: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:50:03.426: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath49878] to have phase Bound Jan 11 19:50:03.516: INFO: PersistentVolumeClaim csi-hostpath49878 found but phase is Pending instead of Bound. 
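The volumes spec above follows the usual dynamic-provisioning flow: create a StorageClass for the CSI driver, create a claim that names it, and wait for the claim to reach phase Bound before running pods against it. A minimal sketch of watching that transition from the CLI (class and claim names are assumptions):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-hostpath-claim
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: csi-hostpath-sc    # assumption: a class backed by the csi-hostpath driver
      resources:
        requests:
          storage: 1Mi
    EOF

    # Watch the claim move Pending -> Bound as the external-provisioner creates the PV
    kubectl get pvc csi-hostpath-claim -w

    # The bound PV name (pvc-<claim UID>) once binding has completed
    kubectl get pvc csi-hostpath-claim -o jsonpath='{.spec.volumeName}'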
Jan 11 19:50:05.605: INFO: PersistentVolumeClaim csi-hostpath49878 found and phase=Bound (2.178679468s) STEP: starting hostpath-injector STEP: Writing text file contents in the container. Jan 11 19:50:24.052: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpath-injector --namespace=volume-2441 -- /bin/sh -c echo 'Hello from csi-hostpath from namespace volume-2441' > /opt/0/index.html' Jan 11 19:50:25.349: INFO: stderr: "" Jan 11 19:50:25.349: INFO: stdout: "" STEP: Checking that text file contents are perfect. Jan 11 19:50:25.349: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpath-injector --namespace=volume-2441 -- cat /opt/0/index.html' Jan 11 19:50:26.644: INFO: stderr: "" Jan 11 19:50:26.644: INFO: stdout: "Hello from csi-hostpath from namespace volume-2441\n" Jan 11 19:50:26.644: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-2441 hostpath-injector -- /bin/sh -c test -d /opt/0' Jan 11 19:50:27.938: INFO: stderr: "" Jan 11 19:50:27.938: INFO: stdout: "" Jan 11 19:50:27.938: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-2441 hostpath-injector -- /bin/sh -c test -b /opt/0' Jan 11 19:50:29.205: INFO: rc: 1 STEP: Deleting pod hostpath-injector in namespace volume-2441 Jan 11 19:50:29.295: INFO: Waiting for pod hostpath-injector to disappear Jan 11 19:50:29.385: INFO: Pod hostpath-injector still exists Jan 11 19:50:31.385: INFO: Waiting for pod hostpath-injector to disappear Jan 11 19:50:31.475: INFO: Pod hostpath-injector still exists Jan 11 19:50:33.385: INFO: Waiting for pod hostpath-injector to disappear Jan 11 19:50:33.476: INFO: Pod hostpath-injector still exists Jan 11 19:50:35.385: INFO: Waiting for pod hostpath-injector to disappear Jan 11 19:50:35.475: INFO: Pod hostpath-injector still exists Jan 11 19:50:37.385: INFO: Waiting for pod hostpath-injector to disappear Jan 11 19:50:37.477: INFO: Pod hostpath-injector still exists Jan 11 19:50:39.385: INFO: Waiting for pod hostpath-injector to disappear Jan 11 19:50:39.476: INFO: Pod hostpath-injector no longer exists STEP: starting hostpath-client STEP: Checking that text file contents are perfect. 
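The "rc: 1" returned by the 'test -b /opt/0' command above is the expected outcome for a Filesystem-mode volume: the mount point is a directory, so 'test -d' succeeds and the block-device check fails. A manual equivalent, assuming the claim and client pod names from this log and that the client pod is still running:

# Confirm the PVC's volumeMode and repeat the directory/block checks by hand.
kubectl --namespace volume-2441 get pvc csi-hostpath49878 -o jsonpath='{.spec.volumeMode}{"\n"}'
kubectl --namespace volume-2441 exec hostpath-client -- /bin/sh -c \
  'test -d /opt/0; echo "directory check rc=$?"; test -b /opt/0; echo "block-device check rc=$?"'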
Jan 11 19:50:49.836: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpath-client --namespace=volume-2441 -- cat /opt/0/index.html' Jan 11 19:50:51.161: INFO: stderr: "" Jan 11 19:50:51.161: INFO: stdout: "Hello from csi-hostpath from namespace volume-2441\n" Jan 11 19:50:51.161: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-2441 hostpath-client -- /bin/sh -c test -d /opt/0' Jan 11 19:50:52.481: INFO: stderr: "" Jan 11 19:50:52.481: INFO: stdout: "" Jan 11 19:50:52.481: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-2441 hostpath-client -- /bin/sh -c test -b /opt/0' Jan 11 19:50:53.783: INFO: rc: 1 STEP: cleaning the environment after hostpath Jan 11 19:50:53.783: INFO: Deleting pod "hostpath-client" in namespace "volume-2441" Jan 11 19:50:53.875: INFO: Wait up to 5m0s for pod "hostpath-client" to be fully deleted STEP: Deleting pvc Jan 11 19:51:00.057: INFO: Deleting PersistentVolumeClaim "csi-hostpath49878" Jan 11 19:51:00.147: INFO: Waiting up to 5m0s for PersistentVolume pvc-e54038d1-732e-4df9-86c3-858626ca7aee to get deleted Jan 11 19:51:00.236: INFO: PersistentVolume pvc-e54038d1-732e-4df9-86c3-858626ca7aee was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 19:51:00.327: INFO: deleting *v1.ServiceAccount: volume-2441/csi-attacher Jan 11 19:51:00.417: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-2441 Jan 11 19:51:00.507: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-2441 Jan 11 19:51:00.598: INFO: deleting *v1.Role: volume-2441/external-attacher-cfg-volume-2441 Jan 11 19:51:00.689: INFO: deleting *v1.RoleBinding: volume-2441/csi-attacher-role-cfg Jan 11 19:51:00.781: INFO: deleting *v1.ServiceAccount: volume-2441/csi-provisioner Jan 11 19:51:00.872: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-2441 Jan 11 19:51:00.963: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-2441 Jan 11 19:51:01.054: INFO: deleting *v1.Role: volume-2441/external-provisioner-cfg-volume-2441 Jan 11 19:51:01.144: INFO: deleting *v1.RoleBinding: volume-2441/csi-provisioner-role-cfg Jan 11 19:51:01.235: INFO: deleting *v1.ServiceAccount: volume-2441/csi-snapshotter Jan 11 19:51:01.325: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volume-2441 Jan 11 19:51:01.415: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volume-2441 Jan 11 19:51:01.505: INFO: deleting *v1.Role: volume-2441/external-snapshotter-leaderelection-volume-2441 Jan 11 19:51:01.596: INFO: deleting *v1.RoleBinding: volume-2441/external-snapshotter-leaderelection Jan 11 19:51:01.687: INFO: deleting *v1.ServiceAccount: volume-2441/csi-resizer Jan 11 19:51:01.777: INFO: deleting *v1.ClusterRole: external-resizer-runner-volume-2441 Jan 11 19:51:01.868: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volume-2441 Jan 11 19:51:01.958: INFO: deleting *v1.Role: volume-2441/external-resizer-cfg-volume-2441 Jan 11 19:51:02.049: INFO: deleting *v1.RoleBinding: volume-2441/csi-resizer-role-cfg Jan 11 19:51:02.140: INFO: deleting *v1.Service: 
volume-2441/csi-hostpath-attacher Jan 11 19:51:02.236: INFO: deleting *v1.StatefulSet: volume-2441/csi-hostpath-attacher Jan 11 19:51:02.327: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volume-2441 Jan 11 19:51:02.419: INFO: deleting *v1.Service: volume-2441/csi-hostpathplugin Jan 11 19:51:02.514: INFO: deleting *v1.StatefulSet: volume-2441/csi-hostpathplugin Jan 11 19:51:02.604: INFO: deleting *v1.Service: volume-2441/csi-hostpath-provisioner Jan 11 19:51:02.701: INFO: deleting *v1.StatefulSet: volume-2441/csi-hostpath-provisioner Jan 11 19:51:02.791: INFO: deleting *v1.Service: volume-2441/csi-hostpath-resizer Jan 11 19:51:02.897: INFO: deleting *v1.StatefulSet: volume-2441/csi-hostpath-resizer Jan 11 19:51:02.989: INFO: deleting *v1.Service: volume-2441/csi-snapshotter Jan 11 19:51:03.092: INFO: deleting *v1.StatefulSet: volume-2441/csi-snapshotter Jan 11 19:51:03.183: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-2441 [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:51:03.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-2441" for this suite. Jan 11 19:51:31.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:51:34.927: INFO: namespace volume-2441 deletion completed in 31.562374794s • [SLOW TEST:95.527 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should store data /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:51:34.935: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3103 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with secret that has name secret-emptykey-test-243cd7b2-6a41-4cd8-8a9c-82e6424cc4ca [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:51:35.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3103" for this suite. 
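The Secrets spec above exercises API-server validation only: a Secret whose data map contains an empty key is rejected at create time, which is what the test asserts. A quick reproduction with kubectl (the secret name and value below are made up; the create is expected to fail with a validation error):

# Expected to fail: Secret data keys must be non-empty.
kubectl create --namespace default -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWUtMQo=
EOF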
Jan 11 19:51:44.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:51:47.309: INFO: namespace secrets-3103 deletion completed in 11.558070046s • [SLOW TEST:12.374 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:51:34.580: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9551 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating the pod Jan 11 19:51:38.362: INFO: Successfully updated pod "labelsupdate22851f96-44c4-435d-88d5-b567d0e8c477" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:51:40.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9551" for this suite. 
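The Downward API spec above relies on the kubelet refreshing a downwardAPI volume when the pod's labels change, with no container restart. A stand-alone sketch of the same behaviour; the pod name, image and mount path are illustrative, not the suite's own manifest:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: before
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labels-demo stage=after --overwrite
# After a short delay the projected file (and hence the logs) should show stage="after".
kubectl logs labels-demo --tail=3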
Jan 11 19:51:52.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:51:56.233: INFO: namespace downward-api-9551 deletion completed in 15.586696531s • [SLOW TEST:21.653 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:51:47.321: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename deployment STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3409 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 [It] deployment reaping should cascade to its replica sets and pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73 Jan 11 19:51:47.956: INFO: Creating simple deployment test-new-deployment Jan 11 19:51:48.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369108, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369108, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369108, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369108, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:51:50.671: INFO: Deleting deployment test-new-deployment STEP: deleting Deployment.apps test-new-deployment in namespace deployment-3409, will wait for the garbage collector to delete the pods Jan 11 19:51:50.952: INFO: Deleting Deployment.apps test-new-deployment took: 91.184559ms Jan 11 19:51:51.553: INFO: Terminating Deployment.apps test-new-deployment pods took: 600.338327ms Jan 11 19:51:51.553: INFO: Ensuring deployment test-new-deployment was deleted Jan 11 19:51:51.642: INFO: Ensuring deployment test-new-deployment's RSes were deleted Jan 11 19:51:51.731: INFO: Ensuring deployment test-new-deployment's Pods were deleted [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62 Jan 11 19:51:51.910: INFO: Log out all the ReplicaSets if there is 
no deployment created [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:51:51.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3409" for this suite. Jan 11 19:51:58.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:01.669: INFO: namespace deployment-3409 deletion completed in 9.579652399s • [SLOW TEST:14.348 seconds] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment reaping should cascade to its replica sets and pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73 ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:51:56.235: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8476 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 11 19:51:56.966: INFO: Waiting up to 5m0s for pod "pod-bbfe6760-44c1-4f96-afd3-ec9d9d2ab6f6" in namespace "emptydir-8476" to be "success or failure" Jan 11 19:51:57.056: INFO: Pod "pod-bbfe6760-44c1-4f96-afd3-ec9d9d2ab6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 89.53923ms Jan 11 19:51:59.146: INFO: Pod "pod-bbfe6760-44c1-4f96-afd3-ec9d9d2ab6f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179407698s STEP: Saw pod success Jan 11 19:51:59.146: INFO: Pod "pod-bbfe6760-44c1-4f96-afd3-ec9d9d2ab6f6" satisfied condition "success or failure" Jan 11 19:51:59.236: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-bbfe6760-44c1-4f96-afd3-ec9d9d2ab6f6 container test-container: STEP: delete the pod Jan 11 19:51:59.435: INFO: Waiting for pod pod-bbfe6760-44c1-4f96-afd3-ec9d9d2ab6f6 to disappear Jan 11 19:51:59.524: INFO: Pod pod-bbfe6760-44c1-4f96-afd3-ec9d9d2ab6f6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:51:59.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8476" for this suite. 
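The EmptyDir spec above roughly checks that a non-root container can create a 0777 file on an emptyDir with the default medium and read the mode back; emptyDir directories are created world-writable, which is what lets the non-root user write. A loose stand-alone equivalent (pod name, uid 1001 and the busybox image are assumptions; the suite uses its mounttest image instead):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir: {}
EOF
# Give the pod a moment to run to completion, then read the output.
sleep 15 && kubectl logs emptydir-mode-demo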
Jan 11 19:52:05.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:09.200: INFO: namespace emptydir-8476 deletion completed in 9.584848669s • [SLOW TEST:12.965 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:01.671: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-6960 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 19:52:02.436: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6960" to be "success or failure" Jan 11 19:52:02.527: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 90.53737ms Jan 11 19:52:04.617: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180256799s STEP: Saw pod success Jan 11 19:52:04.617: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 19:52:04.706: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 19:52:04.900: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 19:52:04.990: INFO: Pod pod-host-path-test no longer exists Jan 11 19:52:04.990: FAIL: Unexpected error: <*errors.errorString | 0xc0048ec160>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-6960". STEP: Found 7 events. 
Jan 11 19:52:05.080: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-6960/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 19:52:05.080: INFO: At 2020-01-11 19:52:03 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:52:05.080: INFO: At 2020-01-11 19:52:03 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 19:52:05.080: INFO: At 2020-01-11 19:52:03 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 19:52:05.080: INFO: At 2020-01-11 19:52:03 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:52:05.080: INFO: At 2020-01-11 19:52:03 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 19:52:05.080: INFO: At 2020-01-11 19:52:03 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 19:52:05.170: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 19:52:05.170: INFO: Jan 11 19:52:05.351: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 19:52:05.441: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 60336 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:04 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:04 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:04 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:52:04 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 
(Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 
quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:52:05.442: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 19:52:05.531: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 19:52:05.631: INFO: pod-secrets-80f57524-b8be-4384-b63f-e0587d44498a started at 2020-01-11 19:50:05 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:05.631: INFO: Container creates-volume-test ready: false, restart count 0 Jan 11 19:52:05.631: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 19:52:05.631: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:52:05.631: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:52:05.631: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:52:05.631: INFO: forbid-1578772200-2qvmj started at 2020-01-11 19:50:09 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:05.632: INFO: Container c ready: true, restart count 0 Jan 11 19:52:05.632: INFO: liveness-d9c04d87-22d3-4723-91d9-3bcb6c488d03 started at 2020-01-11 19:50:32 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:05.632: INFO: Container liveness ready: true, restart count 0 Jan 11 19:52:05.632: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:05.632: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:52:05.632: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:05.632: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:52:05.632: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:05.632: INFO: Container node-exporter ready: true, restart count 0 W0111 19:52:05.722495 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 11 19:52:05.938: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 19:52:05.938: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 19:52:06.029: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 60353 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:51:05 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:51:05 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:51:05 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:51:05 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:51:05 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 
UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:51:05 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:51:05 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:05 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:05 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:05 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:52:05 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de 
quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:52:06.030: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 19:52:06.119: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 19:52:06.263: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container kube-proxy ready: true, restart count 0 Jan 
11 19:52:06.263: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:52:06.263: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:52:06.263: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:52:06.263: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 19:52:06.263: INFO: csi-hostpath-attacher-0 started at 2020-01-11 19:51:09 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container csi-attacher ready: false, restart count 0 Jan 11 19:52:06.263: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:52:06.263: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:52:06.263: INFO: csi-hostpathplugin-0 started at 2020-01-11 19:51:23 +0000 UTC (0+3 container statuses recorded) Jan 11 19:52:06.263: INFO: Container hostpath ready: false, restart count 0 Jan 11 19:52:06.263: INFO: Container liveness-probe ready: false, restart count 0 Jan 11 19:52:06.263: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 11 19:52:06.263: INFO: pod-subpath-test-hostpath-67j8 started at 2020-01-11 19:50:30 +0000 UTC (1+2 container statuses recorded) Jan 11 19:52:06.263: INFO: Init container init-volume-hostpath-67j8 ready: true, restart count 0 Jan 11 19:52:06.263: INFO: Container test-container-subpath-hostpath-67j8 ready: true, restart count 3 Jan 11 19:52:06.263: INFO: Container test-container-volume-hostpath-67j8 ready: true, restart count 0 Jan 11 19:52:06.263: INFO: inline-volume-tester-w4rf8 started at 2020-01-11 19:51:10 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 11 19:52:06.263: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:52:06.263: INFO: csi-hostpath-resizer-0 started at 2020-01-11 19:51:10 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container csi-resizer ready: false, restart count 0 Jan 11 19:52:06.263: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container coredns ready: true, restart count 0 Jan 11 19:52:06.263: INFO: csi-snapshotter-0 started at 2020-01-11 19:51:10 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 11 19:52:06.263: INFO: csi-hostpath-provisioner-0 started at 2020-01-11 19:51:09 +0000 
UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container csi-provisioner ready: false, restart count 0 Jan 11 19:52:06.263: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:52:06.263: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:52:06.263: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:52:06.263: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:52:06.263: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:52:06.263: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:52:06.263: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:52:06.263: INFO: pod-subpath-test-hostpathsymlink-p8ph started at 2020-01-11 19:50:22 +0000 UTC (1+2 container statuses recorded) Jan 11 19:52:06.263: INFO: Init container init-volume-hostpathsymlink-p8ph ready: true, restart count 0 Jan 11 19:52:06.263: INFO: Container test-container-subpath-hostpathsymlink-p8ph ready: false, restart count 3 Jan 11 19:52:06.263: INFO: Container test-container-volume-hostpathsymlink-p8ph ready: false, restart count 0 Jan 11 19:52:06.263: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container coredns ready: true, restart count 0 Jan 11 19:52:06.263: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:06.263: INFO: Container calico-typha ready: true, restart count 0 W0111 19:52:06.354135 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:52:06.558: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 19:52:06.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6960" for this suite. 
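The failure recorded above is a substring mismatch: the test expected the permission string "dtrwxrwx" for /test-volume, but the container reported "dgtrwxrwxrwx" on a tmpfs mount, and the extra "g" (setgid bit) means the expected substring never appears in the output. A small sketch for inspecting the same properties by hand; the pod name, image and the /tmp-backed hostPath are assumptions, not the suite's exact volume source:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && grep test-volume /proc/mounts"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    hostPath:
      path: /tmp/hostpath-mode-demo
      type: DirectoryOrCreate
EOF
# ls -ld shows the permission string (the leading characters the test matches on),
# and /proc/mounts shows the backing filesystem type (tmpfs in the failed run above).
sleep 15 && kubectl logs hostpath-mode-demo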
Jan 11 19:52:12.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:16.213: INFO: namespace hostpath-6960 deletion completed in 9.564218839s • Failure [14.542 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:52:04.990: Unexpected error: <*errors.errorString | 0xc0048ec160>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:09.206: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: validating cluster-info Jan 11 19:52:09.951: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config cluster-info' Jan 11 19:52:10.917: INFO: stderr: "" Jan 11 19:52:10.917: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com\x1b[0m\n\x1b[0;32mCoreDNS\x1b[0m is running at \x1b[0;33mhttps://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\x1b[0;32mkubernetes-dashboard\x1b[0m is running at \x1b[0;33mhttps://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:52:10.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-2" for this suite. Jan 11 19:52:19.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:22.596: INFO: namespace kubectl-2 deletion completed in 11.587653763s • [SLOW TEST:13.390 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl cluster-info /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:974 should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:29.403: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-8361 STEP: Waiting for a default service account to be provisioned in namespace [It] should support restarting containers using directory as subpath [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:303 Jan 11 19:50:30.150: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:50:30.240: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpath-67j8 STEP: Failing liveness probe Jan 11 19:50:34.512: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-8361 pod-subpath-test-hostpath-67j8 --container test-container-volume-hostpath-67j8 -- /bin/sh -c rm /probe-volume/probe-file' Jan 11 19:50:35.837: INFO: stderr: "" Jan 11 19:50:35.837: INFO: stdout: "" Jan 11 19:50:35.837: INFO: Pod exec output: STEP: Waiting for container to restart Jan 11 19:50:35.927: INFO: Container test-container-subpath-hostpath-67j8, restarts: 0 Jan 11 19:50:46.017: INFO: Container test-container-subpath-hostpath-67j8, restarts: 2 Jan 11 19:50:46.017: INFO: Container has restart count: 2 STEP: Rewriting the file Jan 11 19:50:46.017: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-8361 pod-subpath-test-hostpath-67j8 --container test-container-volume-hostpath-67j8 -- /bin/sh -c echo test-after > /probe-volume/probe-file' Jan 11 19:50:47.322: INFO: stderr: "" Jan 11 19:50:47.322: INFO: stdout: "" Jan 11 19:50:47.322: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jan 11 19:51:01.502: INFO: Container has restart count: 3 Jan 11 19:52:03.502: INFO: Container restart 
has stabilized Jan 11 19:52:03.502: INFO: Deleting pod "pod-subpath-test-hostpath-67j8" in namespace "provisioning-8361" Jan 11 19:52:03.598: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-67j8" to be fully deleted STEP: Deleting pod Jan 11 19:52:19.778: INFO: Deleting pod "pod-subpath-test-hostpath-67j8" in namespace "provisioning-8361" Jan 11 19:52:19.867: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:52:19.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-8361" for this suite. Jan 11 19:52:26.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:29.539: INFO: namespace provisioning-8361 deletion completed in 9.580597425s • [SLOW TEST:120.137 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support restarting containers using directory as subpath [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:303 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:16.218: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-9151 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 19:52:17.537: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9151" to be "success or failure" Jan 11 19:52:17.626: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.314147ms Jan 11 19:52:19.716: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179051857s STEP: Saw pod success Jan 11 19:52:19.716: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 19:52:19.806: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 19:52:19.995: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 19:52:20.083: INFO: Pod pod-host-path-test no longer exists Jan 11 19:52:20.084: FAIL: Unexpected error: <*errors.errorString | 0xc00377b210>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-9151". STEP: Found 7 events. Jan 11 19:52:20.175: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-9151/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 19:52:20.175: INFO: At 2020-01-11 19:52:18 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:52:20.175: INFO: At 2020-01-11 19:52:18 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 19:52:20.175: INFO: At 2020-01-11 19:52:18 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 19:52:20.175: INFO: At 2020-01-11 19:52:18 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:52:20.175: INFO: At 2020-01-11 19:52:18 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 19:52:20.175: INFO: At 2020-01-11 19:52:18 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 19:52:20.264: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 19:52:20.264: INFO: Jan 11 19:52:20.444: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 19:52:20.534: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 60400 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:51:24 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:14 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:14 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:14 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:52:14 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 
eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:52:20.535: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 19:52:20.624: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 19:52:20.722: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 19:52:20.722: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:52:20.722: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:52:20.722: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:52:20.722: INFO: forbid-1578772200-2qvmj started at 2020-01-11 19:50:09 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:20.722: INFO: Container c ready: true, restart count 0 Jan 11 19:52:20.722: INFO: liveness-d9c04d87-22d3-4723-91d9-3bcb6c488d03 started at 2020-01-11 19:50:32 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:20.722: INFO: Container liveness ready: true, restart count 0 Jan 11 19:52:20.722: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:20.722: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:52:20.722: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses 
recorded) Jan 11 19:52:20.722: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:52:20.722: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:20.722: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:52:20.722: INFO: pod-secrets-80f57524-b8be-4384-b63f-e0587d44498a started at 2020-01-11 19:50:05 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:20.722: INFO: Container creates-volume-test ready: false, restart count 0 W0111 19:52:20.813179 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:52:21.017: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 19:52:21.017: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 19:52:21.107: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 60405 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 
UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:15 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:15 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:15 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:52:15 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 
eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d 
eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:52:21.107: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 19:52:21.197: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 19:52:21.302: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:52:21.302: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:52:21.302: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:52:21.302: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container coredns ready: true, restart count 0 Jan 11 19:52:21.302: INFO: hostpath-symlink-prep-provisioning-124 started at 2020-01-11 19:52:19 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container init-volume-provisioning-124 ready: false, restart count 0 Jan 11 19:52:21.302: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:52:21.302: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:52:21.302: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:52:21.302: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:52:21.302: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:52:21.302: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:52:21.302: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:52:21.302: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container coredns ready: true, restart count 0 Jan 11 19:52:21.302: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses 
recorded) Jan 11 19:52:21.302: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:52:21.302: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:52:21.302: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:52:21.302: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:52:21.302: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:52:21.302: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:21.302: INFO: Container vpn-shoot ready: true, restart count 0 W0111 19:52:21.393514 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:52:21.609: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 19:52:21.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9151" for this suite. Jan 11 19:52:27.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:31.259: INFO: namespace hostpath-9151 deletion completed in 9.55954327s • Failure [15.042 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:52:20.084: Unexpected error: <*errors.errorString | 0xc00377b210>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:18.998: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename 
provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-124 STEP: Waiting for a default service account to be provisioned in namespace [It] should support restarting containers using file as subpath [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:318 Jan 11 19:50:19.652: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:50:19.834: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-124" in namespace "provisioning-124" to be "success or failure" Jan 11 19:50:19.924: INFO: Pod "hostpath-symlink-prep-provisioning-124": Phase="Pending", Reason="", readiness=false. Elapsed: 90.046192ms Jan 11 19:50:22.015: INFO: Pod "hostpath-symlink-prep-provisioning-124": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180471537s STEP: Saw pod success Jan 11 19:50:22.015: INFO: Pod "hostpath-symlink-prep-provisioning-124" satisfied condition "success or failure" Jan 11 19:50:22.015: INFO: Deleting pod "hostpath-symlink-prep-provisioning-124" in namespace "provisioning-124" Jan 11 19:50:22.167: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-124" to be fully deleted Jan 11 19:50:22.256: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-p8ph STEP: Failing liveness probe Jan 11 19:50:26.528: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-124 pod-subpath-test-hostpathsymlink-p8ph --container test-container-volume-hostpathsymlink-p8ph -- /bin/sh -c rm /probe-volume/probe-file' Jan 11 19:50:27.836: INFO: stderr: "" Jan 11 19:50:27.836: INFO: stdout: "" Jan 11 19:50:27.836: INFO: Pod exec output: STEP: Waiting for container to restart Jan 11 19:50:27.926: INFO: Container test-container-subpath-hostpathsymlink-p8ph, restarts: 0 Jan 11 19:50:38.016: INFO: Container test-container-subpath-hostpathsymlink-p8ph, restarts: 2 Jan 11 19:50:38.016: INFO: Container has restart count: 2 STEP: Rewriting the file Jan 11 19:50:38.016: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-124 pod-subpath-test-hostpathsymlink-p8ph --container test-container-volume-hostpathsymlink-p8ph -- /bin/sh -c echo test-after > /probe-volume/probe-file' Jan 11 19:50:39.325: INFO: stderr: "" Jan 11 19:50:39.325: INFO: stdout: "" Jan 11 19:50:39.325: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jan 11 19:50:59.506: INFO: Container has restart count: 3 Jan 11 19:52:01.506: INFO: Container restart has stabilized Jan 11 19:52:01.506: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-p8ph" in namespace "provisioning-124" Jan 11 19:52:01.597: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpathsymlink-p8ph" to be fully deleted STEP: Deleting pod Jan 11 19:52:19.777: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-p8ph" in namespace "provisioning-124" Jan 11 19:52:19.960: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-124" in namespace "provisioning-124" to be "success or failure" Jan 11 19:52:20.050: INFO: Pod "hostpath-symlink-prep-provisioning-124": 
Phase="Pending", Reason="", readiness=false. Elapsed: 89.763197ms Jan 11 19:52:22.140: INFO: Pod "hostpath-symlink-prep-provisioning-124": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180200151s STEP: Saw pod success Jan 11 19:52:22.140: INFO: Pod "hostpath-symlink-prep-provisioning-124" satisfied condition "success or failure" Jan 11 19:52:22.140: INFO: Deleting pod "hostpath-symlink-prep-provisioning-124" in namespace "provisioning-124" Jan 11 19:52:22.232: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-124" to be fully deleted Jan 11 19:52:22.322: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:52:22.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-124" for this suite. Jan 11 19:52:28.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:31.999: INFO: namespace provisioning-124 deletion completed in 9.586048915s • [SLOW TEST:133.002 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support restarting containers using file as subpath [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:318 ------------------------------ SSS ------------------------------ [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:79 [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:51:06.102: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename ephemeral STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ephemeral-9708 STEP: Waiting for a default service account to be provisioned in namespace [It] should support multiple inline ephemeral volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:177 STEP: deploying csi-hostpath driver Jan 11 19:51:07.340: INFO: creating *v1.ServiceAccount: ephemeral-9708/csi-attacher Jan 11 19:51:07.430: INFO: creating *v1.ClusterRole: 
external-attacher-runner-ephemeral-9708 Jan 11 19:51:07.430: INFO: Define cluster role external-attacher-runner-ephemeral-9708 Jan 11 19:51:07.520: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-9708 Jan 11 19:51:07.610: INFO: creating *v1.Role: ephemeral-9708/external-attacher-cfg-ephemeral-9708 Jan 11 19:51:07.700: INFO: creating *v1.RoleBinding: ephemeral-9708/csi-attacher-role-cfg Jan 11 19:51:07.789: INFO: creating *v1.ServiceAccount: ephemeral-9708/csi-provisioner Jan 11 19:51:07.879: INFO: creating *v1.ClusterRole: external-provisioner-runner-ephemeral-9708 Jan 11 19:51:07.879: INFO: Define cluster role external-provisioner-runner-ephemeral-9708 Jan 11 19:51:07.969: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-9708 Jan 11 19:51:08.058: INFO: creating *v1.Role: ephemeral-9708/external-provisioner-cfg-ephemeral-9708 Jan 11 19:51:08.148: INFO: creating *v1.RoleBinding: ephemeral-9708/csi-provisioner-role-cfg Jan 11 19:51:08.238: INFO: creating *v1.ServiceAccount: ephemeral-9708/csi-snapshotter Jan 11 19:51:08.328: INFO: creating *v1.ClusterRole: external-snapshotter-runner-ephemeral-9708 Jan 11 19:51:08.328: INFO: Define cluster role external-snapshotter-runner-ephemeral-9708 Jan 11 19:51:08.418: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-9708 Jan 11 19:51:08.508: INFO: creating *v1.Role: ephemeral-9708/external-snapshotter-leaderelection-ephemeral-9708 Jan 11 19:51:08.597: INFO: creating *v1.RoleBinding: ephemeral-9708/external-snapshotter-leaderelection Jan 11 19:51:08.687: INFO: creating *v1.ServiceAccount: ephemeral-9708/csi-resizer Jan 11 19:51:08.777: INFO: creating *v1.ClusterRole: external-resizer-runner-ephemeral-9708 Jan 11 19:51:08.778: INFO: Define cluster role external-resizer-runner-ephemeral-9708 Jan 11 19:51:08.867: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-9708 Jan 11 19:51:08.957: INFO: creating *v1.Role: ephemeral-9708/external-resizer-cfg-ephemeral-9708 Jan 11 19:51:09.047: INFO: creating *v1.RoleBinding: ephemeral-9708/csi-resizer-role-cfg Jan 11 19:51:09.137: INFO: creating *v1.Service: ephemeral-9708/csi-hostpath-attacher Jan 11 19:51:09.231: INFO: creating *v1.StatefulSet: ephemeral-9708/csi-hostpath-attacher Jan 11 19:51:09.321: INFO: creating *v1beta1.CSIDriver: csi-hostpath-ephemeral-9708 Jan 11 19:51:09.411: INFO: creating *v1.Service: ephemeral-9708/csi-hostpathplugin Jan 11 19:51:09.504: INFO: creating *v1.StatefulSet: ephemeral-9708/csi-hostpathplugin Jan 11 19:51:09.594: INFO: creating *v1.Service: ephemeral-9708/csi-hostpath-provisioner Jan 11 19:51:09.688: INFO: creating *v1.StatefulSet: ephemeral-9708/csi-hostpath-provisioner Jan 11 19:51:09.778: INFO: creating *v1.Service: ephemeral-9708/csi-hostpath-resizer Jan 11 19:51:09.871: INFO: creating *v1.StatefulSet: ephemeral-9708/csi-hostpath-resizer Jan 11 19:51:09.961: INFO: creating *v1.Service: ephemeral-9708/csi-snapshotter Jan 11 19:51:10.055: INFO: creating *v1.StatefulSet: ephemeral-9708/csi-snapshotter Jan 11 19:51:10.145: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-9708 STEP: checking the requested inline volume exists in the pod running on node {Name:ip-10-250-7-77.ec2.internal Selector:map[] Affinity:nil} Jan 11 19:51:28.843: INFO: Pod inline-volume-tester-w4rf8 has the following logs: /dev/nvme0n1p9 on /mnt/test-0 type ext4 (rw,seclabel,relatime) /dev/nvme0n1p9 on /mnt/test-1 type ext4 (rw,seclabel,relatime) STEP: Deleting pod inline-volume-tester-w4rf8 in namespace 
ephemeral-9708 STEP: uninstalling csi-hostpath driver Jan 11 19:52:01.114: INFO: deleting *v1.ServiceAccount: ephemeral-9708/csi-attacher Jan 11 19:52:01.206: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-9708 Jan 11 19:52:01.298: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-9708 Jan 11 19:52:01.401: INFO: deleting *v1.Role: ephemeral-9708/external-attacher-cfg-ephemeral-9708 Jan 11 19:52:01.496: INFO: deleting *v1.RoleBinding: ephemeral-9708/csi-attacher-role-cfg Jan 11 19:52:01.587: INFO: deleting *v1.ServiceAccount: ephemeral-9708/csi-provisioner Jan 11 19:52:01.678: INFO: deleting *v1.ClusterRole: external-provisioner-runner-ephemeral-9708 Jan 11 19:52:01.770: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-9708 Jan 11 19:52:01.865: INFO: deleting *v1.Role: ephemeral-9708/external-provisioner-cfg-ephemeral-9708 Jan 11 19:52:01.960: INFO: deleting *v1.RoleBinding: ephemeral-9708/csi-provisioner-role-cfg Jan 11 19:52:02.066: INFO: deleting *v1.ServiceAccount: ephemeral-9708/csi-snapshotter Jan 11 19:52:02.160: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-ephemeral-9708 Jan 11 19:52:02.251: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-9708 Jan 11 19:52:02.343: INFO: deleting *v1.Role: ephemeral-9708/external-snapshotter-leaderelection-ephemeral-9708 Jan 11 19:52:02.434: INFO: deleting *v1.RoleBinding: ephemeral-9708/external-snapshotter-leaderelection Jan 11 19:52:02.525: INFO: deleting *v1.ServiceAccount: ephemeral-9708/csi-resizer Jan 11 19:52:02.617: INFO: deleting *v1.ClusterRole: external-resizer-runner-ephemeral-9708 Jan 11 19:52:02.708: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-9708 Jan 11 19:52:02.800: INFO: deleting *v1.Role: ephemeral-9708/external-resizer-cfg-ephemeral-9708 Jan 11 19:52:02.891: INFO: deleting *v1.RoleBinding: ephemeral-9708/csi-resizer-role-cfg Jan 11 19:52:02.984: INFO: deleting *v1.Service: ephemeral-9708/csi-hostpath-attacher Jan 11 19:52:03.081: INFO: deleting *v1.StatefulSet: ephemeral-9708/csi-hostpath-attacher Jan 11 19:52:03.172: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-ephemeral-9708 Jan 11 19:52:03.263: INFO: deleting *v1.Service: ephemeral-9708/csi-hostpathplugin Jan 11 19:52:03.359: INFO: deleting *v1.StatefulSet: ephemeral-9708/csi-hostpathplugin Jan 11 19:52:03.451: INFO: deleting *v1.Service: ephemeral-9708/csi-hostpath-provisioner Jan 11 19:52:03.546: INFO: deleting *v1.StatefulSet: ephemeral-9708/csi-hostpath-provisioner Jan 11 19:52:03.637: INFO: deleting *v1.Service: ephemeral-9708/csi-hostpath-resizer Jan 11 19:52:03.734: INFO: deleting *v1.StatefulSet: ephemeral-9708/csi-hostpath-resizer Jan 11 19:52:03.825: INFO: deleting *v1.Service: ephemeral-9708/csi-snapshotter Jan 11 19:52:03.921: INFO: deleting *v1.StatefulSet: ephemeral-9708/csi-snapshotter Jan 11 19:52:04.012: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-9708 [AfterEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:52:04.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled STEP: Destroying namespace "ephemeral-9708" for this suite. 
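For reference, the "multiple inline ephemeral volumes" case exercised above boils down to a pod that declares two CSI volume sources backed by the per-test hostpath driver. The following is a minimal client-go sketch of such a pod, not the e2e suite's own code: the driver name csi-hostpath-ephemeral-9708 and the /mnt/test-0 and /mnt/test-1 mount paths are taken from the log above, while the namespace, image, volume names, command and kubeconfig wiring are assumptions, and the Create call signature assumes a client-go release of v0.18 or newer.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the run above used /tmp/tm/kubeconfig/shoot.config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A pod with two inline ephemeral CSI volumes served by the per-test hostpath driver.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "inline-volume-tester-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "csi-volume-tester",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "mount | grep /mnt/test && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "my-volume-0", MountPath: "/mnt/test-0"},
					{Name: "my-volume-1", MountPath: "/mnt/test-1"},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "my-volume-0", VolumeSource: corev1.VolumeSource{
					CSI: &corev1.CSIVolumeSource{Driver: "csi-hostpath-ephemeral-9708"},
				}},
				{Name: "my-volume-1", VolumeSource: corev1.VolumeSource{
					CSI: &corev1.CSIVolumeSource{Driver: "csi-hostpath-ephemeral-9708"},
				}},
			},
		},
	}

	created, err := client.CoreV1().Pods("ephemeral-9708").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}

The pod logs captured above ("/dev/nvme0n1p9 on /mnt/test-0 ... /mnt/test-1 ...") are exactly what such a mount listing produces once both inline volumes are published on the node.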
Jan 11 19:52:32.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:35.780: INFO: namespace ephemeral-9708 deletion completed in 31.584531364s • [SLOW TEST:89.679 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support multiple inline ephemeral volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:177 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:31.262: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-7250 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 19:52:32.442: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7250" to be "success or failure" Jan 11 19:52:32.532: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.213265ms Jan 11 19:52:34.621: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.178603726s STEP: Saw pod success Jan 11 19:52:34.621: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 19:52:34.711: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 19:52:34.982: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 19:52:35.072: INFO: Pod pod-host-path-test no longer exists Jan 11 19:52:35.072: FAIL: Unexpected error: <*errors.errorString | 0xc002e13e90>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-7250". STEP: Found 7 events. Jan 11 19:52:35.162: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-7250/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 19:52:35.162: INFO: At 2020-01-11 19:52:33 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:52:35.162: INFO: At 2020-01-11 19:52:33 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 19:52:35.162: INFO: At 2020-01-11 19:52:33 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 19:52:35.162: INFO: At 2020-01-11 19:52:33 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:52:35.162: INFO: At 2020-01-11 19:52:33 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 19:52:35.162: INFO: At 2020-01-11 19:52:33 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 19:52:35.251: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 19:52:35.251: INFO: Jan 11 19:52:35.431: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 19:52:35.520: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 60584 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:34 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:34 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:34 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:52:34 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 
eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:52:35.521: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 19:52:35.610: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 19:52:35.708: INFO: affinity-clusterip-transition-gqtp8 started at 2020-01-11 19:52:23 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 11 19:52:35.709: INFO: affinity-clusterip-transition-clk7h started at 2020-01-11 19:52:23 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 11 19:52:35.709: INFO: webhook-to-be-mutated started at 2020-01-11 19:52:34 +0000 UTC (1+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Init container webhook-added-init-container ready: false, restart count 0 Jan 11 19:52:35.709: INFO: Container example ready: false, restart count 0 Jan 11 19:52:35.709: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:52:35.709: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:52:35.709: INFO: Container calico-node 
ready: true, restart count 0 Jan 11 19:52:35.709: INFO: forbid-1578772200-2qvmj started at 2020-01-11 19:50:09 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container c ready: true, restart count 0 Jan 11 19:52:35.709: INFO: execpod-affinityz22vc started at 2020-01-11 19:52:26 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container agnhost-pause ready: true, restart count 0 Jan 11 19:52:35.709: INFO: liveness-d9c04d87-22d3-4723-91d9-3bcb6c488d03 started at 2020-01-11 19:50:32 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container liveness ready: true, restart count 0 Jan 11 19:52:35.709: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:52:35.709: INFO: sample-webhook-deployment-86d95b659d-lpzjp started at 2020-01-11 19:52:30 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container sample-webhook ready: true, restart count 0 Jan 11 19:52:35.709: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:52:35.709: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:52:35.709: INFO: pod-secrets-80f57524-b8be-4384-b63f-e0587d44498a started at 2020-01-11 19:50:05 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:35.709: INFO: Container creates-volume-test ready: false, restart count 0 W0111 19:52:35.799244 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
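The mode strings in the HostPath failure above (and in the identical failure later in this section) have Go's os.FileMode formatting, which is what the mounttest image prints: 'd' marks a directory, 't' the sticky bit and 'g' the setgid bit. The e2e assertion looks for the substring "dtrwxrwx" (a sticky, world-writable directory), while the container reported "dgtrwxrwxrwx", i.e. the same directory with the setgid bit also set, so the substring match fails. A minimal sketch reproducing the two strings, assuming nothing beyond Go's standard library:

package main

import (
	"fmt"
	"os"
)

func main() {
	// What the e2e check expects to see: a sticky, world-writable directory.
	expected := os.ModeDir | os.ModeSticky | 0777
	// What this node's /test-volume reported: sticky plus setgid.
	observed := os.ModeDir | os.ModeSetgid | os.ModeSticky | 0777

	fmt.Println(expected) // dtrwxrwxrwx
	fmt.Println(observed) // dgtrwxrwxrwx
}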
Jan 11 19:52:36.006: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 19:52:36.006: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 19:52:36.096: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 60599 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 
UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:35 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:35 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:35 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:52:35 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de 
quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:52:36.096: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 19:52:36.187: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 19:52:36.290: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.290: INFO: Container node-exporter ready: true, restart count 
0 Jan 11 19:52:36.290: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.290: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:52:36.290: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.290: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:52:36.290: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.290: INFO: Container coredns ready: true, restart count 0 Jan 11 19:52:36.290: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.290: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:52:36.290: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.290: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:52:36.290: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.290: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:52:36.290: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.290: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:52:36.290: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.290: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:52:36.291: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.291: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 19:52:36.291: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.291: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:52:36.291: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.291: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:52:36.291: INFO: affinity-clusterip-transition-b2nms started at 2020-01-11 19:52:23 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.291: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 11 19:52:36.291: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.291: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:52:36.291: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.291: INFO: Container coredns ready: true, restart count 0 Jan 11 19:52:36.291: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 19:52:36.291: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:52:36.291: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:52:36.291: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:52:36.291: INFO: node-problem-detector-jx2p4 
started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:36.291: INFO: Container node-problem-detector ready: true, restart count 0 W0111 19:52:36.381196 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:52:36.595: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 19:52:36.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7250" for this suite. Jan 11 19:52:42.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:46.253: INFO: namespace hostpath-7250 deletion completed in 9.566629416s • Failure [14.991 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:52:35.072: Unexpected error: <*errors.errorString | 0xc002e13e90>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:32.006: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7399 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 11 19:52:32.846: INFO: Waiting up to 5m0s for pod "pod-68fe6265-21aa-4c35-b1b4-f23f1ca7db0b" in namespace "emptydir-7399" to be "success or failure" Jan 11 19:52:32.935: INFO: Pod "pod-68fe6265-21aa-4c35-b1b4-f23f1ca7db0b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.89942ms Jan 11 19:52:35.026: INFO: Pod "pod-68fe6265-21aa-4c35-b1b4-f23f1ca7db0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180123574s STEP: Saw pod success Jan 11 19:52:35.026: INFO: Pod "pod-68fe6265-21aa-4c35-b1b4-f23f1ca7db0b" satisfied condition "success or failure" Jan 11 19:52:35.115: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-68fe6265-21aa-4c35-b1b4-f23f1ca7db0b container test-container: STEP: delete the pod Jan 11 19:52:35.313: INFO: Waiting for pod pod-68fe6265-21aa-4c35-b1b4-f23f1ca7db0b to disappear Jan 11 19:52:35.403: INFO: Pod pod-68fe6265-21aa-4c35-b1b4-f23f1ca7db0b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:52:35.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7399" for this suite. Jan 11 19:52:43.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:47.080: INFO: namespace emptydir-7399 deletion completed in 11.585720369s • [SLOW TEST:15.075 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:35.796: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8660 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 19:52:36.544: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f1d6bae-b9d9-415b-a09e-8a62fdf7cc6a" in namespace "downward-api-8660" to be "success or failure" Jan 11 19:52:36.634: INFO: Pod "downwardapi-volume-6f1d6bae-b9d9-415b-a09e-8a62fdf7cc6a": Phase="Pending", Reason="", readiness=false. Elapsed: 89.610233ms Jan 11 19:52:38.724: INFO: Pod "downwardapi-volume-6f1d6bae-b9d9-415b-a09e-8a62fdf7cc6a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179768527s STEP: Saw pod success Jan 11 19:52:38.724: INFO: Pod "downwardapi-volume-6f1d6bae-b9d9-415b-a09e-8a62fdf7cc6a" satisfied condition "success or failure" Jan 11 19:52:38.815: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-6f1d6bae-b9d9-415b-a09e-8a62fdf7cc6a container client-container: STEP: delete the pod Jan 11 19:52:39.015: INFO: Waiting for pod downwardapi-volume-6f1d6bae-b9d9-415b-a09e-8a62fdf7cc6a to disappear Jan 11 19:52:39.105: INFO: Pod downwardapi-volume-6f1d6bae-b9d9-415b-a09e-8a62fdf7cc6a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:52:39.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8660" for this suite. Jan 11 19:52:45.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:48.783: INFO: namespace downward-api-8660 deletion completed in 9.587357479s • [SLOW TEST:12.987 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:46.255: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-1462 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 19:52:46.983: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1462" to be "success or failure" Jan 11 19:52:47.072: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 88.888246ms Jan 11 19:52:49.162: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.178329023s STEP: Saw pod success Jan 11 19:52:49.162: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 19:52:49.251: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 19:52:49.440: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 19:52:49.529: INFO: Pod pod-host-path-test no longer exists Jan 11 19:52:49.529: FAIL: Unexpected error: <*errors.errorString | 0xc00454eed0>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-1462". STEP: Found 7 events. Jan 11 19:52:49.619: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-1462/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 19:52:49.619: INFO: At 2020-01-11 19:52:47 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:52:49.619: INFO: At 2020-01-11 19:52:47 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 19:52:49.619: INFO: At 2020-01-11 19:52:47 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 19:52:49.619: INFO: At 2020-01-11 19:52:47 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:52:49.619: INFO: At 2020-01-11 19:52:47 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 19:52:49.619: INFO: At 2020-01-11 19:52:47 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 19:52:49.708: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 19:52:49.708: INFO: Jan 11 19:52:49.888: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 19:52:49.977: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 60665 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:44 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:44 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:44 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:52:44 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 
eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:52:49.977: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 19:52:50.068: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 19:52:50.166: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:52:50.166: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:52:50.166: INFO: pod-projected-secrets-cef06617-0cbd-446c-869c-1d6af1d08ab0 started at 2020-01-11 19:52:49 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container projected-secret-volume-test ready: false, restart count 0 Jan 11 19:52:50.166: INFO: pod-secrets-80f57524-b8be-4384-b63f-e0587d44498a started at 2020-01-11 19:50:05 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container creates-volume-test ready: false, restart count 0 Jan 11 19:52:50.166: INFO: affinity-clusterip-transition-gqtp8 started at 2020-01-11 19:52:23 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container 
affinity-clusterip-transition ready: true, restart count 0 Jan 11 19:52:50.166: INFO: affinity-clusterip-transition-clk7h started at 2020-01-11 19:52:23 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 11 19:52:50.166: INFO: simpletest.rc-vv88b started at 2020-01-11 19:52:48 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container nginx ready: false, restart count 0 Jan 11 19:52:50.166: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:52:50.166: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:52:50.166: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:52:50.166: INFO: forbid-1578772200-2qvmj started at 2020-01-11 19:50:09 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container c ready: true, restart count 0 Jan 11 19:52:50.166: INFO: execpod-affinityz22vc started at 2020-01-11 19:52:26 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container agnhost-pause ready: true, restart count 0 Jan 11 19:52:50.166: INFO: liveness-d9c04d87-22d3-4723-91d9-3bcb6c488d03 started at 2020-01-11 19:50:32 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container liveness ready: true, restart count 0 Jan 11 19:52:50.166: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.166: INFO: Container kube-proxy ready: true, restart count 0 W0111 19:52:50.256789 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
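The block above is the e2e framework's standard post-failure triage for a node: it dumps the Node object (conditions, capacity, image list), lists the pods the kubelet reports for that node, and notes that scheduler/controller-manager metrics cannot be grabbed because no master node is registered in this cluster. To repeat the same node-condition check outside the suite, a minimal sketch in Go follows; it assumes a recent client-go release (the Get call that takes a context) and simply reuses the kubeconfig path and node name that appear in this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the node that hosted the failing pod and print its conditions,
	// mirroring the NodeCondition entries dumped in the log above.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ip-10-250-27-25.ec2.internal", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
}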
Jan 11 19:52:50.475: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 19:52:50.475: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 19:52:50.565: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 60676 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 
UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:45 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:45 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:45 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:52:45 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de 
quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:52:50.566: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 19:52:50.655: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 19:52:50.760: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Init container install-cni ready: true, restart 
count 0 Jan 11 19:52:50.760: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:52:50.760: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:52:50.760: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:52:50.760: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:52:50.760: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:52:50.760: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:52:50.760: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container coredns ready: true, restart count 0 Jan 11 19:52:50.760: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:52:50.760: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:52:50.760: INFO: simpletest.rc-j4w7n started at 2020-01-11 19:52:48 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container nginx ready: true, restart count 0 Jan 11 19:52:50.760: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:52:50.760: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:52:50.760: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:52:50.760: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 19:52:50.760: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:52:50.760: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:52:50.760: INFO: affinity-clusterip-transition-b2nms started at 2020-01-11 19:52:23 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 11 19:52:50.760: INFO: 
blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:52:50.760: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:52:50.760: INFO: Container coredns ready: true, restart count 0 W0111 19:52:50.850676 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:52:51.058: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 19:52:51.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1462" for this suite. Jan 11 19:52:57.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:00.710: INFO: namespace hostpath-1462 deletion completed in 9.561147608s • Failure [14.455 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:52:49.529: Unexpected error: <*errors.errorString | 0xc00454eed0>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:48.792: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6657 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with secret that has name projected-secret-test-684ce689-e3aa-45cc-8e11-0ac57e3de955 STEP: Creating a pod to test consume secrets Jan 11 19:52:49.632: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cef06617-0cbd-446c-869c-1d6af1d08ab0" in namespace "projected-6657" to be "success or failure" Jan 11 19:52:49.722: INFO: Pod "pod-projected-secrets-cef06617-0cbd-446c-869c-1d6af1d08ab0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 89.620646ms Jan 11 19:52:51.812: INFO: Pod "pod-projected-secrets-cef06617-0cbd-446c-869c-1d6af1d08ab0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179888825s STEP: Saw pod success Jan 11 19:52:51.812: INFO: Pod "pod-projected-secrets-cef06617-0cbd-446c-869c-1d6af1d08ab0" satisfied condition "success or failure" Jan 11 19:52:51.902: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-secrets-cef06617-0cbd-446c-869c-1d6af1d08ab0 container projected-secret-volume-test: STEP: delete the pod Jan 11 19:52:52.091: INFO: Waiting for pod pod-projected-secrets-cef06617-0cbd-446c-869c-1d6af1d08ab0 to disappear Jan 11 19:52:52.181: INFO: Pod pod-projected-secrets-cef06617-0cbd-446c-869c-1d6af1d08ab0 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:52:52.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6657" for this suite. Jan 11 19:52:58.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:01.860: INFO: namespace projected-6657 deletion completed in 9.588008738s • [SLOW TEST:13.068 seconds] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:29.552: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4534 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 19:52:31.204: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369150, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369150, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369150, loc:(*time.Location)(0x84bfb00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369150, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 19:52:34.389: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:52:34.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4534" for this suite. Jan 11 19:52:49.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:52:52.663: INFO: namespace webhook-4534 deletion completed in 17.578660092s STEP: Destroying namespace "webhook-4534-markers" for this suite. Jan 11 19:52:58.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:02.246: INFO: namespace webhook-4534-markers deletion completed in 9.582948794s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:33.055 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:47.100: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4442 STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0111 19:52:59.195554 8611 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
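The garbage-collector case above reduces to a single API interaction: the ReplicationController is deleted with a non-orphaning propagation policy, and the suite then only waits for the garbage collector to remove the pods the controller owned. A minimal sketch of that delete step follows, assuming a recent client-go; the namespace gc-4442 comes from this log, while the controller name "simpletest.rc" is inferred from the simpletest.rc-* pod names logged earlier and is illustrative rather than confirmed by the test source.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Delete the controller with background propagation so the garbage
	// collector removes its pods instead of orphaning them.
	policy := metav1.DeletePropagationBackground
	if err := cs.CoreV1().ReplicationControllers("gc-4442").Delete(
		context.TODO(), "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}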
Jan 11 19:52:59.195: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:52:59.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4442" for this suite. Jan 11 19:53:05.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:08.884: INFO: namespace gc-4442 deletion completed in 9.597417109s • [SLOW TEST:21.784 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:01.874: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2065 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 11 19:53:02.617: INFO: Waiting up to 5m0s for pod "pod-98d8bb4d-97a4-4494-a6d4-adfeea3a0ddb" in namespace "emptydir-2065" to be "success or failure" Jan 11 19:53:02.707: INFO: Pod "pod-98d8bb4d-97a4-4494-a6d4-adfeea3a0ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 89.363931ms Jan 11 19:53:04.797: INFO: Pod "pod-98d8bb4d-97a4-4494-a6d4-adfeea3a0ddb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179685435s STEP: Saw pod success Jan 11 19:53:04.797: INFO: Pod "pod-98d8bb4d-97a4-4494-a6d4-adfeea3a0ddb" satisfied condition "success or failure" Jan 11 19:53:04.887: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-98d8bb4d-97a4-4494-a6d4-adfeea3a0ddb container test-container: STEP: delete the pod Jan 11 19:53:05.083: INFO: Waiting for pod pod-98d8bb4d-97a4-4494-a6d4-adfeea3a0ddb to disappear Jan 11 19:53:05.173: INFO: Pod pod-98d8bb4d-97a4-4494-a6d4-adfeea3a0ddb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:05.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2065" for this suite. Jan 11 19:53:11.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:14.849: INFO: namespace emptydir-2065 deletion completed in 9.584614428s • [SLOW TEST:12.975 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:00.712: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-9417 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 19:53:01.442: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9417" to be "success or failure" Jan 11 19:53:01.531: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.117172ms Jan 11 19:53:03.621: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.17893528s STEP: Saw pod success Jan 11 19:53:03.621: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 19:53:03.711: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 19:53:03.899: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 19:53:03.989: INFO: Pod pod-host-path-test no longer exists Jan 11 19:53:03.989: FAIL: Unexpected error: <*errors.errorString | 0xc002d1b320>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-9417". STEP: Found 7 events. Jan 11 19:53:04.079: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-9417/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 19:53:04.079: INFO: At 2020-01-11 19:53:02 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:53:04.079: INFO: At 2020-01-11 19:53:02 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 19:53:04.079: INFO: At 2020-01-11 19:53:02 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 19:53:04.079: INFO: At 2020-01-11 19:53:02 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 19:53:04.079: INFO: At 2020-01-11 19:53:02 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 19:53:04.079: INFO: At 2020-01-11 19:53:02 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 19:53:04.169: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 19:53:04.169: INFO: Jan 11 19:53:04.350: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 19:53:04.440: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 60814 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:52:25 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:53:04 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:53:04 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:53:04 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:53:04 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 
eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:53:04.440: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 19:53:04.530: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 19:53:04.627: INFO: pod-98d8bb4d-97a4-4494-a6d4-adfeea3a0ddb started at 2020-01-11 19:53:02 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.627: INFO: Container test-container ready: false, restart count 0 Jan 11 19:53:04.627: INFO: pod-secrets-80f57524-b8be-4384-b63f-e0587d44498a started at 2020-01-11 19:50:05 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.627: INFO: Container creates-volume-test ready: false, restart count 0 Jan 11 19:53:04.627: INFO: affinity-clusterip-transition-gqtp8 started at 2020-01-11 19:52:23 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.628: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 11 19:53:04.628: INFO: affinity-clusterip-transition-clk7h started at 2020-01-11 19:52:23 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.628: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 11 19:53:04.628: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 19:53:04.628: INFO: Init container install-cni 
ready: true, restart count 0 Jan 11 19:53:04.628: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 19:53:04.628: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:53:04.628: INFO: forbid-1578772200-2qvmj started at 2020-01-11 19:50:09 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.628: INFO: Container c ready: true, restart count 0 Jan 11 19:53:04.628: INFO: execpod-affinityz22vc started at 2020-01-11 19:52:26 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.628: INFO: Container agnhost-pause ready: true, restart count 0 Jan 11 19:53:04.628: INFO: liveness-d9c04d87-22d3-4723-91d9-3bcb6c488d03 started at 2020-01-11 19:50:32 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.628: INFO: Container liveness ready: true, restart count 0 Jan 11 19:53:04.628: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.628: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:53:04.628: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.628: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:53:04.628: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:04.628: INFO: Container node-exporter ready: true, restart count 0 W0111 19:53:04.718631 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:53:04.932: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 19:53:04.932: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 19:53:05.023: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 60749 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: 
{{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 19:52:06 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:55 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:55 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 19:52:55 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 19:52:55 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c 
eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f 
eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 19:53:05.023: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 19:53:05.114: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 19:53:05.217: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 19:53:05.217: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 19:53:05.217: INFO: affinity-clusterip-transition-b2nms started at 2020-01-11 19:52:23 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 11 19:53:05.217: INFO: var-expansion-27cdc9b8-eb7c-499e-ae54-e3c9745f0894 started at 2020-01-11 19:53:03 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container dapi-container ready: false, restart count 0 Jan 11 19:53:05.217: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 19:53:05.217: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container coredns ready: true, restart count 0 Jan 11 19:53:05.217: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Init container install-cni ready: true, restart count 0 Jan 11 19:53:05.217: INFO: Init container 
flexvol-driver ready: true, restart count 0 Jan 11 19:53:05.217: INFO: Container calico-node ready: true, restart count 0 Jan 11 19:53:05.217: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 19:53:05.217: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container node-exporter ready: true, restart count 0 Jan 11 19:53:05.217: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 19:53:05.217: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container metrics-server ready: true, restart count 0 Jan 11 19:53:05.217: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container coredns ready: true, restart count 0 Jan 11 19:53:05.217: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container calico-typha ready: true, restart count 0 Jan 11 19:53:05.217: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 19:53:05.217: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container autoscaler ready: true, restart count 0 Jan 11 19:53:05.217: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container autoscaler ready: true, restart count 5 Jan 11 19:53:05.217: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 19:53:05.217: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 19:53:05.217: INFO: Container vpn-shoot ready: true, restart count 0 W0111 19:53:05.307858 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 19:53:05.538: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 19:53:05.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9417" for this suite. 
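
Note: the two node dumps above (conditions, capacity/allocatable, image lists, and the pods each kubelet reports) are the diagnostics the e2e framework prints when a spec fails; the failing spec here is the [sig-storage] HostPath mode check reported just below, which owned namespace hostpath-9417. For reference, a minimal client-go sketch that retrieves the same kind of information follows. It is an illustration only, not the framework's own code; it assumes a recent client-go (context-aware List calls) and reuses the kubeconfig path seen in this run.

// Sketch: list node Ready conditions and the pods scheduled on each node,
// roughly mirroring the per-node diagnostics printed above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this test run's log output.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Print only the Ready condition; the framework logs all conditions.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
		// Pods assigned to this node, matched via the spec.nodeName field selector.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + n.Name,
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("  pod %s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
}
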
Jan 11 19:53:11.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:15.198: INFO: namespace hostpath-9417 deletion completed in 9.568880453s • Failure [14.485 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:53:03.989: Unexpected error: <*errors.errorString | 0xc002d1b320>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 ------------------------------ SS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:02.613: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename var-expansion STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-3482 STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test substitution in container's command Jan 11 19:53:03.347: INFO: Waiting up to 5m0s for pod "var-expansion-27cdc9b8-eb7c-499e-ae54-e3c9745f0894" in namespace "var-expansion-3482" to be "success or failure" Jan 11 19:53:03.436: INFO: Pod "var-expansion-27cdc9b8-eb7c-499e-ae54-e3c9745f0894": Phase="Pending", Reason="", readiness=false. Elapsed: 89.781418ms Jan 11 19:53:05.526: INFO: Pod "var-expansion-27cdc9b8-eb7c-499e-ae54-e3c9745f0894": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179641955s STEP: Saw pod success Jan 11 19:53:05.526: INFO: Pod "var-expansion-27cdc9b8-eb7c-499e-ae54-e3c9745f0894" satisfied condition "success or failure" Jan 11 19:53:05.617: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod var-expansion-27cdc9b8-eb7c-499e-ae54-e3c9745f0894 container dapi-container: STEP: delete the pod Jan 11 19:53:05.807: INFO: Waiting for pod var-expansion-27cdc9b8-eb7c-499e-ae54-e3c9745f0894 to disappear Jan 11 19:53:05.897: INFO: Pod var-expansion-27cdc9b8-eb7c-499e-ae54-e3c9745f0894 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:05.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3482" for this suite. Jan 11 19:53:12.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:15.561: INFO: namespace var-expansion-3482 deletion completed in 9.573391937s • [SLOW TEST:12.948 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:08.909: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename deployment STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-641 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:53:09.552: INFO: Creating deployment "webserver-deployment" Jan 11 19:53:09.642: INFO: Waiting for observed generation 1 Jan 11 19:53:09.733: INFO: Waiting for all required pods to come up Jan 11 19:53:09.824: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 11 19:53:14.006: INFO: Waiting for deployment "webserver-deployment" to complete Jan 11 19:53:14.187: INFO: Updating deployment "webserver-deployment" with a non-existent image Jan 11 19:53:14.367: INFO: Updating deployment webserver-deployment Jan 11 19:53:14.367: INFO: Waiting for observed generation 2 Jan 11 19:53:14.457: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 11 19:53:14.547: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 11 19:53:14.637: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 11 19:53:14.908: INFO: 
Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 11 19:53:14.908: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 11 19:53:14.998: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 11 19:53:15.179: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jan 11 19:53:15.179: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jan 11 19:53:15.360: INFO: Updating deployment webserver-deployment Jan 11 19:53:15.360: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 11 19:53:15.539: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 11 19:53:17.719: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62 Jan 11 19:53:17.899: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-641 /apis/apps/v1/namespaces/deployment-641/deployments/webserver-deployment c2cecc51-aa26-43f4-9068-50bd49e9b431 61084 3 2020-01-11 19:53:09 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034ab988 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-11 19:53:15 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-11 19:53:16 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 11 19:53:17.989: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-641 /apis/apps/v1/namespaces/deployment-641/replicasets/webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 61083 3 2020-01-11 19:53:14 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] 
map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment c2cecc51-aa26-43f4-9068-50bd49e9b431 0xc0034abe87 0xc0034abe88}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034abef8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 19:53:17.989: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 11 19:53:17.989: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-641 /apis/apps/v1/namespaces/deployment-641/replicasets/webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 61074 3 2020-01-11 19:53:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment c2cecc51-aa26-43f4-9068-50bd49e9b431 0xc0034abdc7 0xc0034abdc8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034abe28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 11 19:53:18.082: INFO: Pod "webserver-deployment-595b5b9587-6jldh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6jldh webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-6jldh 404a946d-73fc-462c-b799-9a74d815e64f 61067 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 
5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9c3c7 0xc001e9c3c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:,StartTime:2020-01-11 19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.083: INFO: Pod "webserver-deployment-595b5b9587-6wcqz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6wcqz webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-6wcqz 7e17b406-7b76-4bf1-80ab-f96627ce646b 61093 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.1.210/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9c527 0xc001e9c528}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.083: INFO: Pod "webserver-deployment-595b5b9587-79dz6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-79dz6 webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-79dz6 5f46c1ad-21c4-4a73-8171-f8e90ecf6b23 60937 0 2020-01-11 19:53:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.1.204/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9c687 0xc001e9c688}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.204,StartTime:2020-01-11 19:53:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:53:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://77c5a0b0eecc234d85390752d5bf0794240d8a6a2491e306fe776432b4de839d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.083: INFO: Pod "webserver-deployment-595b5b9587-bcvpz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bcvpz webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-bcvpz 3fb4be8d-8599-4e96-83f2-b0c1bcb0b8c2 61107 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.0.69/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9c807 0xc001e9c808}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:
*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:,StartTime:2020-01-11 19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.083: INFO: Pod "webserver-deployment-595b5b9587-bjg7p" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bjg7p webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-bjg7p 7754cb89-fe13-4e84-b244-5f9de2915d21 61109 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.1.212/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9c967 0xc001e9c968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 
19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.083: INFO: Pod "webserver-deployment-595b5b9587-drfqj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-drfqj webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-drfqj 145173c5-1af6-4497-8deb-6c125e68090c 61079 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9cab7 0xc001e9cab8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNa
mespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 19:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.083: INFO: Pod "webserver-deployment-595b5b9587-fwqwh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fwqwh webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-fwqwh cd1de877-aafc-475b-a851-61334612298e 61111 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.1.213/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9cc17 0xc001e9cc18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 
19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.083: INFO: Pod "webserver-deployment-595b5b9587-g5n58" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g5n58 webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-g5n58 0f07f909-30c4-457d-a5f4-5048896aeaa8 61029 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9cd67 0xc001e9cd68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNa
mespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.084: INFO: Pod "webserver-deployment-595b5b9587-jrnfd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jrnfd webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-jrnfd a6aa59a7-3b60-4d5d-ad18-3f880bdd7d1b 61081 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9ceb7 0xc001e9ceb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 
19:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.084: INFO: Pod "webserver-deployment-595b5b9587-jwz2n" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jwz2n webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-jwz2n 21d902ab-7ea5-4739-a3bd-0293ceb93e43 61078 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9d007 0xc001e9d008}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNa
mespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 19:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.084: INFO: Pod "webserver-deployment-595b5b9587-l6hww" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l6hww webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-l6hww bb946011-9b40-4922-8340-7cf94a6d604d 60912 0 2020-01-11 19:53:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.1.201/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9d167 0xc001e9d168}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.201,StartTime:2020-01-11 19:53:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:53:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0b46362f4fc753ecd45d8006c72deae59f3b08f17926dc57afbe0a8a46ba521f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.084: INFO: Pod "webserver-deployment-595b5b9587-lz6f4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lz6f4 webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-lz6f4 2d7ff35b-1d5e-4fc8-94c2-4a6aadc7db9f 60913 0 2020-01-11 19:53:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.1.202/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9d2e7 0xc001e9d2e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*3
00,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.202,StartTime:2020-01-11 19:53:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:53:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1aec48a43eaecc500e0c0d53fe3fcb18e3ce4660b7acea74b122e794200f3ab2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.202,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.084: INFO: Pod "webserver-deployment-595b5b9587-nsnd7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nsnd7 webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-nsnd7 8a210df3-43eb-411e-a234-e16963af949c 60944 0 2020-01-11 19:53:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.1.206/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9d467 0xc001e9d468}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.206,StartTime:2020-01-11 19:53:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:53:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://526d52487684a14fa937ec883102102cfceec79db36ec7ff09290f1710eb7fae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.084: INFO: Pod "webserver-deployment-595b5b9587-ntg5v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ntg5v webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-ntg5v 552b5a0e-9320-4cf2-acff-4e636332707d 60926 0 2020-01-11 19:53:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.0.59/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9d5e7 0xc001e9d5e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:100.64.0.59,StartTime:2020-01-11 19:53:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:53:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b4adf25c61a17abfd1a9ab945e68d00a7e5068cce2f8864861d71fc4332a4379,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.0.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.084: INFO: Pod "webserver-deployment-595b5b9587-qn5gg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qn5gg webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-qn5gg df938588-8ead-4d1d-8a31-7a8aaef703c7 61033 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9d750 0xc001e9d751}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 
19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.085: INFO: Pod "webserver-deployment-595b5b9587-rs2dw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rs2dw webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-rs2dw f1e1fc86-af5c-49df-bbc8-ffd9cb8044a7 61105 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.0.68/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9d8a7 0xc001e9d8a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName
:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:,StartTime:2020-01-11 19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.085: INFO: Pod "webserver-deployment-595b5b9587-trwzs" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-trwzs webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-trwzs a23262b5-9666-4551-89cf-fe4e445e363b 60923 0 2020-01-11 19:53:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.0.60/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9da07 0xc001e9da08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:100.64.0.60,StartTime:2020-01-11 19:53:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:53:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d60f90b6bd6b6b2f1ea62ad0eeeceb1cabde2d6af0d222b446f191990192cb77,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.0.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.085: INFO: Pod "webserver-deployment-595b5b9587-xh6mf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xh6mf webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-xh6mf ebb84b4a-5df8-4d24-a2f3-884b1a551d37 60922 0 2020-01-11 19:53:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.0.61/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9db80 0xc001e9db81}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,
},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:100.64.0.61,StartTime:2020-01-11 19:53:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:53:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c1f3ba65c2b2d49a2112333aa321bc4ce710bafd07b4068d7fc65b7f6f78b661,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.0.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.085: INFO: Pod "webserver-deployment-595b5b9587-xn28j" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xn28j webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-xn28j a4e9509f-93aa-4b44-96b2-146b083171fd 61089 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.0.65/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9dcf0 0xc001e9dcf1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:,StartTime:2020-01-11 
19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.085: INFO: Pod "webserver-deployment-595b5b9587-xnq7x" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xnq7x webserver-deployment-595b5b9587- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-595b5b9587-xnq7x 831287a6-9906-4c94-a4dd-122e6f845624 60924 0 2020-01-11 19:53:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.0.62/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5b32ada0-5fbc-47c5-ba86-c6b9e3c2121e 0xc001e9de47 0xc001e9de48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Pr
iority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:100.64.0.62,StartTime:2020-01-11 19:53:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:53:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://9dc41ef71489ceddb5ecb2881dc56158a2090bedff6c526e2835d7324faeb6fd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.0.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.085: INFO: Pod "webserver-deployment-c7997dcc8-4wl6p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4wl6p webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-4wl6p f7c2ebf4-4c92-4163-82f5-ad9914dc9ad8 61040 0 2020-01-11 19:53:14 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:100.64.1.208/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc001e9dfc0 0xc001e9dfc1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 19:53:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.085: INFO: Pod "webserver-deployment-c7997dcc8-5klzx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5klzx webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-5klzx b86305e3-1504-4089-bda1-41fae442d541 61065 0 2020-01-11 19:53:14 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:100.64.1.209/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc002060137 0xc002060138}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.209,StartTime:2020-01-11 19:53:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login',},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.085: INFO: Pod "webserver-deployment-c7997dcc8-87gtb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-87gtb webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-87gtb f7b61103-5d29-4ac4-84e6-6b721ba52721 61070 0 2020-01-11 19:53:14 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:100.64.0.64/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc0020602e7 0xc0020602e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:100.64.0.64,StartTime:2020-01-11 19:53:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login',},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.0.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.086: INFO: Pod "webserver-deployment-c7997dcc8-8d2mv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8d2mv webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-8d2mv 71ea0e50-2a2b-4b62-8090-5799a4491dba 61037 0 2020-01-11 19:53:14 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:100.64.1.207/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc002060490 0xc002060491}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 19:53:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.086: INFO: Pod "webserver-deployment-c7997dcc8-8dht4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8dht4 webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-8dht4 1fdf7910-a92d-4ece-b352-755a28f765b9 61082 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc0020605f7 0xc0020605f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 19:53:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.086: INFO: Pod "webserver-deployment-c7997dcc8-bvgbb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bvgbb webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-bvgbb ac03eade-f851-4d9b-9955-a93a44e0a487 61108 0 2020-01-11 19:53:14 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:100.64.0.63/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc002060777 0xc002060778}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:100.64.0.63,StartTime:2020-01-11 19:53:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login',},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.0.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.086: INFO: Pod "webserver-deployment-c7997dcc8-clpn7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-clpn7 webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-clpn7 c87e88d5-ac25-4383-a496-eec08dfa5a2d 61068 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc002060910 0xc002060911}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:,StartTime:2020-01-11 19:53:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.086: INFO: Pod "webserver-deployment-c7997dcc8-csl7p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-csl7p webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-csl7p dc205353-3b1c-4c4f-a9c5-1bce5931284f 61069 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc002060a70 0xc002060a71}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,
EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 19:53:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.086: INFO: Pod "webserver-deployment-c7997dcc8-dxzs8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dxzs8 webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-dxzs8 06144c8f-11e9-4216-923d-69bd0e4cb448 61096 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:100.64.0.66/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc002060be7 0xc002060be8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:,StartTime:2020-01-11 19:53:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.086: INFO: Pod "webserver-deployment-c7997dcc8-jczsx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jczsx webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-jczsx a9dc24af-7174-46f7-88b1-58bdd65e634f 61106 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:100.64.1.211/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc002060d60 0xc002060d61}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.086: INFO: Pod "webserver-deployment-c7997dcc8-pfj9l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pfj9l webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-pfj9l a194a6fc-f611-40b7-b291-f69e53393976 61103 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:100.64.0.67/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc002060ed7 0xc002060ed8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:,StartTime:2020-01-11 19:53:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.087: INFO: Pod "webserver-deployment-c7997dcc8-pv2nv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pv2nv webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-pv2nv 6fb5b7fd-dcaf-4456-a27d-b3c2106cf35b 61066 0 2020-01-11 19:53:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc002061040 0xc002061041}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,E
nableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:,StartTime:2020-01-11 19:53:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 19:53:18.087: INFO: Pod "webserver-deployment-c7997dcc8-qzmt8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qzmt8 webserver-deployment-c7997dcc8- deployment-641 /api/v1/namespaces/deployment-641/pods/webserver-deployment-c7997dcc8-qzmt8 7b9d3170-d198-46b4-8734-eaea4eae66a4 61077 0 2020-01-11 19:53:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6c11ef4-b627-433b-9049-0f5f4894b64d 0xc0020611a0 0xc0020611a1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcm86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcm86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-7-77.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:53:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.7.77,PodIP:,StartTime:2020-01-11 19:53:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:18.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-641" for this suite. Jan 11 19:53:26.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:29.763: INFO: namespace deployment-641 deletion completed in 11.585684246s • [SLOW TEST:20.855 seconds] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:52:22.597: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4413 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should be able to switch session affinity for service with type clusterIP /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1807 STEP: creating service in namespace services-4413 STEP: creating service affinity-clusterip-transition in namespace services-4413 STEP: creating replication controller affinity-clusterip-transition in namespace services-4413 I0111 19:52:23.438507 8609 runners.go:184] Created replication controller with name: affinity-clusterip-transition, namespace: services-4413, replica count: 3 I0111 19:52:26.539122 8609 runners.go:184] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 19:52:26.717: INFO: Creating new exec pod Jan 11 19:52:29.989: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 11 19:52:31.264: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Jan 11 19:52:31.264: 
INFO: stdout: "" Jan 11 19:52:31.265: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c nc -zv -t -w 2 100.110.87.16 80' Jan 11 19:52:32.547: INFO: stderr: "+ nc -zv -t -w 2 100.110.87.16 80\nConnection to 100.110.87.16 80 port [tcp/http] succeeded!\n" Jan 11 19:52:32.547: INFO: stdout: "" Jan 11 19:52:32.727: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:34.043: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:34.043: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:34.043: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:36.044: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:37.433: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:37.433: INFO: stdout: "affinity-clusterip-transition-gqtp8" Jan 11 19:52:37.433: INFO: Received response from host: affinity-clusterip-transition-gqtp8 Jan 11 19:52:37.614: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:38.950: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:38.950: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:38.950: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:40.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:42.217: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:42.217: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:42.217: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:42.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:44.229: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:44.229: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:44.229: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:44.950: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:46.213: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:46.213: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:46.213: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:46.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:48.294: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:48.294: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:48.294: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:48.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:50.308: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:50.308: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:50.308: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:50.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:52.246: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:52.246: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:52.246: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:52.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:54.232: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:54.232: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:54.232: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:54.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:56.225: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:56.225: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:56.225: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:56.950: 
INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:52:58.162: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:52:58.162: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:52:58.162: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:52:58.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:53:00.225: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:53:00.225: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:53:00.225: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:53:00.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:53:02.275: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:53:02.275: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:53:02.275: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:53:02.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:53:04.243: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:53:04.243: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:53:04.243: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:53:04.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:53:06.239: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:53:06.239: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:53:06.239: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 19:53:06.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-4413 execpod-affinityz22vc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.110.87.16:80/' Jan 11 19:53:08.213: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.110.87.16:80/\n" Jan 11 19:53:08.213: INFO: stdout: "affinity-clusterip-transition-clk7h" Jan 11 19:53:08.213: INFO: Received response from host: affinity-clusterip-transition-clk7h Jan 11 
19:53:08.214: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4413, will wait for the garbage collector to delete the pods Jan 11 19:53:08.588: INFO: Deleting ReplicationController affinity-clusterip-transition took: 91.502837ms Jan 11 19:53:08.689: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.344192ms [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:24.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4413" for this suite. Jan 11 19:53:30.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:33.771: INFO: namespace services-4413 deletion completed in 9.593920087s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:71.173 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1807 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:15.204: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9767 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 19:53:18.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369198, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369198, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369198, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369198, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 19:53:21.521: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:53:21.610: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:23.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9767" for this suite. Jan 11 19:53:29.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:33.143: INFO: namespace webhook-9767 deletion completed in 9.595029164s STEP: Destroying namespace "webhook-9767-markers" for this suite. Jan 11 19:53:39.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:42.717: INFO: namespace webhook-9767-markers deletion completed in 9.573443488s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:27.873 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:14.850: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5644 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-link-bindmounted] 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 19:53:21.943: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5644 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-4b0f5796-123d-4e88-ac7c-cad7e7766b77-backend && mount --bind /tmp/local-volume-test-4b0f5796-123d-4e88-ac7c-cad7e7766b77-backend /tmp/local-volume-test-4b0f5796-123d-4e88-ac7c-cad7e7766b77-backend && ln -s /tmp/local-volume-test-4b0f5796-123d-4e88-ac7c-cad7e7766b77-backend /tmp/local-volume-test-4b0f5796-123d-4e88-ac7c-cad7e7766b77' Jan 11 19:53:23.264: INFO: stderr: "" Jan 11 19:53:23.264: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:53:23.264: INFO: Creating a PV followed by a PVC Jan 11 19:53:23.444: INFO: Waiting for PV local-pv78bfd to bind to PVC pvc-s8bwr Jan 11 19:53:23.444: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-s8bwr] to have phase Bound Jan 11 19:53:23.534: INFO: PersistentVolumeClaim pvc-s8bwr found but phase is Pending instead of Bound. Jan 11 19:53:25.624: INFO: PersistentVolumeClaim pvc-s8bwr found and phase=Bound (2.17939487s) Jan 11 19:53:25.624: INFO: Waiting up to 3m0s for PersistentVolume local-pv78bfd to have phase Bound Jan 11 19:53:25.714: INFO: PersistentVolume local-pv78bfd found and phase=Bound (90.113769ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:53:30.344: INFO: pod "security-context-d9c25bfe-bacd-4de2-b43a-ea14588b5c2f" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:53:30.344: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5644 security-context-d9c25bfe-bacd-4de2-b43a-ea14588b5c2f -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:53:31.626: INFO: stderr: "" Jan 11 19:53:31.627: INFO: stdout: "" Jan 11 19:53:31.627: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jan 11 19:53:31.627: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5644 security-context-d9c25bfe-bacd-4de2-b43a-ea14588b5c2f -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:53:32.961: INFO: stderr: "" Jan 11 19:53:32.961: INFO: stdout: "test-file-content\n" Jan 11 19:53:32.961: INFO: podRWCmdExec out: "test-file-content\n" err: [AfterEach] One pod requesting one prebound PVC 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-d9c25bfe-bacd-4de2-b43a-ea14588b5c2f in namespace persistent-local-volumes-test-5644 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:53:33.053: INFO: Deleting PersistentVolumeClaim "pvc-s8bwr" Jan 11 19:53:33.144: INFO: Deleting PersistentVolume "local-pv78bfd" STEP: Removing the test directory Jan 11 19:53:33.234: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5644 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-4b0f5796-123d-4e88-ac7c-cad7e7766b77 && umount /tmp/local-volume-test-4b0f5796-123d-4e88-ac7c-cad7e7766b77-backend && rm -r /tmp/local-volume-test-4b0f5796-123d-4e88-ac7c-cad7e7766b77-backend' Jan 11 19:53:34.586: INFO: stderr: "" Jan 11 19:53:34.586: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:34.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5644" for this suite. Jan 11 19:53:41.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:44.363: INFO: namespace persistent-local-volumes-test-5644 deletion completed in 9.592440683s • [SLOW TEST:29.512 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:15.571: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-3399 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-899970a5-1ad0-48e5-b5e8-ccdcaeadf3de" Jan 11 19:53:21.599: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3399 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-899970a5-1ad0-48e5-b5e8-ccdcaeadf3de && dd if=/dev/zero of=/tmp/local-volume-test-899970a5-1ad0-48e5-b5e8-ccdcaeadf3de/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-899970a5-1ad0-48e5-b5e8-ccdcaeadf3de/file' Jan 11 19:53:22.935: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0169125 s, 1.2 GB/s\n" Jan 11 19:53:22.935: INFO: stdout: "" Jan 11 19:53:22.935: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3399 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-899970a5-1ad0-48e5-b5e8-ccdcaeadf3de/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:53:25.594: INFO: stderr: "" Jan 11 19:53:25.594: INFO: stdout: "/dev/loop0\n" STEP: Creating local PVCs and PVs Jan 11 19:53:25.594: INFO: Creating a PV followed by a PVC Jan 11 19:53:25.774: INFO: Waiting for PV local-pvgpb8j to bind to PVC pvc-8rzpz Jan 11 19:53:25.774: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8rzpz] to have phase Bound Jan 11 19:53:25.864: INFO: PersistentVolumeClaim pvc-8rzpz found and phase=Bound (89.976258ms) Jan 11 19:53:25.864: INFO: Waiting up to 3m0s for PersistentVolume local-pvgpb8j to have phase Bound Jan 11 19:53:25.954: INFO: PersistentVolume local-pvgpb8j found and phase=Bound (89.906112ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jan 11 19:53:30.584: INFO: pod "security-context-55f445fe-9132-4e30-a764-e0b160adf7c5" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 19:53:30.584: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3399 security-context-55f445fe-9132-4e30-a764-e0b160adf7c5 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 19:53:31.913: INFO: stderr: "" Jan 11 19:53:31.913: INFO: stdout: "" Jan 11 19:53:31.913: INFO: podRWCmdExec out: "" err: Jan 11 19:53:31.913: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3399 security-context-55f445fe-9132-4e30-a764-e0b160adf7c5 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:53:33.312: INFO: stderr: "" Jan 11 19:53:33.312: INFO: stdout: "test-file-content\n" Jan 11 19:53:33.312: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-55f445fe-9132-4e30-a764-e0b160adf7c5 in namespace persistent-local-volumes-test-3399 STEP: Creating pod2 STEP: Creating a pod Jan 11 19:53:35.853: INFO: pod "security-context-9e5b1ae4-35d2-4c1c-bbd8-6456bdf44a85" created on Node "ip-10-250-27-25.ec2.internal" STEP: Reading in pod2 Jan 11 19:53:35.853: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3399 security-context-9e5b1ae4-35d2-4c1c-bbd8-6456bdf44a85 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 19:53:37.124: INFO: stderr: "" Jan 11 19:53:37.125: INFO: stdout: "test-file-content\n" Jan 11 19:53:37.125: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod2 STEP: Deleting pod security-context-9e5b1ae4-35d2-4c1c-bbd8-6456bdf44a85 in namespace persistent-local-volumes-test-3399 [AfterEach] [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 19:53:37.216: INFO: Deleting PersistentVolumeClaim "pvc-8rzpz" Jan 11 19:53:37.307: INFO: Deleting PersistentVolume "local-pvgpb8j" Jan 11 19:53:37.398: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3399 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-899970a5-1ad0-48e5-b5e8-ccdcaeadf3de/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 19:53:38.713: INFO: stderr: "" Jan 11 19:53:38.713: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-899970a5-1ad0-48e5-b5e8-ccdcaeadf3de/file Jan 11 19:53:38.713: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3399 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 19:53:40.047: INFO: stderr: "" Jan 11 19:53:40.047: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-899970a5-1ad0-48e5-b5e8-ccdcaeadf3de Jan 11 19:53:40.047: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3399 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-899970a5-1ad0-48e5-b5e8-ccdcaeadf3de' Jan 11 19:53:41.315: INFO: stderr: "" Jan 11 
19:53:41.315: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:41.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3399" for this suite. Jan 11 19:53:47.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:53:51.084: INFO: namespace persistent-local-volumes-test-3399 deletion completed in 9.587038984s • [SLOW TEST:35.513 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:43.081: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1847 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating the pod Jan 11 19:53:46.860: INFO: Successfully updated pod "labelsupdate408b13c6-acd6-40e1-b582-70203e36ef0f" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:49.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1847" for this suite. 
Jan 11 19:54:01.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:54:04.755: INFO: namespace projected-1847 deletion completed in 15.609307566s • [SLOW TEST:21.674 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:44.372: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3432 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3432 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3432 I0111 19:53:45.378201 8610 runners.go:184] Created replication controller with name: externalname-service, namespace: services-3432, replica count: 2 I0111 19:53:48.478918 8610 runners.go:184] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 19:53:48.478: INFO: Creating new exec pod Jan 11 19:53:51.842: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3432 execpod48njp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 11 19:53:53.105: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jan 11 19:53:53.105: INFO: stdout: "" Jan 11 19:53:53.106: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3432 execpod48njp -- /bin/sh -x -c nc -zv -t -w 2 100.109.111.210 80' Jan 11 19:53:54.609: INFO: stderr: "+ nc -zv -t -w 2 100.109.111.210 80\nConnection to 100.109.111.210 80 port [tcp/http] succeeded!\n" Jan 11 19:53:54.609: INFO: stdout: "" Jan 11 19:53:54.609: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3432 execpod48njp -- /bin/sh -x -c nc -zv -t -w 2 
10.250.27.25 31921' Jan 11 19:53:56.014: INFO: stderr: "+ nc -zv -t -w 2 10.250.27.25 31921\nConnection to 10.250.27.25 31921 port [tcp/31921] succeeded!\n" Jan 11 19:53:56.015: INFO: stdout: "" Jan 11 19:53:56.015: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3432 execpod48njp -- /bin/sh -x -c nc -zv -t -w 2 10.250.7.77 31921' Jan 11 19:53:57.333: INFO: stderr: "+ nc -zv -t -w 2 10.250.7.77 31921\nConnection to 10.250.7.77 31921 port [tcp/31921] succeeded!\n" Jan 11 19:53:57.333: INFO: stdout: "" Jan 11 19:53:57.333: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:57.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3432" for this suite. Jan 11 19:54:03.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:54:07.142: INFO: namespace services-3432 deletion completed in 9.618816348s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:22.770 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:29.769: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-2501 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should function for client IP based session affinity: udp /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:222 STEP: Performing setup for networking test in namespace nettest-2501 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 19:53:30.490: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: Getting node addresses Jan 11 19:53:53.938: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 19:53:54.121: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:53:54.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-2501" for this suite. Jan 11 19:54:08.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:54:11.814: INFO: namespace nettest-2501 deletion completed in 17.601853984s S [SKIPPING] [42.046 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should function for client IP based session affinity: udp [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:222 Requires at least 2 nodes (not -1) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597 ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:54:07.152: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-5296 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:54:21.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5296" for this suite. Jan 11 19:54:28.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:54:31.484: INFO: namespace resourcequota-5296 deletion completed in 9.589382042s • [SLOW TEST:24.333 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:54:11.817: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6368 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 19:54:13.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369253, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369253, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369253, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369253, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 19:54:16.863: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:54:16.954: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5313-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:54:17.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6368" for this suite. Jan 11 19:54:24.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:54:27.689: INFO: namespace webhook-6368 deletion completed in 9.598510238s STEP: Destroying namespace "webhook-6368-markers" for this suite. 
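
The ResourceQuota conformance test above (namespace resourcequota-5296) creates a quota, admits a pod that fits it, rejects pods that would exceed the remaining quota, and checks that status.used tracks the pod's lifecycle. The kubectl sketch below reproduces that flow only in outline; it is not taken from the test source, the names (quota-demo, pod-quota, fits-quota) and the nginx image are illustrative, and it assumes a reachable cluster via the current kubeconfig.

kubectl create namespace quota-demo

cat <<'EOF' | kubectl apply -n quota-demo -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
spec:
  hard:
    pods: "1"
    requests.cpu: 500m
    requests.memory: 512Mi
    limits.cpu: "1"
    limits.memory: 1Gi
EOF

# A pod whose requests/limits fit the remaining quota is admitted ...
cat <<'EOF' | kubectl apply -n quota-demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: fits-quota
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests: {cpu: 100m, memory: 128Mi}
      limits: {cpu: 200m, memory: 256Mi}
EOF

# ... while a second pod would be rejected by the quota admission plugin,
# because hard.pods is already used up.
kubectl get resourcequota pod-quota -n quota-demo -o yaml   # status.used reflects the pod

Because the quota also caps requests and limits, every pod created in that namespace must declare them or be rejected outright, which is the "exceeds remaining quota" case the test exercises.
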
Jan 11 19:54:33.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:54:37.287: INFO: namespace webhook-6368-markers deletion completed in 9.597517097s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:25.832 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:31.278: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8415 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:247 STEP: Creating pod liveness-d9c04d87-22d3-4723-91d9-3bcb6c488d03 in namespace container-probe-8415 Jan 11 19:50:34.226: INFO: Started pod liveness-d9c04d87-22d3-4723-91d9-3bcb6c488d03 in namespace container-probe-8415 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 19:50:34.316: INFO: Initial restart count of pod liveness-d9c04d87-22d3-4723-91d9-3bcb6c488d03 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:54:35.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8415" for this suite. 
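
The container-probe test above (pod liveness-d9c04d87-… in container-probe-8415) starts a pod with an HTTP liveness probe whose target answers with a redirect to another host, then watches for several minutes and asserts that restartCount stays at 0. As a rough, hedged illustration of the probe shape only, the manifest below wires up a plain httpGet liveness probe; it does not reproduce the redirect-serving e2e test image, and the pod name and nginx image are made up for the example.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
EOF

# The test's assertion boils down to this value never increasing while the
# pod is observed.
kubectl get pod liveness-http \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
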
Jan 11 19:54:41.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:54:45.004: INFO: namespace container-probe-8415 deletion completed in 9.58054671s • [SLOW TEST:253.726 seconds] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should *not* be restarted with a non-local redirect http liveness probe /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:247 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:54:45.020: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5766 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image from docker hub [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:378 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:54:48.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5766" for this suite. 
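
The Container Runtime test above (container-runtime-5766) follows the three steps visible in the log: create the container, check the container status, delete the container, using an image pulled from Docker Hub. A hedged kubectl equivalent is sketched below; the pod name and the busybox tag are illustrative, and kubectl wait is assumed to be available on the client.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pull-from-dockerhub
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF

# "check the container status": wait for the image pull and container start,
# then inspect the container state reported by the kubelet.
kubectl wait --for=condition=Ready pod/pull-from-dockerhub --timeout=120s
kubectl get pod pull-from-dockerhub \
  -o jsonpath='{.status.containerStatuses[0].state}'

# "delete the container": deleting the pod tears the container down again.
kubectl delete pod pull-from-dockerhub
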
Jan 11 19:54:54.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:54:57.867: INFO: namespace container-runtime-5766 deletion completed in 9.577278692s • [SLOW TEST:12.847 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 when running a container with a new image /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:252 should be able to pull image from docker hub [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:378 ------------------------------ SSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:54:57.876: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-4233 STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating multiple subpath from same volumes [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:277 Jan 11 19:54:58.514: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:54:58.605: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpath-vft6 STEP: Creating a pod to test multi_subpath Jan 11 19:54:58.697: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpath-vft6" in namespace "provisioning-4233" to be "success or failure" Jan 11 19:54:58.786: INFO: Pod "pod-subpath-test-hostpath-vft6": Phase="Pending", Reason="", readiness=false. Elapsed: 89.325858ms Jan 11 19:55:00.877: INFO: Pod "pod-subpath-test-hostpath-vft6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1806027s Jan 11 19:55:02.967: INFO: Pod "pod-subpath-test-hostpath-vft6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.270648881s STEP: Saw pod success Jan 11 19:55:02.968: INFO: Pod "pod-subpath-test-hostpath-vft6" satisfied condition "success or failure" Jan 11 19:55:03.058: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-hostpath-vft6 container test-container-subpath-hostpath-vft6: STEP: delete the pod Jan 11 19:55:03.248: INFO: Waiting for pod pod-subpath-test-hostpath-vft6 to disappear Jan 11 19:55:03.337: INFO: Pod pod-subpath-test-hostpath-vft6 no longer exists STEP: Deleting pod Jan 11 19:55:03.337: INFO: Deleting pod "pod-subpath-test-hostpath-vft6" in namespace "provisioning-4233" Jan 11 19:55:03.427: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:03.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-4233" for this suite. Jan 11 19:55:09.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:55:13.093: INFO: namespace provisioning-4233 deletion completed in 9.576311012s • [SLOW TEST:15.217 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support creating multiple subpath from same volumes [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:277 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:50:04.493: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1216 STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:376 STEP: Creating the pod [AfterEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:05.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1216" for this suite. 
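
The Secrets test above (secrets-1216) creates a pod whose volume references a secret that does not exist and is not optional, then waits out the [Slow] budget to confirm the pod never starts. A minimal sketch of that situation follows; the secret name is deliberately missing, and the pod name and busybox image are illustrative.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: missing-secret-pod
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
  volumes:
  - name: secret-vol
    secret:
      secretName: does-not-exist
      optional: false          # non-optional: the volume cannot be set up
EOF

# The API accepts the pod, but it never becomes Ready: the kubelet keeps it in
# ContainerCreating and records MountVolume.SetUp failures in its events.
kubectl describe pod missing-secret-pod
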
Jan 11 19:55:17.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:55:21.262: INFO: namespace secrets-1216 deletion completed in 15.569890291s • [SLOW TEST:316.769 seconds] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:376 ------------------------------ S ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:49:55.448: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename cronjob STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-516 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:55 [It] should not schedule new jobs when ForbidConcurrent [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:107 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:10.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-516" for this suite. 
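
The CronJob test above (cronjob-516) creates a ForbidConcurrent cron job whose job outlives the schedule interval, then checks that exactly one job is ever active and no further jobs are scheduled while it runs. A sketch with kubectl follows; note the cluster in this log is v1.16, which serves CronJob under batch/v1beta1, while the manifest below uses the batch/v1 form of current clusters. The name, image and the 5-minute sleep are illustrative.

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: forbid-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid      # skip new runs while the previous Job is active
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: work
            image: busybox:1.29
            command: ["sh", "-c", "sleep 300"]   # deliberately outlives the interval
EOF

# .status.active lists the currently running Job(s); with Forbid it never
# grows past one entry, and overlapping scheduled runs are skipped.
kubectl get cronjob forbid-demo -o jsonpath='{.status.active}'
kubectl get jobs
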
Jan 11 19:55:19.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:55:22.494: INFO: namespace cronjob-516 deletion completed in 11.594029816s • [SLOW TEST:327.046 seconds] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:107 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:54:37.659: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-8663 STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:546 STEP: deploying csi mock driver Jan 11 19:54:38.491: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8663/csi-attacher Jan 11 19:54:38.582: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8663 Jan 11 19:54:38.582: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8663 Jan 11 19:54:38.672: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8663 Jan 11 19:54:38.762: INFO: creating *v1.Role: csi-mock-volumes-8663/external-attacher-cfg-csi-mock-volumes-8663 Jan 11 19:54:38.852: INFO: creating *v1.RoleBinding: csi-mock-volumes-8663/csi-attacher-role-cfg Jan 11 19:54:38.943: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8663/csi-provisioner Jan 11 19:54:39.033: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8663 Jan 11 19:54:39.033: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8663 Jan 11 19:54:39.123: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8663 Jan 11 19:54:39.213: INFO: creating *v1.Role: csi-mock-volumes-8663/external-provisioner-cfg-csi-mock-volumes-8663 Jan 11 19:54:39.303: INFO: creating *v1.RoleBinding: csi-mock-volumes-8663/csi-provisioner-role-cfg Jan 11 19:54:39.393: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8663/csi-resizer Jan 11 19:54:39.484: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8663 Jan 11 19:54:39.484: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8663 Jan 11 19:54:39.574: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8663 Jan 11 19:54:39.664: INFO: creating *v1.Role: csi-mock-volumes-8663/external-resizer-cfg-csi-mock-volumes-8663 Jan 11 19:54:39.754: INFO: creating *v1.RoleBinding: csi-mock-volumes-8663/csi-resizer-role-cfg Jan 11 19:54:39.845: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8663/csi-mock Jan 11 19:54:39.935: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8663 Jan 11 19:54:40.025: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-provisioner-role-csi-mock-volumes-8663 Jan 11 19:54:40.116: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8663 Jan 11 19:54:40.205: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8663 Jan 11 19:54:40.296: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8663 Jan 11 19:54:40.386: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8663 Jan 11 19:54:40.477: INFO: creating *v1.StatefulSet: csi-mock-volumes-8663/csi-mockplugin Jan 11 19:54:40.568: INFO: creating *v1.StatefulSet: csi-mock-volumes-8663/csi-mockplugin-attacher Jan 11 19:54:40.658: INFO: creating *v1.StatefulSet: csi-mock-volumes-8663/csi-mockplugin-resizer STEP: Creating pod Jan 11 19:54:40.928: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:54:41.020: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-brwh5] to have phase Bound Jan 11 19:54:41.110: INFO: PersistentVolumeClaim pvc-brwh5 found but phase is Pending instead of Bound. Jan 11 19:54:43.200: INFO: PersistentVolumeClaim pvc-brwh5 found but phase is Pending instead of Bound. Jan 11 19:54:45.290: INFO: PersistentVolumeClaim pvc-brwh5 found but phase is Pending instead of Bound. Jan 11 19:54:47.381: INFO: PersistentVolumeClaim pvc-brwh5 found and phase=Bound (6.361111723s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-xx2bc Jan 11 19:55:08.284: INFO: Deleting pod "pvc-volume-tester-xx2bc" in namespace "csi-mock-volumes-8663" Jan 11 19:55:08.375: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xx2bc" to be fully deleted STEP: Deleting claim pvc-brwh5 Jan 11 19:55:10.736: INFO: Waiting up to 2m0s for PersistentVolume pvc-762f0f57-23ee-423b-babd-8bb768c57f18 to get deleted Jan 11 19:55:10.826: INFO: PersistentVolume pvc-762f0f57-23ee-423b-babd-8bb768c57f18 found and phase=Released (89.757804ms) Jan 11 19:55:12.916: INFO: PersistentVolume pvc-762f0f57-23ee-423b-babd-8bb768c57f18 found and phase=Released (2.180029324s) Jan 11 19:55:15.007: INFO: PersistentVolume pvc-762f0f57-23ee-423b-babd-8bb768c57f18 found and phase=Released (4.270242302s) Jan 11 19:55:17.097: INFO: PersistentVolume pvc-762f0f57-23ee-423b-babd-8bb768c57f18 was removed STEP: Deleting storageclass csi-mock-volumes-8663-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 19:55:17.188: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8663/csi-attacher Jan 11 19:55:17.279: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8663 Jan 11 19:55:17.370: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8663 Jan 11 19:55:17.462: INFO: deleting *v1.Role: csi-mock-volumes-8663/external-attacher-cfg-csi-mock-volumes-8663 Jan 11 19:55:17.553: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8663/csi-attacher-role-cfg Jan 11 19:55:17.645: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8663/csi-provisioner Jan 11 19:55:17.736: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8663 Jan 11 19:55:17.827: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8663 Jan 11 19:55:17.917: INFO: deleting *v1.Role: csi-mock-volumes-8663/external-provisioner-cfg-csi-mock-volumes-8663 Jan 11 19:55:18.009: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8663/csi-provisioner-role-cfg Jan 
11 19:55:18.100: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8663/csi-resizer Jan 11 19:55:18.191: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8663 Jan 11 19:55:18.283: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8663 Jan 11 19:55:18.374: INFO: deleting *v1.Role: csi-mock-volumes-8663/external-resizer-cfg-csi-mock-volumes-8663 Jan 11 19:55:18.465: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8663/csi-resizer-role-cfg Jan 11 19:55:18.557: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8663/csi-mock Jan 11 19:55:18.649: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8663 Jan 11 19:55:18.742: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8663 Jan 11 19:55:18.833: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8663 Jan 11 19:55:18.925: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8663 Jan 11 19:55:19.016: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8663 Jan 11 19:55:19.107: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8663 Jan 11 19:55:19.198: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8663/csi-mockplugin Jan 11 19:55:19.289: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8663/csi-mockplugin-attacher Jan 11 19:55:19.380: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8663/csi-mockplugin-resizer [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:19.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-8663" for this suite. 
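
The CSI mock volume test above (csi-mock-volumes-8663) provisions a PVC through a mock CSI driver, expands it while the pod is running ("Expanding current pvc", then waiting for the PV and PVC resize to finish), and verifies the pod is never restarted. Outside the e2e harness the same online-expansion flow looks roughly like the sketch below; the StorageClass name expandable-sc is a stand-in for any class with allowVolumeExpansion: true whose driver supports node expansion, and the claim name is illustrative.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: expandable-sc
  resources:
    requests:
      storage: 1Gi
EOF

# "Expanding current pvc": bump spec.resources.requests.storage while a pod is
# using the claim; with online expansion no pod restart is needed.
kubectl patch pvc data -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

# "Waiting for PVC resize to finish": status.capacity catches up once the
# controller and node expansion steps complete.
kubectl get pvc data -o jsonpath='{.status.capacity.storage}'
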
Jan 11 19:55:31.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:55:35.163: INFO: namespace csi-mock-volumes-8663 deletion completed in 15.599045898s • [SLOW TEST:57.504 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:531 should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:546 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:54:04.766: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4962 STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-upd-aaf114cc-2d59-4b09-ae17-59f5a5c61ee8 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-aaf114cc-2d59-4b09-ae17-59f5a5c61ee8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:21.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4962" for this suite. 
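
The ConfigMap test above (configmap-4962) mounts a ConfigMap as a volume, updates the ConfigMap, and waits for the new value to show up in the running pod's file. A hedged kubectl sketch of that loop follows; the names, the busybox image and the --dry-run=client re-apply step (recent kubectl) are illustrative, not taken from the test.

kubectl create configmap live-config --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-watcher
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: live-config
EOF

# "Updating configmap": change the key in place; the kubelet refreshes the
# projected file on its sync loop, so the new value appears in the running
# container after a short delay, without a pod restart.
kubectl create configmap live-config --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl logs configmap-watcher --tail=5
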
Jan 11 19:55:33.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:55:37.262: INFO: namespace configmap-4962 deletion completed in 15.564723118s • [SLOW TEST:92.496 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:55:13.120: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-3620 STEP: Waiting for a default service account to be provisioned in namespace [It] should preserve attachment policy when no CSIDriver present /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:263 STEP: deploying csi mock driver Jan 11 19:55:13.944: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3620/csi-attacher Jan 11 19:55:14.034: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3620 Jan 11 19:55:14.034: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3620 Jan 11 19:55:14.124: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3620 Jan 11 19:55:14.213: INFO: creating *v1.Role: csi-mock-volumes-3620/external-attacher-cfg-csi-mock-volumes-3620 Jan 11 19:55:14.316: INFO: creating *v1.RoleBinding: csi-mock-volumes-3620/csi-attacher-role-cfg Jan 11 19:55:14.406: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3620/csi-provisioner Jan 11 19:55:14.496: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3620 Jan 11 19:55:14.496: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3620 Jan 11 19:55:14.585: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3620 Jan 11 19:55:14.675: INFO: creating *v1.Role: csi-mock-volumes-3620/external-provisioner-cfg-csi-mock-volumes-3620 Jan 11 19:55:14.765: INFO: creating *v1.RoleBinding: csi-mock-volumes-3620/csi-provisioner-role-cfg Jan 11 19:55:14.854: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3620/csi-resizer Jan 11 19:55:14.944: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3620 Jan 11 19:55:14.944: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3620 Jan 11 19:55:15.034: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3620 Jan 11 19:55:15.124: INFO: creating *v1.Role: csi-mock-volumes-3620/external-resizer-cfg-csi-mock-volumes-3620 Jan 11 19:55:15.214: INFO: creating *v1.RoleBinding: csi-mock-volumes-3620/csi-resizer-role-cfg Jan 11 19:55:15.304: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3620/csi-mock Jan 11 19:55:15.394: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3620 Jan 11 19:55:15.483: INFO: creating 
*v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3620 Jan 11 19:55:15.573: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3620 Jan 11 19:55:15.663: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3620 Jan 11 19:55:15.753: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3620 Jan 11 19:55:15.842: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3620 Jan 11 19:55:15.932: INFO: creating *v1.StatefulSet: csi-mock-volumes-3620/csi-mockplugin Jan 11 19:55:16.022: INFO: creating *v1.StatefulSet: csi-mock-volumes-3620/csi-mockplugin-attacher STEP: Creating pod Jan 11 19:55:16.291: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:55:16.382: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-dmkq9] to have phase Bound Jan 11 19:55:16.471: INFO: PersistentVolumeClaim pvc-dmkq9 found but phase is Pending instead of Bound. Jan 11 19:55:18.561: INFO: PersistentVolumeClaim pvc-dmkq9 found but phase is Pending instead of Bound. Jan 11 19:55:20.651: INFO: PersistentVolumeClaim pvc-dmkq9 found and phase=Bound (4.268470642s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-4xcj2 Jan 11 19:55:31.385: INFO: Deleting pod "pvc-volume-tester-4xcj2" in namespace "csi-mock-volumes-3620" Jan 11 19:55:31.475: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4xcj2" to be fully deleted STEP: Deleting claim pvc-dmkq9 Jan 11 19:55:35.835: INFO: Waiting up to 2m0s for PersistentVolume pvc-0cf2323b-1302-4915-83bc-20d3e50fd844 to get deleted Jan 11 19:55:35.924: INFO: PersistentVolume pvc-0cf2323b-1302-4915-83bc-20d3e50fd844 was removed STEP: Deleting storageclass csi-mock-volumes-3620-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 19:55:36.015: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3620/csi-attacher Jan 11 19:55:36.105: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3620 Jan 11 19:55:36.195: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3620 Jan 11 19:55:36.286: INFO: deleting *v1.Role: csi-mock-volumes-3620/external-attacher-cfg-csi-mock-volumes-3620 Jan 11 19:55:36.376: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3620/csi-attacher-role-cfg Jan 11 19:55:36.466: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3620/csi-provisioner Jan 11 19:55:36.557: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3620 Jan 11 19:55:36.647: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3620 Jan 11 19:55:36.737: INFO: deleting *v1.Role: csi-mock-volumes-3620/external-provisioner-cfg-csi-mock-volumes-3620 Jan 11 19:55:36.828: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3620/csi-provisioner-role-cfg Jan 11 19:55:36.918: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3620/csi-resizer Jan 11 19:55:37.009: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3620 Jan 11 19:55:37.099: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3620 Jan 11 19:55:37.189: INFO: deleting *v1.Role: csi-mock-volumes-3620/external-resizer-cfg-csi-mock-volumes-3620 Jan 11 19:55:37.280: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3620/csi-resizer-role-cfg Jan 11 19:55:37.370: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3620/csi-mock Jan 11 19:55:37.460: INFO: deleting 
*v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3620 Jan 11 19:55:37.551: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3620 Jan 11 19:55:37.641: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3620 Jan 11 19:55:37.733: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3620 Jan 11 19:55:37.823: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3620 Jan 11 19:55:37.915: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3620 Jan 11 19:55:38.006: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3620/csi-mockplugin Jan 11 19:55:38.097: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3620/csi-mockplugin-attacher [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:38.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-3620" for this suite. Jan 11 19:55:44.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:55:47.847: INFO: namespace csi-mock-volumes-3620 deletion completed in 9.567118426s • [SLOW TEST:34.727 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:241 should preserve attachment policy when no CSIDriver present /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:263 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:55:47.860: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4123 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with secret that has name projected-secret-test-040e3228-2f09-4df1-b1e0-04c77d361cb4 STEP: Creating a pod to test consume secrets Jan 11 19:55:48.678: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f40120dc-babd-4d6c-98ac-0ab41875706c" in namespace "projected-4123" to be "success or failure" Jan 11 19:55:48.767: INFO: Pod "pod-projected-secrets-f40120dc-babd-4d6c-98ac-0ab41875706c": Phase="Pending", Reason="", readiness=false. Elapsed: 89.564528ms Jan 11 19:55:50.857: INFO: Pod "pod-projected-secrets-f40120dc-babd-4d6c-98ac-0ab41875706c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179511262s STEP: Saw pod success Jan 11 19:55:50.857: INFO: Pod "pod-projected-secrets-f40120dc-babd-4d6c-98ac-0ab41875706c" satisfied condition "success or failure" Jan 11 19:55:50.947: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-secrets-f40120dc-babd-4d6c-98ac-0ab41875706c container projected-secret-volume-test: STEP: delete the pod Jan 11 19:55:51.136: INFO: Waiting for pod pod-projected-secrets-f40120dc-babd-4d6c-98ac-0ab41875706c to disappear Jan 11 19:55:51.226: INFO: Pod pod-projected-secrets-f40120dc-babd-4d6c-98ac-0ab41875706c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:51.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4123" for this suite. Jan 11 19:55:57.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:00.897: INFO: namespace projected-4123 deletion completed in 9.580422794s • [SLOW TEST:13.037 seconds] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:55:22.496: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5792 STEP: Waiting for a default service account to be provisioned in namespace [It] should support file as subpath [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:213 Jan 11 19:55:23.153: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 19:55:23.337: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5792" in namespace "provisioning-5792" to be "success or failure" Jan 11 19:55:23.427: INFO: Pod "hostpath-symlink-prep-provisioning-5792": Phase="Pending", Reason="", readiness=false. Elapsed: 89.930831ms Jan 11 19:55:25.517: INFO: Pod "hostpath-symlink-prep-provisioning-5792": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180216587s STEP: Saw pod success Jan 11 19:55:25.517: INFO: Pod "hostpath-symlink-prep-provisioning-5792" satisfied condition "success or failure" Jan 11 19:55:25.517: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5792" in namespace "provisioning-5792" Jan 11 19:55:25.610: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5792" to be fully deleted Jan 11 19:55:25.700: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-76lc STEP: Creating a pod to test atomic-volume-subpath Jan 11 19:55:25.791: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpathsymlink-76lc" in namespace "provisioning-5792" to be "success or failure" Jan 11 19:55:25.881: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Pending", Reason="", readiness=false. Elapsed: 90.032952ms Jan 11 19:55:27.972: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180710871s Jan 11 19:55:30.062: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Running", Reason="", readiness=true. Elapsed: 4.271103153s Jan 11 19:55:32.153: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Running", Reason="", readiness=true. Elapsed: 6.361616185s Jan 11 19:55:34.243: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Running", Reason="", readiness=true. Elapsed: 8.452050627s Jan 11 19:55:36.333: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Running", Reason="", readiness=true. Elapsed: 10.542584622s Jan 11 19:55:38.424: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Running", Reason="", readiness=true. Elapsed: 12.632811833s Jan 11 19:55:40.514: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Running", Reason="", readiness=true. Elapsed: 14.722857863s Jan 11 19:55:42.604: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Running", Reason="", readiness=true. Elapsed: 16.812936372s Jan 11 19:55:44.694: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Running", Reason="", readiness=true. Elapsed: 18.903033959s Jan 11 19:55:46.784: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Running", Reason="", readiness=true. Elapsed: 20.993282686s Jan 11 19:55:48.874: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.083571857s STEP: Saw pod success Jan 11 19:55:48.875: INFO: Pod "pod-subpath-test-hostpathsymlink-76lc" satisfied condition "success or failure" Jan 11 19:55:48.965: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpathsymlink-76lc container test-container-subpath-hostpathsymlink-76lc: STEP: delete the pod Jan 11 19:55:49.158: INFO: Waiting for pod pod-subpath-test-hostpathsymlink-76lc to disappear Jan 11 19:55:49.247: INFO: Pod pod-subpath-test-hostpathsymlink-76lc no longer exists STEP: Deleting pod pod-subpath-test-hostpathsymlink-76lc Jan 11 19:55:49.247: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-76lc" in namespace "provisioning-5792" STEP: Deleting pod Jan 11 19:55:49.337: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-76lc" in namespace "provisioning-5792" Jan 11 19:55:49.518: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5792" in namespace "provisioning-5792" to be "success or failure" Jan 11 19:55:49.608: INFO: Pod "hostpath-symlink-prep-provisioning-5792": Phase="Pending", Reason="", readiness=false. 
Elapsed: 89.947785ms Jan 11 19:55:51.699: INFO: Pod "hostpath-symlink-prep-provisioning-5792": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180281378s STEP: Saw pod success Jan 11 19:55:51.699: INFO: Pod "hostpath-symlink-prep-provisioning-5792" satisfied condition "success or failure" Jan 11 19:55:51.699: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5792" in namespace "provisioning-5792" Jan 11 19:55:51.793: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5792" to be fully deleted Jan 11 19:55:51.884: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:51.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-5792" for this suite. Jan 11 19:55:58.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:01.568: INFO: namespace provisioning-5792 deletion completed in 9.591851189s • [SLOW TEST:39.072 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support file as subpath [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:213 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:55:35.178: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pvc-protection STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pvc-protection-174 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:45 Jan 11 19:55:35.820: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Jan 11 19:55:36.001: INFO: Default storage class: "default" Jan 11 19:55:36.001: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Jan 11 19:55:50.454: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protection2l2pl] to have phase Bound Jan 11 19:55:50.544: INFO: PersistentVolumeClaim pvc-protection2l2pl found and phase=Bound (90.401602ms) STEP: Checking that PVC Protection finalizer is set [It] Verify that PVC in active use by a pod is not removed 
immediately /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:98 STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod STEP: Checking that the PVC status is Terminating STEP: Deleting the pod that uses the PVC Jan 11 19:55:50.815: INFO: Deleting pod "pvc-tester-bwg5j" in namespace "pvc-protection-174" Jan 11 19:55:50.906: INFO: Wait up to 5m0s for pod "pvc-tester-bwg5j" to be fully deleted STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod Jan 11 19:55:59.088: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protection2l2pl to be removed Jan 11 19:55:59.177: INFO: Claim "pvc-protection2l2pl" in namespace "pvc-protection-174" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:59.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-174" for this suite. Jan 11 19:56:05.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:08.866: INFO: namespace pvc-protection-174 deletion completed in 9.597640217s [AfterEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:80 • [SLOW TEST:33.688 seconds] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PVC in active use by a pod is not removed immediately /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:98 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:55:37.275: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-4249 STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:546 STEP: deploying csi mock driver Jan 11 19:55:38.096: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4249/csi-attacher Jan 11 19:55:38.186: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4249 Jan 11 19:55:38.186: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4249 Jan 11 19:55:38.276: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4249 Jan 11 19:55:38.365: INFO: creating *v1.Role: csi-mock-volumes-4249/external-attacher-cfg-csi-mock-volumes-4249 Jan 11 19:55:38.455: INFO: creating *v1.RoleBinding: csi-mock-volumes-4249/csi-attacher-role-cfg Jan 
11 19:55:38.544: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4249/csi-provisioner Jan 11 19:55:38.633: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4249 Jan 11 19:55:38.634: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4249 Jan 11 19:55:38.723: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4249 Jan 11 19:55:38.812: INFO: creating *v1.Role: csi-mock-volumes-4249/external-provisioner-cfg-csi-mock-volumes-4249 Jan 11 19:55:38.902: INFO: creating *v1.RoleBinding: csi-mock-volumes-4249/csi-provisioner-role-cfg Jan 11 19:55:38.992: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4249/csi-resizer Jan 11 19:55:39.081: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4249 Jan 11 19:55:39.081: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4249 Jan 11 19:55:39.171: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4249 Jan 11 19:55:39.260: INFO: creating *v1.Role: csi-mock-volumes-4249/external-resizer-cfg-csi-mock-volumes-4249 Jan 11 19:55:39.350: INFO: creating *v1.RoleBinding: csi-mock-volumes-4249/csi-resizer-role-cfg Jan 11 19:55:39.439: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4249/csi-mock Jan 11 19:55:39.529: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4249 Jan 11 19:55:39.618: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4249 Jan 11 19:55:39.708: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4249 Jan 11 19:55:39.798: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4249 Jan 11 19:55:39.887: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4249 Jan 11 19:55:39.977: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4249 Jan 11 19:55:40.066: INFO: creating *v1.StatefulSet: csi-mock-volumes-4249/csi-mockplugin Jan 11 19:55:40.156: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-4249 Jan 11 19:55:40.245: INFO: creating *v1.StatefulSet: csi-mock-volumes-4249/csi-mockplugin-resizer Jan 11 19:55:40.335: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4249" STEP: Creating pod Jan 11 19:55:40.603: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 19:55:40.694: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-lr5cg] to have phase Bound Jan 11 19:55:40.783: INFO: PersistentVolumeClaim pvc-lr5cg found but phase is Pending instead of Bound. 
Jan 11 19:55:42.873: INFO: PersistentVolumeClaim pvc-lr5cg found and phase=Bound (2.178943677s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-49x8p Jan 11 19:55:47.768: INFO: Deleting pod "pvc-volume-tester-49x8p" in namespace "csi-mock-volumes-4249" Jan 11 19:55:47.858: INFO: Wait up to 5m0s for pod "pvc-volume-tester-49x8p" to be fully deleted STEP: Deleting claim pvc-lr5cg Jan 11 19:55:54.217: INFO: Waiting up to 2m0s for PersistentVolume pvc-ac626c7a-d0d3-4446-8c0b-e652b4660409 to get deleted Jan 11 19:55:54.306: INFO: PersistentVolume pvc-ac626c7a-d0d3-4446-8c0b-e652b4660409 was removed STEP: Deleting storageclass csi-mock-volumes-4249-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 19:55:54.397: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4249/csi-attacher Jan 11 19:55:54.488: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4249 Jan 11 19:55:54.579: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4249 Jan 11 19:55:54.670: INFO: deleting *v1.Role: csi-mock-volumes-4249/external-attacher-cfg-csi-mock-volumes-4249 Jan 11 19:55:54.760: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4249/csi-attacher-role-cfg Jan 11 19:55:54.851: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4249/csi-provisioner Jan 11 19:55:54.942: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4249 Jan 11 19:55:55.032: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4249 Jan 11 19:55:55.124: INFO: deleting *v1.Role: csi-mock-volumes-4249/external-provisioner-cfg-csi-mock-volumes-4249 Jan 11 19:55:55.215: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4249/csi-provisioner-role-cfg Jan 11 19:55:55.306: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4249/csi-resizer Jan 11 19:55:55.397: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4249 Jan 11 19:55:55.490: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4249 Jan 11 19:55:55.581: INFO: deleting *v1.Role: csi-mock-volumes-4249/external-resizer-cfg-csi-mock-volumes-4249 Jan 11 19:55:55.672: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4249/csi-resizer-role-cfg Jan 11 19:55:55.764: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4249/csi-mock Jan 11 19:55:55.855: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4249 Jan 11 19:55:55.946: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4249 Jan 11 19:55:56.037: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4249 Jan 11 19:55:56.128: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4249 Jan 11 19:55:56.219: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4249 Jan 11 19:55:56.310: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4249 Jan 11 19:55:56.401: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4249/csi-mockplugin Jan 11 19:55:56.492: INFO: deleting *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-4249 Jan 11 19:55:56.582: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4249/csi-mockplugin-resizer [AfterEach] [sig-storage] CSI mock volume 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:56.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-4249" for this suite. Jan 11 19:56:09.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:12.411: INFO: namespace csi-mock-volumes-4249 deletion completed in 15.558172113s • [SLOW TEST:35.136 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:531 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:546 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:53:33.778: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-5543 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should update nodePort: http [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:182 STEP: Performing setup for networking test in namespace nettest-5543 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 19:53:34.492: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: Getting node addresses Jan 11 19:53:58.027: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Jan 11 19:53:58.395: INFO: Service node-port-service in namespace nettest-5543 found. Jan 11 19:53:58.672: INFO: Service session-affinity-service in namespace nettest-5543 found. 
STEP: dialing(http) 10.250.27.25 (node) --> 10.250.27.25:31082 (nodeIP) Jan 11 19:53:58.852: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:53:58.852: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:53:59.744: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1]) Jan 11 19:54:01.835: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:01.835: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:02.769: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1]) Jan 11 19:54:04.859: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:04.859: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:05.703: INFO: Found all expected endpoints: [netserver-0 netserver-1] STEP: dialing(http) 10.250.27.25 (node) --> 10.250.27.25:31082 (nodeIP) Jan 11 19:54:20.895: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:20.895: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:21.787: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:21.787: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:23.878: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:23.878: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:24.748: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:24.748: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:26.838: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:26.838: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:27.728: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with 
exit code 1, stdout: "", stderr: "" Jan 11 19:54:27.728: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:29.818: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:29.818: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:30.658: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:30.658: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:32.748: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:32.748: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:33.605: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:33.605: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:35.695: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:35.695: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:36.528: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:36.528: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:38.619: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:38.619: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:39.460: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:39.460: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:41.550: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:41.550: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:42.421: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:42.421: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:44.511: INFO: ExecWithOptions {Command:[/bin/sh 
-c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:44.511: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:45.350: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:45.350: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:47.440: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:47.440: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:48.286: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:48.287: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:50.377: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:50.377: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:51.208: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:51.208: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:53.299: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:53.299: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:54.176: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:54.176: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:56.266: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:56.266: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:54:57.109: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:54:57.109: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:54:59.200: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod 
ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:54:59.200: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:00.064: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:00.064: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:02.154: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:02.154: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:03.012: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:03.012: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:05.102: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:05.102: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:05.947: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:05.948: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:08.038: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:08.038: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:08.926: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:08.926: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:11.016: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:11.016: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:11.912: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:11.912: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:14.002: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:14.002: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 
19:55:14.868: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:14.868: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:16.958: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:16.958: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:17.878: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:17.878: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:19.969: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:19.969: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:20.819: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:20.819: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:22.910: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:22.910: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:23.851: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:23.851: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:25.942: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:25.942: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:26.837: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:26.837: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:28.927: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:28.927: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:29.795: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit 
code 1, stdout: "", stderr: "" Jan 11 19:55:29.795: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:31.885: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:31.885: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:32.776: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:32.776: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:34.866: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:34.867: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:35.748: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:35.748: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:37.838: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:37.838: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:38.849: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:38.850: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:40.939: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:40.939: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:41.856: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:41.856: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:43.946: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:43.946: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:44.828: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:44.828: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:46.918: INFO: ExecWithOptions {Command:[/bin/sh -c 
curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:46.918: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:47.837: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:47.837: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:49.927: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:49.927: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:50.752: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:50.752: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:52.842: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:52.842: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:53.698: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:53.698: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:55.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:55.788: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:56.665: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:56.665: INFO: Waiting for [] endpoints (expected=[], actual=[]) Jan 11 19:55:58.755: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\s*$'] Namespace:nettest-5543 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 19:55:58.755: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 19:55:59.624: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.250.27.25:31082/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Jan 11 19:55:59.624: INFO: Found all expected endpoints: [] [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:55:59.624: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "nettest-5543" for this suite. Jan 11 19:56:11.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:15.305: INFO: namespace nettest-5543 deletion completed in 15.590027911s • [SLOW TEST:161.527 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should update nodePort: http [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:182 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:54:31.486: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename prestop STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-8721 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:173 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:186 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod running state after graceful termination [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:56:05.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8721" for this suite. 
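Note on the prestop-8721 run just above: the spec waits for the preStop hook to finish before the pod is considered terminated. Below is a minimal sketch of the kind of Pod spec such a test exercises, not the e2e framework's own code; the pod name, image, and sleep commands are illustrative assumptions, and it assumes a k8s.io/api release contemporary with the v1.16 cluster in this log (the hook type was later renamed LifecycleHandler).

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	grace := int64(30)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
		Spec: corev1.PodSpec{
			// Grace period must be long enough for the preStop hook to complete.
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox:1.31",
				Args:  []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs before SIGTERM is sent to the container's main process.
					PreStop: &corev1.Handler{ // named LifecycleHandler in newer k8s.io/api
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "sleep 10"}},
					},
				},
			}},
		},
	}
	b, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```

On graceful deletion the kubelet runs the exec hook first and only then signals the container, which is the behaviour the "graceful pod terminated should wait until preStop hook completes" spec asserts.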
Jan 11 19:56:13.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:17.163: INFO: namespace prestop-8721 deletion completed in 11.591494444s • [SLOW TEST:105.677 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 graceful pod terminated should wait until preStop hook completes the process /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:186 ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:56:01.577: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1542 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-map-8ab5f5e8-62db-424a-88fb-ac243f71affe STEP: Creating a pod to test consume secrets Jan 11 19:56:03.332: INFO: Waiting up to 5m0s for pod "pod-secrets-f468869d-ddd8-4482-8e99-64d9c4334e79" in namespace "secrets-1542" to be "success or failure" Jan 11 19:56:03.422: INFO: Pod "pod-secrets-f468869d-ddd8-4482-8e99-64d9c4334e79": Phase="Pending", Reason="", readiness=false. Elapsed: 89.792903ms Jan 11 19:56:05.512: INFO: Pod "pod-secrets-f468869d-ddd8-4482-8e99-64d9c4334e79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180067549s STEP: Saw pod success Jan 11 19:56:05.512: INFO: Pod "pod-secrets-f468869d-ddd8-4482-8e99-64d9c4334e79" satisfied condition "success or failure" Jan 11 19:56:05.602: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-secrets-f468869d-ddd8-4482-8e99-64d9c4334e79 container secret-volume-test: STEP: delete the pod Jan 11 19:56:05.799: INFO: Waiting for pod pod-secrets-f468869d-ddd8-4482-8e99-64d9c4334e79 to disappear Jan 11 19:56:05.889: INFO: Pod pod-secrets-f468869d-ddd8-4482-8e99-64d9c4334e79 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:56:05.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1542" for this suite. 
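For the secrets-1542 spec above (secret volume "with mappings and Item Mode set"), here is a minimal sketch of a Secret plus a Pod that mounts one key under a custom path with an explicit file mode; the secret name, key, path, and mode value are illustrative assumptions, not taken from the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400) // per-item file mode checked by the "Item Mode set" variant
	secret := corev1.Secret{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secret.Name,
						// Map the key to a custom file name and give it mode 0400.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox:1.31",
				Args:  []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		b, _ := yaml.Marshal(obj)
		fmt.Println(string(b))
		fmt.Println("---")
	}
}
```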
Jan 11 19:56:14.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:17.568: INFO: namespace secrets-1542 deletion completed in 11.58804932s • [SLOW TEST:15.992 seconds] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:56:12.442: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1252 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward api env vars Jan 11 19:56:13.169: INFO: Waiting up to 5m0s for pod "downward-api-9fde374b-6df6-406a-a9d4-46a6617468e3" in namespace "downward-api-1252" to be "success or failure" Jan 11 19:56:13.259: INFO: Pod "downward-api-9fde374b-6df6-406a-a9d4-46a6617468e3": Phase="Pending", Reason="", readiness=false. Elapsed: 89.441194ms Jan 11 19:56:15.349: INFO: Pod "downward-api-9fde374b-6df6-406a-a9d4-46a6617468e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179242191s STEP: Saw pod success Jan 11 19:56:15.349: INFO: Pod "downward-api-9fde374b-6df6-406a-a9d4-46a6617468e3" satisfied condition "success or failure" Jan 11 19:56:15.438: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downward-api-9fde374b-6df6-406a-a9d4-46a6617468e3 container dapi-container: STEP: delete the pod Jan 11 19:56:15.625: INFO: Waiting for pod downward-api-9fde374b-6df6-406a-a9d4-46a6617468e3 to disappear Jan 11 19:56:15.713: INFO: Pod downward-api-9fde374b-6df6-406a-a9d4-46a6617468e3 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:56:15.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1252" for this suite. 
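The downward-api-1252 spec above exposes the pod's own UID to its container through the downward API. A minimal sketch of such a pod follows; the env var name, image, and command are illustrative assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox:1.31",
				Args:  []string{"sh", "-c", "env | grep POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					// metadata.uid is resolved by the kubelet when the container starts.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	b, _ := yaml.Marshal(pod)
	fmt.Println(string(b))
}
```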
Jan 11 19:56:24.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:27.375: INFO: namespace downward-api-1252 deletion completed in 11.570863837s • [SLOW TEST:14.933 seconds] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:56:08.874: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9882 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:349 STEP: Initializing test volumes Jan 11 19:56:11.970: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9882 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4882d7e1-b431-434d-a2a5-57c675e76e13' Jan 11 19:56:13.244: INFO: stderr: "" Jan 11 19:56:13.244: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:56:13.244: INFO: Creating a PV followed by a PVC Jan 11 19:56:13.425: INFO: Waiting for PV local-pvpfvqz to bind to PVC pvc-2q5zh Jan 11 19:56:13.425: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-2q5zh] to have phase Bound Jan 11 19:56:13.515: INFO: PersistentVolumeClaim pvc-2q5zh found and phase=Bound (89.755026ms) Jan 11 19:56:13.515: INFO: Waiting up to 3m0s for PersistentVolume local-pvpfvqz to have phase Bound Jan 11 19:56:13.605: INFO: PersistentVolume local-pvpfvqz found and phase=Bound (90.189697ms) [It] should fail scheduling due to different NodeSelector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:369 STEP: local-volume-type: dir STEP: Initializing test volumes Jan 11 19:56:13.786: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9882 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b82b7f7a-9f1b-488d-ae6b-6359278fcde9' Jan 11 19:56:15.053: INFO: 
stderr: "" Jan 11 19:56:15.053: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 19:56:15.053: INFO: Creating a PV followed by a PVC Jan 11 19:56:15.234: INFO: Waiting for PV local-pvqtt7d to bind to PVC pvc-mc2km Jan 11 19:56:15.234: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-mc2km] to have phase Bound Jan 11 19:56:15.324: INFO: PersistentVolumeClaim pvc-mc2km found and phase=Bound (89.666345ms) Jan 11 19:56:15.324: INFO: Waiting up to 3m0s for PersistentVolume local-pvqtt7d to have phase Bound Jan 11 19:56:15.415: INFO: PersistentVolume local-pvqtt7d found and phase=Bound (90.406135ms) Jan 11 19:56:15.687: INFO: Waiting up to 5m0s for pod "security-context-f75e421d-79a2-4b34-84cb-f605baa398c8" in namespace "persistent-local-volumes-test-9882" to be "Unschedulable" Jan 11 19:56:15.777: INFO: Pod "security-context-f75e421d-79a2-4b34-84cb-f605baa398c8": Phase="Pending", Reason="", readiness=false. Elapsed: 89.790071ms Jan 11 19:56:15.777: INFO: Pod "security-context-f75e421d-79a2-4b34-84cb-f605baa398c8" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360 STEP: Cleaning up PVC and PV Jan 11 19:56:15.777: INFO: Deleting PersistentVolumeClaim "pvc-2q5zh" Jan 11 19:56:15.867: INFO: Deleting PersistentVolume "local-pvpfvqz" STEP: Removing the test directory Jan 11 19:56:15.958: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9882 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4882d7e1-b431-434d-a2a5-57c675e76e13' Jan 11 19:56:17.226: INFO: stderr: "" Jan 11 19:56:17.226: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:56:17.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9882" for this suite. 
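The persistent-local-volumes-test-9882 spec above binds a local PV that is pinned to one node and then shows that a pod forced onto a different node stays Pending. A minimal sketch of such a local PersistentVolume follows; the path, capacity, and storage class name are illustrative assumptions, while the hostname reuses the node name seen in this log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pv := corev1.PersistentVolume{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "PersistentVolume"},
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-demo"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:                      corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage",
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-demo"},
			},
			// Local volumes must declare which node actually hosts the path;
			// a pod scheduled to any other node cannot use this PV.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"ip-10-250-27-25.ec2.internal"},
						}},
					}},
				},
			},
		},
	}
	b, _ := yaml.Marshal(pv)
	fmt.Println(string(b))
}
```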
Jan 11 19:56:29.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:33.001: INFO: namespace persistent-local-volumes-test-9882 deletion completed in 15.593231368s • [SLOW TEST:24.127 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:343 should fail scheduling due to different NodeSelector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:369 ------------------------------ SSS ------------------------------ [BeforeEach] [k8s.io] Lease /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:56:27.378: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename lease-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in lease-test-9087 STEP: Waiting for a default service account to be provisioned in namespace [It] API should be available /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lease.go:57 [AfterEach] [k8s.io] Lease /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:56:29.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9087" for this suite. 
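The lease-test-9087 spec above only checks that the coordination.k8s.io Lease API is served. For reference, a minimal sketch of a Lease object of the shape that API accepts; the holder identity and duration are illustrative assumptions.

```go
package main

import (
	"fmt"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	holder := "demo-holder"
	duration := int32(30)
	lease := coordinationv1.Lease{
		TypeMeta:   metav1.TypeMeta{APIVersion: "coordination.k8s.io/v1", Kind: "Lease"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease", Namespace: "default"},
		Spec: coordinationv1.LeaseSpec{
			// Leases are commonly used for leader election and node heartbeats.
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
		},
	}
	b, _ := yaml.Marshal(lease)
	fmt.Println(string(b))
}
```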
Jan 11 19:56:35.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:38.752: INFO: namespace lease-test-9087 deletion completed in 9.567750162s • [SLOW TEST:11.374 seconds] [k8s.io] Lease /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 API should be available /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lease.go:57 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:56:00.908: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3768 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl run rc /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1439 [It] should create an rc from an image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 11 19:56:01.550: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3768' Jan 11 19:56:01.984: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 11 19:56:01.984: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Jan 11 19:56:04.253: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-d6j9v] Jan 11 19:56:04.253: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-d6j9v" in namespace "kubectl-3768" to be "running and ready" Jan 11 19:56:04.343: INFO: Pod "e2e-test-httpd-rc-d6j9v": Phase="Running", Reason="", readiness=true. Elapsed: 89.559903ms Jan 11 19:56:04.343: INFO: Pod "e2e-test-httpd-rc-d6j9v" satisfied condition "running and ready" Jan 11 19:56:04.343: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-d6j9v] Jan 11 19:56:04.343: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs rc/e2e-test-httpd-rc --namespace=kubectl-3768' Jan 11 19:56:04.962: INFO: stderr: "" Jan 11 19:56:04.962: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 100.64.0.86. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 100.64.0.86. Set the 'ServerName' directive globally to suppress this message\n[Sat Jan 11 19:56:02.962688 2020] [mpm_event:notice] [pid 1:tid 140292943932264] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Jan 11 19:56:02.962738 2020] [core:notice] [pid 1:tid 140292943932264] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 Jan 11 19:56:04.962: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete rc e2e-test-httpd-rc --namespace=kubectl-3768' Jan 11 19:56:05.492: INFO: stderr: "" Jan 11 19:56:05.492: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:56:05.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3768" for this suite. 
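The kubectl-3768 spec above uses the deprecated `--generator=run/v1` form of `kubectl run`, which creates a ReplicationController. A minimal sketch of the equivalent object follows; only the image and controller name come from the log, and the `run=<name>` label/selector mirrors the convention kubectl applies.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-httpd-rc"}
	rc := corev1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-rc"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-rc",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	b, _ := yaml.Marshal(rc)
	fmt.Println(string(b))
}
```

Requesting logs via `rc/e2e-test-httpd-rc`, as the run does, resolves to one pod of the controller, which is why the httpd startup banner appears in the output above.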
Jan 11 19:56:35.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:39.159: INFO: namespace kubectl-3768 deletion completed in 33.576341503s • [SLOW TEST:38.251 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1435 should create an rc from an image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:56:17.577: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7214 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 19:56:19.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369379, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369379, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369379, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369379, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 19:56:21.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369379, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369379, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369379, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63714369379, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 19:56:24.921: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:56:26.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7214" for this suite. Jan 11 19:56:32.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:35.932: INFO: namespace webhook-7214 deletion completed in 9.597511312s STEP: Destroying namespace "webhook-7214-markers" for this suite. 
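The webhook-7214 spec above registers webhooks that try to intercept ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, then verifies the API server still allows those configuration objects to be created and deleted. For orientation only, a minimal sketch of a v1 ValidatingWebhookConfiguration of the general shape such tests register; the names, namespace, service, and rule are illustrative assumptions, not the e2e framework's own objects.

```go
package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	path := "/validate"
	port := int32(8443)
	failurePolicy := admissionregistrationv1.Ignore
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := admissionregistrationv1.ValidatingWebhookConfiguration{
		TypeMeta:   metav1.TypeMeta{APIVersion: "admissionregistration.k8s.io/v1", Kind: "ValidatingWebhookConfiguration"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-validating-webhook"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "demo.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// The webhook backend runs as a Service inside the cluster.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-demo", Name: "e2e-test-webhook", Path: &path, Port: &port,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"pods"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	b, _ := yaml.Marshal(cfg)
	fmt.Println(string(b))
}
```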
Jan 11 19:56:42.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:45.524: INFO: namespace webhook-7214-markers deletion completed in 9.592329922s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:28.307 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:56:38.758: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-3884 STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:67 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Jan 11 19:56:39.486: INFO: Waiting up to 5m0s for pod "security-context-68ab55da-1a1d-4f12-9862-cbf13975772a" in namespace "security-context-3884" to be "success or failure" Jan 11 19:56:39.575: INFO: Pod "security-context-68ab55da-1a1d-4f12-9862-cbf13975772a": Phase="Pending", Reason="", readiness=false. Elapsed: 89.489743ms Jan 11 19:56:41.665: INFO: Pod "security-context-68ab55da-1a1d-4f12-9862-cbf13975772a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179101204s STEP: Saw pod success Jan 11 19:56:41.665: INFO: Pod "security-context-68ab55da-1a1d-4f12-9862-cbf13975772a" satisfied condition "success or failure" Jan 11 19:56:41.755: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod security-context-68ab55da-1a1d-4f12-9862-cbf13975772a container test-container: STEP: delete the pod Jan 11 19:56:42.083: INFO: Waiting for pod security-context-68ab55da-1a1d-4f12-9862-cbf13975772a to disappear Jan 11 19:56:42.172: INFO: Pod security-context-68ab55da-1a1d-4f12-9862-cbf13975772a no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:56:42.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3884" for this suite. 
Jan 11 19:56:48.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:51.947: INFO: namespace security-context-3884 deletion completed in 9.684262324s • [SLOW TEST:13.190 seconds] [k8s.io] [sig-node] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:67 ------------------------------ SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:56:33.008: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-837 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:508 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a PersistentVolumeClaim with storage class STEP: Ensuring resource quota status captures persistent volume claim creation STEP: Deleting a PersistentVolumeClaim STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 19:56:45.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-837" for this suite. Jan 11 19:56:51.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 19:56:55.382: INFO: namespace resourcequota-837 deletion completed in 9.726273409s • [SLOW TEST:22.374 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. 
[sig-storage] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:508 ------------------------------ S ------------------------------ [BeforeEach] version v1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 19:56:55.385: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename proxy STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-6319 STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 19:56:56.308: INFO: (0) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/:
btmp
containers/
faillog... (200; 190.274592ms)
Jan 11 19:56:56.402: INFO: (1) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 93.628882ms)
Jan 11 19:56:56.494: INFO: (2) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.338714ms)
Jan 11 19:56:56.587: INFO: (3) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.763605ms)
Jan 11 19:56:56.680: INFO: (4) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.597205ms)
Jan 11 19:56:56.773: INFO: (5) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 93.039537ms)
Jan 11 19:56:56.865: INFO: (6) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.431883ms)
Jan 11 19:56:56.958: INFO: (7) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.367954ms)
Jan 11 19:56:57.051: INFO: (8) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 93.332679ms)
Jan 11 19:56:57.144: INFO: (9) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.483837ms)
Jan 11 19:56:57.248: INFO: (10) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 103.890343ms)
Jan 11 19:56:57.340: INFO: (11) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.146195ms)
Jan 11 19:56:57.434: INFO: (12) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 94.332993ms)
Jan 11 19:56:57.528: INFO: (13) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 93.355262ms)
Jan 11 19:56:57.621: INFO: (14) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 93.025089ms)
Jan 11 19:56:57.731: INFO: (15) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 109.852696ms)
Jan 11 19:56:57.823: INFO: (16) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.396875ms)
Jan 11 19:56:57.959: INFO: (17) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 135.274721ms)
Jan 11 19:56:58.051: INFO: (18) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.725961ms)
Jan 11 19:56:58.144: INFO: (19) /api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/: 
btmp
containers/
faillog... (200; 92.267072ms)
[AfterEach] version v1
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:56:58.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6319" for this suite.
Jan 11 19:57:04.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:07.836: INFO: namespace proxy-6319 deletion completed in 9.600597374s


• [SLOW TEST:12.451 seconds]
[sig-network] Proxy
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy logs on node using proxy subresource  [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
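Note: the twenty numbered requests in the spec above all hit the same node logs proxy subresource. The following is a minimal client-go sketch of that call, not part of the suite; it reuses the node name and kubeconfig path from the log purely for illustration and assumes a recent client-go where Request.DoRaw takes a context.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name copied from the log; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/nodes/<node>/proxy/logs/ returns the kubelet's log directory
	// listing (btmp, containers/, faillog, ...), which is what each numbered
	// request above is checking for a 200 response.
	body, err := clientset.CoreV1().RESTClient().
		Get().
		AbsPath("/api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}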
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:56:39.161: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-1290
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: dir-link]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Jan 11 19:56:42.253: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1290 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ee38d276-c60f-463d-9851-4ca5f19446b0-backend && ln -s /tmp/local-volume-test-ee38d276-c60f-463d-9851-4ca5f19446b0-backend /tmp/local-volume-test-ee38d276-c60f-463d-9851-4ca5f19446b0'
Jan 11 19:56:43.580: INFO: stderr: ""
Jan 11 19:56:43.580: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 19:56:43.580: INFO: Creating a PV followed by a PVC
Jan 11 19:56:43.760: INFO: Waiting for PV local-pvt6hrk to bind to PVC pvc-k5x5b
Jan 11 19:56:43.760: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-k5x5b] to have phase Bound
Jan 11 19:56:43.849: INFO: PersistentVolumeClaim pvc-k5x5b found and phase=Bound (89.486933ms)
Jan 11 19:56:43.849: INFO: Waiting up to 3m0s for PersistentVolume local-pvt6hrk to have phase Bound
Jan 11 19:56:43.939: INFO: PersistentVolume local-pvt6hrk found and phase=Bound (89.211647ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set same fsGroup for two pods simultaneously [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
STEP: Create first pod and check fsGroup is set
STEP: Creating a pod
Jan 11 19:56:46.477: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-93fead58-b5f8-48ee-9d8d-197211e02c78 --namespace=persistent-local-volumes-test-1290 -- stat -c %g /mnt/volume1'
Jan 11 19:56:47.775: INFO: stderr: ""
Jan 11 19:56:47.775: INFO: stdout: "1234\n"
STEP: Create second pod with same fsGroup and check fsGroup is correct
STEP: Creating a pod
Jan 11 19:56:50.134: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-1de18d45-9c21-47e3-83c0-686e84b9c93f --namespace=persistent-local-volumes-test-1290 -- stat -c %g /mnt/volume1'
Jan 11 19:56:51.450: INFO: stderr: ""
Jan 11 19:56:51.450: INFO: stdout: "1234\n"
STEP: Deleting first pod
STEP: Deleting pod security-context-93fead58-b5f8-48ee-9d8d-197211e02c78 in namespace persistent-local-volumes-test-1290
STEP: Deleting second pod
STEP: Deleting pod security-context-1de18d45-9c21-47e3-83c0-686e84b9c93f in namespace persistent-local-volumes-test-1290
[AfterEach] [Volume type: dir-link]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 19:56:51.631: INFO: Deleting PersistentVolumeClaim "pvc-k5x5b"
Jan 11 19:56:51.721: INFO: Deleting PersistentVolume "local-pvt6hrk"
STEP: Removing the test directory
Jan 11 19:56:51.812: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1290 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ee38d276-c60f-463d-9851-4ca5f19446b0 && rm -r /tmp/local-volume-test-ee38d276-c60f-463d-9851-4ca5f19446b0-backend'
Jan 11 19:56:53.106: INFO: stderr: ""
Jan 11 19:56:53.106: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:56:53.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1290" for this suite.
Jan 11 19:57:05.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:08.878: INFO: namespace persistent-local-volumes-test-1290 deletion completed in 15.589455632s


• [SLOW TEST:29.716 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set same fsGroup for two pods simultaneously [Slow]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
------------------------------
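Note: the spec above only shows the stat output ("1234") from its two generated pods. As a hedged sketch of the mechanism under test, a pod of roughly the following shape, with fsGroup 1234 in the pod-level security context and a mount of the prebound local PVC, is what makes the kubelet chown the volume so both pods report the same group ID. The pod and container names are placeholders, not the generated security-context-... names from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Both test pods share this shape: fsGroup 1234 plus a mount of the
	// prebound claim, so `stat -c %g /mnt/volume1` prints "1234" in each.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				FSGroup: int64Ptr(1234),
			},
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "volume1",
					MountPath: "/mnt/volume1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "pvc-k5x5b", // claim name taken from the log above
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}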
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:56:45.889: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4312
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should create/apply a valid CR for CRD with validation schema
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:917
STEP: prepare CRD with validation schema
Jan 11 19:56:46.551: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature
STEP: successfully create CR
Jan 11 19:56:56.830: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4312 create --validate=true -f -'
Jan 11 19:56:58.314: INFO: stderr: ""
Jan 11 19:56:58.314: INFO: stdout: "e2e-test-kubectl-2853-crd.kubectl.example.com/test-cr created\n"
Jan 11 19:56:58.314: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4312 delete e2e-test-kubectl-2853-crds test-cr'
Jan 11 19:56:58.829: INFO: stderr: ""
Jan 11 19:56:58.829: INFO: stdout: "e2e-test-kubectl-2853-crd.kubectl.example.com \"test-cr\" deleted\n"
STEP: successfully apply CR
Jan 11 19:56:58.829: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4312 apply --validate=true -f -'
Jan 11 19:56:59.606: INFO: stderr: ""
Jan 11 19:56:59.606: INFO: stdout: "e2e-test-kubectl-2853-crd.kubectl.example.com/test-cr created\n"
Jan 11 19:56:59.606: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4312 delete e2e-test-kubectl-2853-crds test-cr'
Jan 11 19:57:00.128: INFO: stderr: ""
Jan 11 19:57:00.128: INFO: stdout: "e2e-test-kubectl-2853-crd.kubectl.example.com \"test-cr\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:57:00.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4312" for this suite.
Jan 11 19:57:08.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:11.979: INFO: namespace kubectl-4312 deletion completed in 11.578893086s


• [SLOW TEST:26.090 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:898
    should create/apply a valid CR for CRD with validation schema
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:917
------------------------------
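Note: the fixture CRD used by the spec above is not printed in the log. As a hedged illustration of what "CRD with validation schema" means there, a namespaced CRD with an openAPIV3Schema might look like the sketch below; the group, resource names, and schema fields are invented. It is this schema, once published via OpenAPI (hence the 10s sleep in the log), that lets kubectl create/apply --validate=true reject malformed CRs client-side.

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical CRD roughly shaped like the e2e-test-kubectl-<n>-crds resource
	// created by the test; only the structure matters for validation.
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "examples.kubectl.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "kubectl.example.com",
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "examples",
				Singular: "example",
				Kind:     "Example",
				ListKind: "ExampleList",
			},
			Scope: apiextv1.NamespaceScoped,
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextv1.JSONSchemaProps{
									"replicas": {Type: "integer"},
								},
							},
						},
					},
				},
			}},
		},
	}
	fmt.Println(crd.Name)
}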
SS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:56:17.165: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-2629
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:56:22.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2629" for this suite.
Jan 11 19:57:08.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:12.069: INFO: namespace kubelet-test-2629 deletion completed in 49.577727678s


• [SLOW TEST:54.903 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a read only busybox container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
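Note: the Kubelet spec above logs no detail about how the read-only check is performed. As a rough sketch (not the framework's actual fixture), the container it creates looks approximately like the following: ReadOnlyRootFilesystem is set on the container security context, so any write to the container's root filesystem fails, which is the behaviour the spec asserts.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Hypothetical pod: with ReadOnlyRootFilesystem set, the `touch` below fails.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readonly-rootfs-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /should-fail; sleep 3600"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: boolPtr(true),
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}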
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:56:51.952: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-6719
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: blockfswithformat]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832"
Jan 11 19:56:55.040: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6719 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832 && dd if=/dev/zero of=/tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832/file'
Jan 11 19:56:56.398: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0175684 s, 1.2 GB/s\n"
Jan 11 19:56:56.398: INFO: stdout: ""
Jan 11 19:56:56.399: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6719 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}'
Jan 11 19:56:57.694: INFO: stderr: ""
Jan 11 19:56:57.694: INFO: stdout: "/dev/loop0\n"
Jan 11 19:56:57.694: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6719 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832 && chmod o+rwx /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832'
Jan 11 19:56:59.101: INFO: stderr: "mke2fs 1.44.5 (15-Dec-2018)\n"
Jan 11 19:56:59.101: INFO: stdout: "Discarding device blocks:  1024/20480\b\b\b\b\b\b\b\b\b\b\b           \b\b\b\b\b\b\b\b\b\b\bdone                            \nCreating filesystem with 20480 1k blocks and 5136 inodes\nFilesystem UUID: 702ab722-89f9-4de5-9d71-6402c4b65640\nSuperblock backups stored on blocks: \n\t8193\n\nAllocating group tables: 0/3\b\b\b   \b\b\bdone                            \nWriting inode tables: 0/3\b\b\b   \b\b\bdone                            \nCreating journal (1024 blocks): done\nWriting superblocks and filesystem accounting information: 0/3\b\b\b   \b\b\bdone\n\n"
STEP: Creating local PVCs and PVs
Jan 11 19:56:59.101: INFO: Creating a PV followed by a PVC
Jan 11 19:56:59.280: INFO: Waiting for PV local-pvklkmp to bind to PVC pvc-fvnlc
Jan 11 19:56:59.280: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-fvnlc] to have phase Bound
Jan 11 19:56:59.372: INFO: PersistentVolumeClaim pvc-fvnlc found and phase=Bound (91.105463ms)
Jan 11 19:56:59.372: INFO: Waiting up to 3m0s for PersistentVolume local-pvklkmp to have phase Bound
Jan 11 19:56:59.461: INFO: PersistentVolume local-pvklkmp found and phase=Bound (89.585531ms)
[BeforeEach] One pod requesting one prebound PVC
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Jan 11 19:57:02.089: INFO: pod "security-context-d9bd5862-8d72-45b2-b7e9-d5b4ad3d1038" created on Node "ip-10-250-27-25.ec2.internal"
STEP: Writing in pod1
Jan 11 19:57:02.089: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6719 security-context-d9bd5862-8d72-45b2-b7e9-d5b4ad3d1038 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
Jan 11 19:57:03.371: INFO: stderr: ""
Jan 11 19:57:03.371: INFO: stdout: ""
Jan 11 19:57:03.371: INFO: podRWCmdExec out: "" err: 
[It] should be able to mount volume and read from pod1
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Jan 11 19:57:03.371: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6719 security-context-d9bd5862-8d72-45b2-b7e9-d5b4ad3d1038 -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 19:57:04.681: INFO: stderr: ""
Jan 11 19:57:04.681: INFO: stdout: "test-file-content\n"
Jan 11 19:57:04.681: INFO: podRWCmdExec out: "test-file-content\n" err: 
[AfterEach] One pod requesting one prebound PVC
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod security-context-d9bd5862-8d72-45b2-b7e9-d5b4ad3d1038 in namespace persistent-local-volumes-test-6719
[AfterEach] [Volume type: blockfswithformat]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 19:57:04.772: INFO: Deleting PersistentVolumeClaim "pvc-fvnlc"
Jan 11 19:57:04.862: INFO: Deleting PersistentVolume "local-pvklkmp"
Jan 11 19:57:04.952: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6719 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832'
Jan 11 19:57:06.381: INFO: stderr: ""
Jan 11 19:57:06.381: INFO: stdout: ""
Jan 11 19:57:06.381: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6719 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}'
Jan 11 19:57:07.688: INFO: stderr: ""
Jan 11 19:57:07.688: INFO: stdout: "/dev/loop0\n"
STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832/file
Jan 11 19:57:07.688: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6719 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0'
Jan 11 19:57:08.970: INFO: stderr: ""
Jan 11 19:57:08.970: INFO: stdout: ""
STEP: Removing the test directory /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832
Jan 11 19:57:08.970: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6719 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-61951c62-5357-4ca0-9cc5-24e66ddca832'
Jan 11 19:57:10.319: INFO: stderr: ""
Jan 11 19:57:10.319: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:57:10.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6719" for this suite.
Jan 11 19:57:16.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:20.063: INFO: namespace persistent-local-volumes-test-6719 deletion completed in 9.562794363s


• [SLOW TEST:28.111 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithformat]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
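Note: the commands above build the backing store by hand (dd, losetup, mkfs, mount); what the spec then binds a PVC against is a local PersistentVolume pinned to one node. Below is a hedged sketch of such a PV; the object name, path, and capacity are placeholders, while the node name is the one from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical local PV: a path on a single node plus the required node
	// affinity that pins any consuming pod to that node.
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-example"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Mi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{
					Path: "/tmp/local-volume-test-example", // placeholder path
				},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"ip-10-250-27-25.ec2.internal"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name)
}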
SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:12.079: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6722
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should reject quota with invalid scopes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2003
STEP: calling kubectl quota
Jan 11 19:57:12.753: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create quota scopes --hard=hard=pods=1000000 --scopes=Foo --namespace=kubectl-6722'
Jan 11 19:57:12.806: INFO: rc: 1
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:57:12.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6722" for this suite.
Jan 11 19:57:19.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:22.487: INFO: namespace kubectl-6722 deletion completed in 9.589766053s


• [SLOW TEST:10.408 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl create quota
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1945
    should reject quota with invalid scopes
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2003
------------------------------
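Note: the kubectl call above exits with rc: 1 because Foo is not a recognised quota scope. As a hedged sketch of the object the command would otherwise have created, here is an equivalent ResourceQuota with a valid scope; accepted values include BestEffort, NotBestEffort, Terminating, and NotTerminating.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical quota "scopes" with a valid scope; substituting an unknown
	// scope string is what the spec expects the API to reject.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "scopes"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("1000000"),
			},
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
		},
	}
	fmt.Println(quota.Name)
}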
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:08.880: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-4653
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: blockfswithoutformat]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-f0694cde-a0c8-4c59-b18e-5d82e830903c"
Jan 11 19:57:11.978: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f0694cde-a0c8-4c59-b18e-5d82e830903c && dd if=/dev/zero of=/tmp/local-volume-test-f0694cde-a0c8-4c59-b18e-5d82e830903c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-f0694cde-a0c8-4c59-b18e-5d82e830903c/file'
Jan 11 19:57:13.322: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0165135 s, 1.3 GB/s\n"
Jan 11 19:57:13.322: INFO: stdout: ""
Jan 11 19:57:13.322: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f0694cde-a0c8-4c59-b18e-5d82e830903c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}'
Jan 11 19:57:14.646: INFO: stderr: ""
Jan 11 19:57:14.646: INFO: stdout: "/dev/loop0\n"
STEP: Creating local PVCs and PVs
Jan 11 19:57:14.646: INFO: Creating a PV followed by a PVC
Jan 11 19:57:14.825: INFO: Waiting for PV local-pvdsgwl to bind to PVC pvc-ktm9b
Jan 11 19:57:14.825: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-ktm9b] to have phase Bound
Jan 11 19:57:14.915: INFO: PersistentVolumeClaim pvc-ktm9b found and phase=Bound (89.068679ms)
Jan 11 19:57:14.915: INFO: Waiting up to 3m0s for PersistentVolume local-pvdsgwl to have phase Bound
Jan 11 19:57:15.004: INFO: PersistentVolume local-pvdsgwl found and phase=Bound (89.300901ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set fsGroup for one pod [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
STEP: Checking fsGroup is set
STEP: Creating a pod
Jan 11 19:57:17.541: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-12b31542-59ca-462f-ba44-bf7799d9291d --namespace=persistent-local-volumes-test-4653 -- stat -c %g /mnt/volume1'
Jan 11 19:57:18.809: INFO: stderr: ""
Jan 11 19:57:18.809: INFO: stdout: "1234\n"
STEP: Deleting pod
STEP: Deleting pod security-context-12b31542-59ca-462f-ba44-bf7799d9291d in namespace persistent-local-volumes-test-4653
[AfterEach] [Volume type: blockfswithoutformat]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 19:57:18.900: INFO: Deleting PersistentVolumeClaim "pvc-ktm9b"
Jan 11 19:57:18.990: INFO: Deleting PersistentVolume "local-pvdsgwl"
Jan 11 19:57:19.080: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f0694cde-a0c8-4c59-b18e-5d82e830903c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}'
Jan 11 19:57:20.409: INFO: stderr: ""
Jan 11 19:57:20.409: INFO: stdout: "/dev/loop0\n"
STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-f0694cde-a0c8-4c59-b18e-5d82e830903c/file
Jan 11 19:57:20.409: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0'
Jan 11 19:57:21.702: INFO: stderr: ""
Jan 11 19:57:21.702: INFO: stdout: ""
STEP: Removing the test directory /tmp/local-volume-test-f0694cde-a0c8-4c59-b18e-5d82e830903c
Jan 11 19:57:21.703: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f0694cde-a0c8-4c59-b18e-5d82e830903c'
Jan 11 19:57:23.073: INFO: stderr: ""
Jan 11 19:57:23.073: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:57:23.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-4653" for this suite.
Jan 11 19:57:29.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:32.850: INFO: namespace persistent-local-volumes-test-4653 deletion completed in 9.594195274s


• [SLOW TEST:23.970 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithoutformat]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set fsGroup for one pod [Slow]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
------------------------------
SSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:11.984: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7877
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name s-test-opt-del-9790d532-4676-4bf0-bd49-dc154b0ac88b
STEP: Creating secret with name s-test-opt-upd-3badf91b-ca5f-4dcc-8e40-2886316a9b61
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9790d532-4676-4bf0-bd49-dc154b0ac88b
STEP: Updating secret s-test-opt-upd-3badf91b-ca5f-4dcc-8e40-2886316a9b61
STEP: Creating secret with name s-test-opt-create-a12b9585-4c2e-4e76-b153-7e3057b18fe0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:57:18.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7877" for this suite.
Jan 11 19:57:30.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:33.955: INFO: namespace secrets-7877 deletion completed in 15.601997119s


• [SLOW TEST:21.971 seconds]
[sig-storage] Secrets
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
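Note: the spec above exercises optional secret volumes: one referenced secret is deleted, one is updated, and one is created only after the pod starts, and the mounted files are expected to follow. Below is a hedged sketch of a pod using an optional secret volume; the names and polling command are placeholders, not the fixture the framework actually creates.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Hypothetical pod: Optional=true means the pod starts (and keeps running)
	// even if the referenced secret does not exist yet or is deleted later.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "optional-secret-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/secret-volume/* 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "optional-secret",
					MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "optional-secret",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create-example", // placeholder name
						Optional:   boolPtr(true),
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}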
SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:20.069: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-6427
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
Jan 11 19:57:20.706: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir
Jan 11 19:57:20.706: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-emptydir-7gxr
STEP: Creating a pod to test subpath
Jan 11 19:57:20.799: INFO: Waiting up to 5m0s for pod "pod-subpath-test-emptydir-7gxr" in namespace "provisioning-6427" to be "success or failure"
Jan 11 19:57:20.888: INFO: Pod "pod-subpath-test-emptydir-7gxr": Phase="Pending", Reason="", readiness=false. Elapsed: 89.221483ms
Jan 11 19:57:22.978: INFO: Pod "pod-subpath-test-emptydir-7gxr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178994858s
Jan 11 19:57:25.068: INFO: Pod "pod-subpath-test-emptydir-7gxr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.268684328s
STEP: Saw pod success
Jan 11 19:57:25.068: INFO: Pod "pod-subpath-test-emptydir-7gxr" satisfied condition "success or failure"
Jan 11 19:57:25.157: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-emptydir-7gxr container test-container-subpath-emptydir-7gxr: 
STEP: delete the pod
Jan 11 19:57:25.347: INFO: Waiting for pod pod-subpath-test-emptydir-7gxr to disappear
Jan 11 19:57:25.436: INFO: Pod pod-subpath-test-emptydir-7gxr no longer exists
STEP: Deleting pod pod-subpath-test-emptydir-7gxr
Jan 11 19:57:25.436: INFO: Deleting pod "pod-subpath-test-emptydir-7gxr" in namespace "provisioning-6427"
STEP: Deleting pod
Jan 11 19:57:25.525: INFO: Deleting pod "pod-subpath-test-emptydir-7gxr" in namespace "provisioning-6427"
Jan 11 19:57:25.614: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:57:25.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6427" for this suite.
Jan 11 19:57:31.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:35.271: INFO: namespace provisioning-6427 deletion completed in 9.566756073s


• [SLOW TEST:15.202 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support existing single file [LinuxOnly]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
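Note: the subPath spec above creates a pod whose container mounts a single pre-existing file out of an inline emptyDir volume. A hedged sketch of that shape follows; the file name, image, and commands are placeholders rather than the framework's generated pod-subpath-test-emptydir-7gxr fixture.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: the init container creates a file inside the emptyDir,
	// and the test container mounts only that file via VolumeMount.SubPath.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-volume",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo hello > /data/existing-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "inline-vol",
					MountPath: "/data",
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"cat", "/test-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "inline-vol",
					MountPath: "/test-file",
					SubPath:   "existing-file",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "inline-vol",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}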
SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:22.510: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5318
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: dir-bindmounted]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Jan 11 19:57:25.611: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5318 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-23a29eff-04f3-470d-82c7-8b31ab04631d && mount --bind /tmp/local-volume-test-23a29eff-04f3-470d-82c7-8b31ab04631d /tmp/local-volume-test-23a29eff-04f3-470d-82c7-8b31ab04631d'
Jan 11 19:57:26.913: INFO: stderr: ""
Jan 11 19:57:26.913: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 19:57:26.913: INFO: Creating a PV followed by a PVC
Jan 11 19:57:27.094: INFO: Waiting for PV local-pv4fcg5 to bind to PVC pvc-9kdgw
Jan 11 19:57:27.094: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-9kdgw] to have phase Bound
Jan 11 19:57:27.184: INFO: PersistentVolumeClaim pvc-9kdgw found and phase=Bound (89.81234ms)
Jan 11 19:57:27.184: INFO: Waiting up to 3m0s for PersistentVolume local-pv4fcg5 to have phase Bound
Jan 11 19:57:27.274: INFO: PersistentVolume local-pv4fcg5 found and phase=Bound (89.868893ms)
[BeforeEach] One pod requesting one prebound PVC
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Jan 11 19:57:29.950: INFO: pod "security-context-4e9c163e-b525-42d9-91b3-f3f5d5f9fa6e" created on Node "ip-10-250-27-25.ec2.internal"
STEP: Writing in pod1
Jan 11 19:57:29.950: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5318 security-context-4e9c163e-b525-42d9-91b3-f3f5d5f9fa6e -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
Jan 11 19:57:31.289: INFO: stderr: ""
Jan 11 19:57:31.289: INFO: stdout: ""
Jan 11 19:57:31.289: INFO: podRWCmdExec out: "" err: 
[It] should be able to mount volume and read from pod1
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Jan 11 19:57:31.289: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5318 security-context-4e9c163e-b525-42d9-91b3-f3f5d5f9fa6e -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 19:57:32.645: INFO: stderr: ""
Jan 11 19:57:32.645: INFO: stdout: "test-file-content\n"
Jan 11 19:57:32.645: INFO: podRWCmdExec out: "test-file-content\n" err: 
[AfterEach] One pod requesting one prebound PVC
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod security-context-4e9c163e-b525-42d9-91b3-f3f5d5f9fa6e in namespace persistent-local-volumes-test-5318
[AfterEach] [Volume type: dir-bindmounted]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 19:57:32.736: INFO: Deleting PersistentVolumeClaim "pvc-9kdgw"
Jan 11 19:57:32.827: INFO: Deleting PersistentVolume "local-pv4fcg5"
STEP: Removing the test directory
Jan 11 19:57:32.918: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5318 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-23a29eff-04f3-470d-82c7-8b31ab04631d && rm -r /tmp/local-volume-test-23a29eff-04f3-470d-82c7-8b31ab04631d'
Jan 11 19:57:34.194: INFO: stderr: ""
Jan 11 19:57:34.194: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:57:34.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-5318" for this suite.
Jan 11 19:57:40.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:43.974: INFO: namespace persistent-local-volumes-test-5318 deletion completed in 9.59649941s


• [SLOW TEST:21.463 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-bindmounted]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
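For reference, the write/read verification in the local-volume test above reduces to the two kubectl exec invocations logged at 19:57:29 and 19:57:31; stripped of the framework's --server and --kubeconfig flags they are:

  # write a marker file into the mounted local volume
  kubectl --namespace persistent-local-volumes-test-5318 exec security-context-4e9c163e-b525-42d9-91b3-f3f5d5f9fa6e -- \
    /bin/sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
  # read it back; the test passes when stdout is exactly "test-file-content"
  kubectl --namespace persistent-local-volumes-test-5318 exec security-context-4e9c163e-b525-42d9-91b3-f3f5d5f9fa6e -- \
    /bin/sh -c 'cat /mnt/volume1/test-file'

Cleanup for the dir-bindmounted variant then unmounts and removes the bind-mounted directory on the node through the hostexec pod and nsenter, as shown at 19:57:32.
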
SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:35.288: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2899
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-5d07ceeb-4798-496f-bf7e-ce9ff0f81f5d
STEP: Creating a pod to test consume configMaps
Jan 11 19:57:36.109: INFO: Waiting up to 5m0s for pod "pod-configmaps-292d390e-1edd-4dfa-b8c5-907a9ec704eb" in namespace "configmap-2899" to be "success or failure"
Jan 11 19:57:36.200: INFO: Pod "pod-configmaps-292d390e-1edd-4dfa-b8c5-907a9ec704eb": Phase="Pending", Reason="", readiness=false. Elapsed: 90.658945ms
Jan 11 19:57:38.290: INFO: Pod "pod-configmaps-292d390e-1edd-4dfa-b8c5-907a9ec704eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.18042619s
STEP: Saw pod success
Jan 11 19:57:38.290: INFO: Pod "pod-configmaps-292d390e-1edd-4dfa-b8c5-907a9ec704eb" satisfied condition "success or failure"
Jan 11 19:57:38.379: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-292d390e-1edd-4dfa-b8c5-907a9ec704eb container configmap-volume-test: 
STEP: delete the pod
Jan 11 19:57:38.570: INFO: Waiting for pod pod-configmaps-292d390e-1edd-4dfa-b8c5-907a9ec704eb to disappear
Jan 11 19:57:38.659: INFO: Pod pod-configmaps-292d390e-1edd-4dfa-b8c5-907a9ec704eb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:57:38.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2899" for this suite.
Jan 11 19:57:45.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:48.317: INFO: namespace configmap-2899 deletion completed in 9.566498247s


• [SLOW TEST:13.029 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
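The "success or failure" wait at 19:57:36 above is a poll on the test pod's phase; roughly the same check by hand, using the names from the log (an illustration, not what the framework itself runs):

  kubectl -n configmap-2899 get pod pod-configmaps-292d390e-1edd-4dfa-b8c5-907a9ec704eb \
    -o jsonpath='{.status.phase}'
  # the test treats "Succeeded" as success and "Failed" as failure; anything else keeps it polling
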
SSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:33.966: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-1736
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:87
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1736.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1736.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1736.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1736.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1736.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1736.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 19:57:48.655: INFO: DNS probes using dns-1736/dns-test-db35f479-a1b0-429c-8955-dc9ef9b39129 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:57:48.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1736" for this suite.
Jan 11 19:57:55.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:57:58.434: INFO: namespace dns-1736 deletion completed in 9.591645569s


• [SLOW TEST:24.468 seconds]
[sig-network] DNS
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:87
------------------------------
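The wheezy and jessie probe loops above repeat the same handful of checks once per second; the core of the partial-qualified-name case is a dig lookup over UDP and over TCP that relies on the pod's resolv.conf search list to expand the short name:

  dig +notcp +noall +answer +search kubernetes.default A
  dig +tcp   +noall +answer +search kubernetes.default A
  dig +notcp +noall +answer +search kubernetes.default.svc A
  dig +tcp   +noall +answer +search kubernetes.default.svc A
  # a non-empty answer section writes an OK marker into /results, which the prober then collects
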
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:32.859: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-2263
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:250
STEP: deploying csi-hostpath driver
Jan 11 19:57:33.689: INFO: creating *v1.ServiceAccount: provisioning-2263/csi-attacher
Jan 11 19:57:33.778: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-2263
Jan 11 19:57:33.778: INFO: Define cluster role external-attacher-runner-provisioning-2263
Jan 11 19:57:33.868: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-2263
Jan 11 19:57:33.957: INFO: creating *v1.Role: provisioning-2263/external-attacher-cfg-provisioning-2263
Jan 11 19:57:34.047: INFO: creating *v1.RoleBinding: provisioning-2263/csi-attacher-role-cfg
Jan 11 19:57:34.137: INFO: creating *v1.ServiceAccount: provisioning-2263/csi-provisioner
Jan 11 19:57:34.226: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-2263
Jan 11 19:57:34.226: INFO: Define cluster role external-provisioner-runner-provisioning-2263
Jan 11 19:57:34.316: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-2263
Jan 11 19:57:34.405: INFO: creating *v1.Role: provisioning-2263/external-provisioner-cfg-provisioning-2263
Jan 11 19:57:34.495: INFO: creating *v1.RoleBinding: provisioning-2263/csi-provisioner-role-cfg
Jan 11 19:57:34.586: INFO: creating *v1.ServiceAccount: provisioning-2263/csi-snapshotter
Jan 11 19:57:34.676: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-2263
Jan 11 19:57:34.676: INFO: Define cluster role external-snapshotter-runner-provisioning-2263
Jan 11 19:57:34.765: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-2263
Jan 11 19:57:34.855: INFO: creating *v1.Role: provisioning-2263/external-snapshotter-leaderelection-provisioning-2263
Jan 11 19:57:34.945: INFO: creating *v1.RoleBinding: provisioning-2263/external-snapshotter-leaderelection
Jan 11 19:57:35.035: INFO: creating *v1.ServiceAccount: provisioning-2263/csi-resizer
Jan 11 19:57:35.125: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-2263
Jan 11 19:57:35.125: INFO: Define cluster role external-resizer-runner-provisioning-2263
Jan 11 19:57:35.215: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-2263
Jan 11 19:57:35.305: INFO: creating *v1.Role: provisioning-2263/external-resizer-cfg-provisioning-2263
Jan 11 19:57:35.394: INFO: creating *v1.RoleBinding: provisioning-2263/csi-resizer-role-cfg
Jan 11 19:57:35.484: INFO: creating *v1.Service: provisioning-2263/csi-hostpath-attacher
Jan 11 19:57:35.577: INFO: creating *v1.StatefulSet: provisioning-2263/csi-hostpath-attacher
Jan 11 19:57:35.667: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-2263
Jan 11 19:57:35.757: INFO: creating *v1.Service: provisioning-2263/csi-hostpathplugin
Jan 11 19:57:35.850: INFO: creating *v1.StatefulSet: provisioning-2263/csi-hostpathplugin
Jan 11 19:57:35.940: INFO: creating *v1.Service: provisioning-2263/csi-hostpath-provisioner
Jan 11 19:57:36.033: INFO: creating *v1.StatefulSet: provisioning-2263/csi-hostpath-provisioner
Jan 11 19:57:36.122: INFO: creating *v1.Service: provisioning-2263/csi-hostpath-resizer
Jan 11 19:57:36.215: INFO: creating *v1.StatefulSet: provisioning-2263/csi-hostpath-resizer
Jan 11 19:57:36.305: INFO: creating *v1.Service: provisioning-2263/csi-snapshotter
Jan 11 19:57:36.400: INFO: creating *v1.StatefulSet: provisioning-2263/csi-snapshotter
Jan 11 19:57:36.490: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-2263
Jan 11 19:57:36.580: INFO: Test running for native CSI Driver, not checking metrics
Jan 11 19:57:36.580: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass provisioning-2263-csi-hostpath-provisioning-2263-scpmr4x
STEP: creating a claim
Jan 11 19:57:36.669: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 11 19:57:36.759: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathqwsk4] to have phase Bound
Jan 11 19:57:36.849: INFO: PersistentVolumeClaim csi-hostpathqwsk4 found but phase is Pending instead of Bound.
Jan 11 19:57:38.939: INFO: PersistentVolumeClaim csi-hostpathqwsk4 found and phase=Bound (2.179333622s)
STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-kmnl
STEP: Checking for subpath error in container status
Jan 11 19:57:51.399: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-kmnl" in namespace "provisioning-2263"
Jan 11 19:57:51.489: INFO: Wait up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-kmnl" to be fully deleted
STEP: Deleting pod
Jan 11 19:57:59.669: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-kmnl" in namespace "provisioning-2263"
STEP: Deleting pvc
Jan 11 19:57:59.758: INFO: Deleting PersistentVolumeClaim "csi-hostpathqwsk4"
Jan 11 19:57:59.848: INFO: Waiting up to 5m0s for PersistentVolume pvc-b57c3a0d-f46b-43ac-ab2c-81c39311820e to get deleted
Jan 11 19:57:59.937: INFO: PersistentVolume pvc-b57c3a0d-f46b-43ac-ab2c-81c39311820e was removed
STEP: Deleting sc
STEP: uninstalling csi-hostpath driver
Jan 11 19:58:00.028: INFO: deleting *v1.ServiceAccount: provisioning-2263/csi-attacher
Jan 11 19:58:00.119: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-2263
Jan 11 19:58:00.209: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-2263
Jan 11 19:58:00.300: INFO: deleting *v1.Role: provisioning-2263/external-attacher-cfg-provisioning-2263
Jan 11 19:58:00.391: INFO: deleting *v1.RoleBinding: provisioning-2263/csi-attacher-role-cfg
Jan 11 19:58:00.483: INFO: deleting *v1.ServiceAccount: provisioning-2263/csi-provisioner
Jan 11 19:58:00.574: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-2263
Jan 11 19:58:00.665: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-2263
Jan 11 19:58:00.756: INFO: deleting *v1.Role: provisioning-2263/external-provisioner-cfg-provisioning-2263
Jan 11 19:58:00.847: INFO: deleting *v1.RoleBinding: provisioning-2263/csi-provisioner-role-cfg
Jan 11 19:58:00.938: INFO: deleting *v1.ServiceAccount: provisioning-2263/csi-snapshotter
Jan 11 19:58:01.028: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-2263
Jan 11 19:58:01.119: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-2263
Jan 11 19:58:01.210: INFO: deleting *v1.Role: provisioning-2263/external-snapshotter-leaderelection-provisioning-2263
Jan 11 19:58:01.301: INFO: deleting *v1.RoleBinding: provisioning-2263/external-snapshotter-leaderelection
Jan 11 19:58:01.393: INFO: deleting *v1.ServiceAccount: provisioning-2263/csi-resizer
Jan 11 19:58:01.485: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-2263
Jan 11 19:58:01.576: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-2263
Jan 11 19:58:01.666: INFO: deleting *v1.Role: provisioning-2263/external-resizer-cfg-provisioning-2263
Jan 11 19:58:01.757: INFO: deleting *v1.RoleBinding: provisioning-2263/csi-resizer-role-cfg
Jan 11 19:58:01.848: INFO: deleting *v1.Service: provisioning-2263/csi-hostpath-attacher
Jan 11 19:58:01.942: INFO: deleting *v1.StatefulSet: provisioning-2263/csi-hostpath-attacher
Jan 11 19:58:02.033: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-2263
Jan 11 19:58:02.124: INFO: deleting *v1.Service: provisioning-2263/csi-hostpathplugin
Jan 11 19:58:02.218: INFO: deleting *v1.StatefulSet: provisioning-2263/csi-hostpathplugin
Jan 11 19:58:02.309: INFO: deleting *v1.Service: provisioning-2263/csi-hostpath-provisioner
Jan 11 19:58:02.403: INFO: deleting *v1.StatefulSet: provisioning-2263/csi-hostpath-provisioner
Jan 11 19:58:02.494: INFO: deleting *v1.Service: provisioning-2263/csi-hostpath-resizer
Jan 11 19:58:02.588: INFO: deleting *v1.StatefulSet: provisioning-2263/csi-hostpath-resizer
Jan 11 19:58:02.678: INFO: deleting *v1.Service: provisioning-2263/csi-snapshotter
Jan 11 19:58:02.774: INFO: deleting *v1.StatefulSet: provisioning-2263/csi-snapshotter
Jan 11 19:58:02.864: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-2263
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:58:02.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2263" for this suite.
Jan 11 19:58:09.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:58:12.620: INFO: namespace provisioning-2263 deletion completed in 9.574716929s


• [SLOW TEST:39.761 seconds]
[sig-storage] CSI Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:250
------------------------------
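The claim-bind wait at 19:57:36-19:57:38 above polls the PVC phase until it reports Bound; a hand-run equivalent using the names from the log (illustrative only):

  kubectl -n provisioning-2263 get pvc csi-hostpathqwsk4 -o jsonpath='{.status.phase}'
  # expected output once dynamic provisioning completes: Bound
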
SS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:58.442: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename port-forwarding
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-4259
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support a client that connects, sends DATA, and disconnects
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
STEP: Creating the target pod
STEP: Running 'kubectl port-forward'
Jan 11 19:58:05.356: INFO: starting port-forward command and streaming output
Jan 11 19:58:05.356: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config port-forward --namespace=port-forwarding-4259 pfpod :80'
Jan 11 19:58:05.357: INFO: reading from `kubectl port-forward` command's stdout
STEP: Dialing the local port
STEP: Reading data from the local port
STEP: Waiting for the target pod to stop running
Jan 11 19:58:08.011: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-4259" to be "container terminated"
Jan 11 19:58:08.100: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 89.792552ms
Jan 11 19:58:10.191: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.180703768s
Jan 11 19:58:10.191: INFO: Pod "pfpod" satisfied condition "container terminated"
STEP: Verifying logs
STEP: Closing the connection to the local port
[AfterEach] [sig-cli] Kubectl Port forwarding
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:58:10.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "port-forwarding-4259" for this suite.
Jan 11 19:58:24.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:58:27.978: INFO: namespace port-forwarding-4259 deletion completed in 17.595533451s


• [SLOW TEST:29.536 seconds]
[sig-cli] Kubectl Port forwarding
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:441
    that expects NO client request
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:451
      should support a client that connects, sends DATA, and disconnects
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
------------------------------
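The port-forward step at 19:58:05 above uses the ':80' form so kubectl picks a free local port itself; outside the framework the same setup looks like this (the local port is whatever kubectl prints):

  kubectl --namespace port-forwarding-4259 port-forward pfpod :80
  # kubectl reports "Forwarding from 127.0.0.1:<local-port> -> 80";
  # the test then dials that local port, sends its data, reads from it and disconnects
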
SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:58:12.625: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-8164
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239
Jan 11 19:58:13.264: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path
Jan 11 19:58:13.355: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-hostpath-qt8t
STEP: Checking for subpath error in container status
Jan 11 19:58:17.628: INFO: Deleting pod "pod-subpath-test-hostpath-qt8t" in namespace "provisioning-8164"
Jan 11 19:58:17.720: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-qt8t" to be fully deleted
STEP: Deleting pod
Jan 11 19:58:23.899: INFO: Deleting pod "pod-subpath-test-hostpath-qt8t" in namespace "provisioning-8164"
Jan 11 19:58:23.989: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:58:23.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8164" for this suite.
Jan 11 19:58:30.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:58:33.658: INFO: namespace provisioning-8164 deletion completed in 9.578480743s


• [SLOW TEST:21.032 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should fail if subpath file is outside the volume [Slow][LinuxOnly]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:07.839: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7004
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name s-test-opt-del-a0d74a4e-0199-4839-a4ee-561ec2886873
STEP: Creating secret with name s-test-opt-upd-ed3f7f14-9e39-4827-962a-c4ba615064be
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a0d74a4e-0199-4839-a4ee-561ec2886873
STEP: Updating secret s-test-opt-upd-ed3f7f14-9e39-4827-962a-c4ba615064be
STEP: Creating secret with name s-test-opt-create-30e47fd5-369c-4f09-9296-27664b035368
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:58:23.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7004" for this suite.
Jan 11 19:58:35.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:58:38.960: INFO: namespace projected-7004 deletion completed in 15.59335757s


• [SLOW TEST:91.121 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
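The projected-secret test above checks that deleting, updating and creating optional secrets is eventually reflected in the projected volume. A heavily simplified way to poke at the same behaviour by hand; the data key, new value and mount path below are placeholders, not the values the framework uses:

  kubectl -n projected-7004 patch secret s-test-opt-upd-ed3f7f14-9e39-4827-962a-c4ba615064be \
    -p '{"data":{"data-1":"dmFsdWUtMg=="}}'      # "value-2" in base64; illustrative key/value
  kubectl -n projected-7004 exec <pod-name> -- cat <mount-path>/data-1
  # the kubelet resyncs secret volumes periodically, so the new value appears after a short delay
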
SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:56:15.319: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-7446
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not expand volume if resizingOnDriver=off, resizingOnSC=on
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:449
STEP: deploying csi mock driver
Jan 11 19:56:16.144: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7446/csi-attacher
Jan 11 19:56:16.234: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7446
Jan 11 19:56:16.234: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7446
Jan 11 19:56:16.324: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7446
Jan 11 19:56:16.414: INFO: creating *v1.Role: csi-mock-volumes-7446/external-attacher-cfg-csi-mock-volumes-7446
Jan 11 19:56:16.504: INFO: creating *v1.RoleBinding: csi-mock-volumes-7446/csi-attacher-role-cfg
Jan 11 19:56:16.594: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7446/csi-provisioner
Jan 11 19:56:16.684: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7446
Jan 11 19:56:16.684: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7446
Jan 11 19:56:16.773: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7446
Jan 11 19:56:16.863: INFO: creating *v1.Role: csi-mock-volumes-7446/external-provisioner-cfg-csi-mock-volumes-7446
Jan 11 19:56:16.953: INFO: creating *v1.RoleBinding: csi-mock-volumes-7446/csi-provisioner-role-cfg
Jan 11 19:56:17.043: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7446/csi-resizer
Jan 11 19:56:17.133: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7446
Jan 11 19:56:17.133: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7446
Jan 11 19:56:17.223: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7446
Jan 11 19:56:17.313: INFO: creating *v1.Role: csi-mock-volumes-7446/external-resizer-cfg-csi-mock-volumes-7446
Jan 11 19:56:17.403: INFO: creating *v1.RoleBinding: csi-mock-volumes-7446/csi-resizer-role-cfg
Jan 11 19:56:17.493: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7446/csi-mock
Jan 11 19:56:17.582: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7446
Jan 11 19:56:17.672: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7446
Jan 11 19:56:17.762: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7446
Jan 11 19:56:17.851: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7446
Jan 11 19:56:17.941: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7446
Jan 11 19:56:18.031: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7446
Jan 11 19:56:18.121: INFO: creating *v1.StatefulSet: csi-mock-volumes-7446/csi-mockplugin
Jan 11 19:56:18.212: INFO: creating *v1.StatefulSet: csi-mock-volumes-7446/csi-mockplugin-attacher
STEP: Creating pod
Jan 11 19:56:18.481: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 11 19:56:18.573: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-6d2f4] to have phase Bound
Jan 11 19:56:18.663: INFO: PersistentVolumeClaim pvc-6d2f4 found but phase is Pending instead of Bound.
Jan 11 19:56:20.753: INFO: PersistentVolumeClaim pvc-6d2f4 found and phase=Bound (2.179732442s)
STEP: Expanding current pvc
STEP: Deleting pod pvc-volume-tester-mqr5w
Jan 11 19:58:31.652: INFO: Deleting pod "pvc-volume-tester-mqr5w" in namespace "csi-mock-volumes-7446"
Jan 11 19:58:31.745: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mqr5w" to be fully deleted
STEP: Deleting claim pvc-6d2f4
Jan 11 19:58:44.106: INFO: Waiting up to 2m0s for PersistentVolume pvc-3a48ccad-953c-4eda-b216-4b39c130a247 to get deleted
Jan 11 19:58:44.197: INFO: PersistentVolume pvc-3a48ccad-953c-4eda-b216-4b39c130a247 was removed
STEP: Deleting storageclass csi-mock-volumes-7446-sc
STEP: Cleaning up resources
STEP: uninstalling csi mock driver
Jan 11 19:58:44.288: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7446/csi-attacher
Jan 11 19:58:44.379: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7446
Jan 11 19:58:44.470: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7446
Jan 11 19:58:44.561: INFO: deleting *v1.Role: csi-mock-volumes-7446/external-attacher-cfg-csi-mock-volumes-7446
Jan 11 19:58:44.652: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7446/csi-attacher-role-cfg
Jan 11 19:58:44.744: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7446/csi-provisioner
Jan 11 19:58:44.835: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7446
Jan 11 19:58:44.926: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7446
Jan 11 19:58:45.017: INFO: deleting *v1.Role: csi-mock-volumes-7446/external-provisioner-cfg-csi-mock-volumes-7446
Jan 11 19:58:45.109: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7446/csi-provisioner-role-cfg
Jan 11 19:58:45.201: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7446/csi-resizer
Jan 11 19:58:45.292: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7446
Jan 11 19:58:45.384: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7446
Jan 11 19:58:45.475: INFO: deleting *v1.Role: csi-mock-volumes-7446/external-resizer-cfg-csi-mock-volumes-7446
Jan 11 19:58:45.566: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7446/csi-resizer-role-cfg
Jan 11 19:58:45.657: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7446/csi-mock
Jan 11 19:58:45.748: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7446
Jan 11 19:58:45.840: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7446
Jan 11 19:58:45.931: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7446
Jan 11 19:58:46.023: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7446
Jan 11 19:58:46.114: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7446
Jan 11 19:58:46.207: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7446
Jan 11 19:58:46.298: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7446/csi-mockplugin
Jan 11 19:58:46.390: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7446/csi-mockplugin-attacher
[AfterEach] [sig-storage] CSI mock volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:58:46.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "csi-mock-volumes-7446" for this suite.
Jan 11 19:58:58.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:59:02.164: INFO: namespace csi-mock-volumes-7446 deletion completed in 15.592075085s


• [SLOW TEST:166.845 seconds]
[sig-storage] CSI mock volume
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:420
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:449
------------------------------
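In the expansion test above the claim is grown after the pod is running ("Expanding current pvc"), and with resizing switched off on the mock driver the expansion is expected not to take effect. Requesting an expansion by hand is a patch of the claim's storage request; the size below is an arbitrary example, the test's actual values are not in the log:

  kubectl -n csi-mock-volumes-7446 patch pvc pvc-6d2f4 \
    -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
  kubectl -n csi-mock-volumes-7446 get pvc pvc-6d2f4 -o jsonpath='{.status.capacity.storage}'
  # with resizingOnDriver=off the reported capacity should stay at the originally provisioned size
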
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:53:51.104: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6933
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:509
STEP: Creating configMap with name cm-test-opt-create-7a301c1d-2bc3-4af5-8327-f04f215f31e1
STEP: Creating the pod
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:58:52.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6933" for this suite.
Jan 11 19:59:04.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:59:07.972: INFO: namespace projected-6933 deletion completed in 15.587046149s


• [SLOW TEST:316.868 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:509
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:58:27.987: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-2918
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:50
[It] should allow deletion of pod with invalid volume : configmap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:56
Jan 11 19:58:58.742: INFO: Deleting pod "pv-2918"/"pod-ephm-test-projected-q44n"
Jan 11 19:58:58.742: INFO: Deleting pod "pod-ephm-test-projected-q44n" in namespace "pv-2918"
Jan 11 19:58:58.834: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-q44n" to be fully deleted
[AfterEach] [sig-storage] Ephemeralstorage
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:59:09.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-2918" for this suite.
Jan 11 19:59:15.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:59:18.695: INFO: namespace pv-2918 deletion completed in 9.589607642s


• [SLOW TEST:50.709 seconds]
[sig-storage] Ephemeralstorage
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:54
    should allow deletion of pod with invalid volume : configmap
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:56
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:59:02.174: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8482
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod liveness-091ce5ed-ffaa-4724-8069-597964c3217d in namespace container-probe-8482
Jan 11 19:59:05.088: INFO: Started pod liveness-091ce5ed-ffaa-4724-8069-597964c3217d in namespace container-probe-8482
STEP: checking the pod's current state and verifying that restartCount is present
Jan 11 19:59:05.178: INFO: Initial restart count of pod liveness-091ce5ed-ffaa-4724-8069-597964c3217d is 0
Jan 11 19:59:28.260: INFO: Restart count of pod container-probe-8482/liveness-091ce5ed-ffaa-4724-8069-597964c3217d is now 1 (23.081941487s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:59:28.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8482" for this suite.
Jan 11 19:59:34.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:59:38.038: INFO: namespace container-probe-8482 deletion completed in 9.592248624s


• [SLOW TEST:35.864 seconds]
[k8s.io] Probing container
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
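The restart assertion in the liveness-probe test above (restart count going from 0 to 1 once the /healthz probe starts failing) can be checked with a plain jsonpath query against the pod status; using the names from the log:

  kubectl -n container-probe-8482 get pod liveness-091ce5ed-ffaa-4724-8069-597964c3217d \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'
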
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:59:38.041: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1260
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 11 19:59:40.043: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:59:40.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1260" for this suite.
Jan 11 19:59:46.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:59:49.907: INFO: namespace container-runtime-1260 deletion completed in 9.58451697s


• [SLOW TEST:11.866 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  blackbox test
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
    on terminated container
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
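With TerminationMessagePolicy FallbackToLogsOnError and nothing written to /dev/termination-log, the kubelet fills the termination message from the tail of the container's log, which is why the test above expects "DONE" to show up there. One way to inspect that field by hand; the pod name is a placeholder, since the framework-generated name is not in the log:

  kubectl -n container-runtime-1260 get pod <pod-name> \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
  # expected content in this test: DONE
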
SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:59:18.698: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename tables
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-6123
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return pod details
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:51
Jan 11 19:59:19.447: INFO: Creating pod pod-1
Jan 11 19:59:19.631: INFO: Table: &v1.Table{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"/api/v1/namespaces/tables-6123/pods/pod-1", ResourceVersion:"64302", Continue:"", RemainingItemCount:(*int64)(nil)}, ColumnDefinitions:[]v1.TableColumnDefinition{v1.TableColumnDefinition{Name:"Name", Type:"string", Format:"name", Description:"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names", Priority:0}, v1.TableColumnDefinition{Name:"Ready", Type:"string", Format:"", Description:"The aggregate readiness state of this pod for accepting traffic.", Priority:0}, v1.TableColumnDefinition{Name:"Status", Type:"string", Format:"", Description:"The aggregate status of the containers in this pod.", Priority:0}, v1.TableColumnDefinition{Name:"Restarts", Type:"integer", Format:"", Description:"The number of times the containers in this pod have been restarted.", Priority:0}, v1.TableColumnDefinition{Name:"Age", Type:"string", Format:"", Description:"CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", Priority:0}, v1.TableColumnDefinition{Name:"IP", Type:"string", Format:"", Description:"IP address allocated to the pod. Routable at least within the cluster. Empty if not yet allocated.", Priority:1}, v1.TableColumnDefinition{Name:"Node", Type:"string", Format:"", Description:"NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements.", Priority:1}, v1.TableColumnDefinition{Name:"Nominated Node", Type:"string", Format:"", Description:"nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. Scheduler may decide to place the pod elsewhere if other nodes become available sooner. Scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different than PodSpec.nodeName when the pod is scheduled.", Priority:1}, v1.TableColumnDefinition{Name:"Readiness Gates", Type:"string", Format:"", Description:"If specified, all readiness gates will be evaluated for pod readiness. 
A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to \"True\" More info: https://git.k8s.io/enhancements/keps/sig-network/0007-pod-ready%2B%2B.md", Priority:1}}, Rows:[]v1.TableRow{v1.TableRow{Cells:[]interface {}{"pod-1", "0/1", "ContainerCreating", 0, "0s", "", "ip-10-250-27-25.ec2.internal", "", ""}, Conditions:[]v1.TableRowCondition(nil), Object:runtime.RawExtension{Raw:[]uint8{0x7b, 0x22, 0x6b, 0x69, 0x6e, 0x64, 0x22, 0x3a, 0x22, 0x50, 0x61, 0x72, 0x74, 0x69, 0x61, 0x6c, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x22, 0x2c, 0x22, 0x61, 0x70, 0x69, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x22, 0x3a, 0x22, 0x6d, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x38, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x76, 0x31, 0x62, 0x65, 0x74, 0x61, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x6e, 0x61, 0x6d, 0x65, 0x22, 0x3a, 0x22, 0x70, 0x6f, 0x64, 0x2d, 0x31, 0x22, 0x2c, 0x22, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x22, 0x3a, 0x22, 0x74, 0x61, 0x62, 0x6c, 0x65, 0x73, 0x2d, 0x36, 0x31, 0x32, 0x33, 0x22, 0x2c, 0x22, 0x73, 0x65, 0x6c, 0x66, 0x4c, 0x69, 0x6e, 0x6b, 0x22, 0x3a, 0x22, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x73, 0x2f, 0x74, 0x61, 0x62, 0x6c, 0x65, 0x73, 0x2d, 0x36, 0x31, 0x32, 0x33, 0x2f, 0x70, 0x6f, 0x64, 0x73, 0x2f, 0x70, 0x6f, 0x64, 0x2d, 0x31, 0x22, 0x2c, 0x22, 0x75, 0x69, 0x64, 0x22, 0x3a, 0x22, 0x38, 0x33, 0x39, 0x66, 0x32, 0x62, 0x32, 0x30, 0x2d, 0x36, 0x36, 0x66, 0x38, 0x2d, 0x34, 0x37, 0x38, 0x35, 0x2d, 0x39, 0x34, 0x63, 0x38, 0x2d, 0x62, 0x36, 0x64, 0x62, 0x33, 0x38, 0x31, 0x35, 0x37, 0x64, 0x65, 0x31, 0x22, 0x2c, 0x22, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x22, 0x3a, 0x22, 0x36, 0x34, 0x33, 0x30, 0x32, 0x22, 0x2c, 0x22, 0x63, 0x72, 0x65, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x22, 0x3a, 0x22, 0x32, 0x30, 0x32, 0x30, 0x2d, 0x30, 0x31, 0x2d, 0x31, 0x31, 0x54, 0x31, 0x39, 0x3a, 0x35, 0x39, 0x3a, 0x31, 0x39, 0x5a, 0x22, 0x2c, 0x22, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x70, 0x73, 0x70, 0x22, 0x3a, 0x22, 0x65, 0x32, 0x65, 0x2d, 0x74, 0x65, 0x73, 0x74, 0x2d, 0x70, 0x72, 0x69, 0x76, 0x69, 0x6c, 0x65, 0x67, 0x65, 0x64, 0x2d, 0x70, 0x73, 0x70, 0x22, 0x7d, 0x7d, 0x7d}, Object:runtime.Object(nil)}}}}
Jan 11 19:59:19.631: INFO: Table:
NAME    READY   STATUS              RESTARTS   AGE
pod-1   0/1     ContainerCreating   0          0s

[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:59:19.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6123" for this suite.
Jan 11 19:59:47.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:59:51.311: INFO: namespace tables-6123 deletion completed in 31.588516087s


• [SLOW TEST:32.613 seconds]
[sig-api-machinery] Servers with support for Table transformation
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return pod details
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:51
------------------------------
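The Table dump at 19:59:19 above is the server-side Table representation of the pod (meta.k8s.io/v1beta1, as visible in the raw PartialObjectMetadata bytes). Outside the e2e framework the same representation can be requested by setting the Accept header, for example through kubectl proxy; this is a sketch of the standard pattern, not what the test binary itself does:

  kubectl proxy --port=8001 &
  curl -s -H 'Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io' \
    http://127.0.0.1:8001/api/v1/namespaces/tables-6123/pods/pod-1
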
SSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:59:07.976: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8617
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 19:59:09.250: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:59:12.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8617" for this suite.
Jan 11 19:59:56.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 19:59:59.937: INFO: namespace pods-8617 deletion completed in 47.587859194s


• [SLOW TEST:51.962 seconds]
[k8s.io] Pods
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:59:49.911: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3562
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 19:59:50.734: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 11 19:59:52.914: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
Jan 11 19:59:55.632: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-3562 /apis/apps/v1/namespaces/deployment-3562/deployments/test-cleanup-deployment de42f01a-50df-4262-a8d1-b653e892fcb6 64599 1 2020-01-11 19:59:53 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00332d538  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-11 19:59:53 +0000 UTC,LastTransitionTime:2020-01-11 19:59:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-65db99849b" has successfully progressed.,LastUpdateTime:2020-01-11 19:59:54 +0000 UTC,LastTransitionTime:2020-01-11 19:59:53 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 11 19:59:55.722: INFO: New ReplicaSet "test-cleanup-deployment-65db99849b" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-65db99849b  deployment-3562 /apis/apps/v1/namespaces/deployment-3562/replicasets/test-cleanup-deployment-65db99849b e3621531-7602-4d09-a54f-b726cac21d67 64592 1 2020-01-11 19:59:53 +0000 UTC   map[name:cleanup-pod pod-template-hash:65db99849b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment de42f01a-50df-4262-a8d1-b653e892fcb6 0xc00332d937 0xc00332d938}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 65db99849b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:65db99849b] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00332d998  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 11 19:59:55.812: INFO: Pod "test-cleanup-deployment-65db99849b-x66rp" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-65db99849b-x66rp test-cleanup-deployment-65db99849b- deployment-3562 /api/v1/namespaces/deployment-3562/pods/test-cleanup-deployment-65db99849b-x66rp e6519e81-5315-4c01-84fa-7088a2fc9c9b 64591 0 2020-01-11 19:59:53 +0000 UTC   map[name:cleanup-pod pod-template-hash:65db99849b] map[cni.projectcalico.org/podIP:100.64.1.26/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-cleanup-deployment-65db99849b e3621531-7602-4d09-a54f-b726cac21d67 0xc00332dd37 0xc00332dd38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gd8pr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gd8pr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gd8pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:59:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 
19:59:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:59:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 19:59:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.26,StartTime:2020-01-11 19:59:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 19:59:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://886ca426c7dce429880fa5a365afd86b5d1436d8344e8992c10ffd67c636fb9e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:59:55.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3562" for this suite.
Jan 11 20:00:02.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:05.697: INFO: namespace deployment-3562 deletion completed in 9.793302077s


• [SLOW TEST:15.786 seconds]
[sig-apps] Deployment
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
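
The Deployment dump above uses the default RollingUpdate strategy (maxSurge and maxUnavailable both 25%) with RevisionHistoryLimit set to 0, which is why the old ReplicaSet is deleted as soon as the rollout completes. Below is a simplified sketch of how those percentages resolve against the replica count (surge rounds up, unavailable rounds down); it is an illustration, not the deployment controller's code.

```go
package main

import (
	"fmt"
	"math"
)

// resolvePercent converts a percentage strategy value into an absolute pod
// count for a given replica total. Surge rounds up, unavailable rounds down.
func resolvePercent(percent float64, replicas int, roundUp bool) int {
	v := percent / 100 * float64(replicas)
	if roundUp {
		return int(math.Ceil(v))
	}
	return int(math.Floor(v))
}

func main() {
	replicas := 1 // test-cleanup-deployment runs a single replica
	maxSurge := resolvePercent(25, replicas, true)
	maxUnavailable := resolvePercent(25, replicas, false)
	fmt.Printf("replicas=%d maxSurge=%d maxUnavailable=%d\n", replicas, maxSurge, maxUnavailable)
	// Prints maxSurge=1, maxUnavailable=0, which matches the
	// deployment.kubernetes.io/max-replicas:2 annotation on the new ReplicaSet above.
}
```
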
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:59:51.323: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2696
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:59:52.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2696" for this suite.
Jan 11 20:00:04.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:07.954: INFO: namespace pods-2696 deletion completed in 15.716679181s


• [SLOW TEST:16.631 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
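
A simplified re-implementation of the QoS classification this spec asserts: matching cpu and memory requests and limits yield Guaranteed, nothing set yields BestEffort, anything in between is Burstable. Illustrative only, not the kubelet's code; the resource strings in main are example values.

```go
package main

import "fmt"

// containerResources holds one container's requested and limited cpu/memory
// as strings (e.g. "100m", "100Mi"); an empty string means unset.
type containerResources struct {
	Requests map[string]string
	Limits   map[string]string
}

// qosClass applies the simplified rule: Guaranteed when every container's cpu
// and memory limits equal its requests, BestEffort when nothing is set,
// Burstable otherwise.
func qosClass(containers []containerResources) string {
	anySet := false
	guaranteed := true
	for _, c := range containers {
		for _, res := range []string{"cpu", "memory"} {
			req, lim := c.Requests[res], c.Limits[res]
			if req != "" || lim != "" {
				anySet = true
			}
			if req == "" {
				req = lim // the API server defaults requests to limits when only limits are set
			}
			if lim == "" || req != lim {
				guaranteed = false
			}
		}
	}
	switch {
	case !anySet:
		return "BestEffort"
	case guaranteed:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	// Matching requests and limits for cpu and memory -> Guaranteed,
	// which is what the test verifies on the submitted pod.
	pod := []containerResources{{
		Requests: map[string]string{"cpu": "100m", "memory": "100Mi"},
		Limits:   map[string]string{"cpu": "100m", "memory": "100Mi"},
	}}
	fmt.Println(qosClass(pod))
}
```
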
------------------------------
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:58:33.681: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9811
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] StatefulSet with pod affinity [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:381
STEP: Setting up local volumes on node "ip-10-250-27-25.ec2.internal"
STEP: Initializing test volumes
Jan 11 19:58:37.504: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6eec503b-35d6-4f2f-9916-d1282b0a455c'
Jan 11 19:58:38.830: INFO: stderr: ""
Jan 11 19:58:38.831: INFO: stdout: ""
Jan 11 19:58:38.831: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-eadc341c-9155-43f9-b7a9-ead525e16373'
Jan 11 19:58:40.136: INFO: stderr: ""
Jan 11 19:58:40.136: INFO: stdout: ""
Jan 11 19:58:40.136: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f6c8a93b-55c4-44b0-ba42-d00be33748c2'
Jan 11 19:58:41.440: INFO: stderr: ""
Jan 11 19:58:41.440: INFO: stdout: ""
Jan 11 19:58:41.440: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a6b481da-3cd8-4c3b-ab11-f6ea7276c594'
Jan 11 19:58:42.769: INFO: stderr: ""
Jan 11 19:58:42.769: INFO: stdout: ""
Jan 11 19:58:42.769: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-801a02e0-ea67-4b71-a9a2-9590736a4ea9'
Jan 11 19:58:44.059: INFO: stderr: ""
Jan 11 19:58:44.059: INFO: stdout: ""
Jan 11 19:58:44.059: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-39182fcc-41a3-43c6-b9bc-b64ee304ddaf'
Jan 11 19:58:45.394: INFO: stderr: ""
Jan 11 19:58:45.394: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 19:58:45.394: INFO: Creating a PV followed by a PVC
Jan 11 19:58:45.573: INFO: Creating a PV followed by a PVC
Jan 11 19:58:45.752: INFO: Creating a PV followed by a PVC
Jan 11 19:58:45.932: INFO: Creating a PV followed by a PVC
Jan 11 19:58:46.115: INFO: Creating a PV followed by a PVC
Jan 11 19:58:46.294: INFO: Creating a PV followed by a PVC
STEP: Setting up local volumes on node "ip-10-250-7-77.ec2.internal"
STEP: Initializing test volumes
Jan 11 19:59:00.359: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f0f61c5a-c89b-48a2-84cf-ac1d6af2e459'
Jan 11 19:59:01.718: INFO: stderr: ""
Jan 11 19:59:01.718: INFO: stdout: ""
Jan 11 19:59:01.718: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-077efbcc-a609-4ee6-b47e-58db77be12a6'
Jan 11 19:59:02.997: INFO: stderr: ""
Jan 11 19:59:02.997: INFO: stdout: ""
Jan 11 19:59:02.997: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8173f07e-a61b-4456-952b-492d9566fda7'
Jan 11 19:59:04.337: INFO: stderr: ""
Jan 11 19:59:04.337: INFO: stdout: ""
Jan 11 19:59:04.337: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d6e601b6-bccf-42d4-ad19-c7e819cbd921'
Jan 11 19:59:05.656: INFO: stderr: ""
Jan 11 19:59:05.656: INFO: stdout: ""
Jan 11 19:59:05.656: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f2120278-48f4-438a-9b55-95dea00e0420'
Jan 11 19:59:07.029: INFO: stderr: ""
Jan 11 19:59:07.029: INFO: stdout: ""
Jan 11 19:59:07.029: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-00f2561e-3724-49c1-b488-fd9a8ce88404'
Jan 11 19:59:08.342: INFO: stderr: ""
Jan 11 19:59:08.342: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 19:59:08.342: INFO: Creating a PV followed by a PVC
Jan 11 19:59:08.524: INFO: Creating a PV followed by a PVC
Jan 11 19:59:08.703: INFO: Creating a PV followed by a PVC
Jan 11 19:59:08.882: INFO: Creating a PV followed by a PVC
Jan 11 19:59:09.061: INFO: Creating a PV followed by a PVC
Jan 11 19:59:09.241: INFO: Creating a PV followed by a PVC
[It] should use volumes on one node when pod management is parallel and pod has affinity
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:424
STEP: Creating a StatefulSet with pod affinity on nodes
Jan 11 19:59:21.211: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 11 19:59:31.301: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 19:59:31.301: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 19:59:31.301: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 19:59:31.391: INFO: Waiting up to 1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound
Jan 11 19:59:31.480: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (89.14863ms)
Jan 11 19:59:31.480: INFO: Waiting up to 1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound
Jan 11 19:59:31.570: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (89.413312ms)
Jan 11 19:59:31.570: INFO: Waiting up to 1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound
Jan 11 19:59:31.659: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (89.561092ms)
[AfterEach] StatefulSet with pod affinity [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:393
STEP: Cleaning up PVC and PV
Jan 11 19:59:31.659: INFO: Deleting PersistentVolumeClaim "pvc-vlr79"
Jan 11 19:59:31.750: INFO: Deleting PersistentVolume "local-pv4fmmh"
STEP: Cleaning up PVC and PV
Jan 11 19:59:31.841: INFO: Deleting PersistentVolumeClaim "pvc-dv82k"
Jan 11 19:59:31.931: INFO: Deleting PersistentVolume "local-pvcfztl"
STEP: Cleaning up PVC and PV
Jan 11 19:59:32.021: INFO: Deleting PersistentVolumeClaim "pvc-kk25k"
Jan 11 19:59:32.115: INFO: Deleting PersistentVolume "local-pv6f7fv"
STEP: Cleaning up PVC and PV
Jan 11 19:59:32.206: INFO: Deleting PersistentVolumeClaim "pvc-xhhqg"
Jan 11 19:59:32.296: INFO: Deleting PersistentVolume "local-pv4z4qq"
STEP: Cleaning up PVC and PV
Jan 11 19:59:32.386: INFO: Deleting PersistentVolumeClaim "pvc-tvvpp"
Jan 11 19:59:32.476: INFO: Deleting PersistentVolume "local-pv4892z"
STEP: Cleaning up PVC and PV
Jan 11 19:59:32.566: INFO: Deleting PersistentVolumeClaim "pvc-x77q8"
Jan 11 19:59:32.656: INFO: Deleting PersistentVolume "local-pvshzp9"
STEP: Removing the test directory
Jan 11 19:59:32.746: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6eec503b-35d6-4f2f-9916-d1282b0a455c'
Jan 11 19:59:34.034: INFO: stderr: ""
Jan 11 19:59:34.035: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:34.035: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-eadc341c-9155-43f9-b7a9-ead525e16373'
Jan 11 19:59:40.473: INFO: stderr: ""
Jan 11 19:59:40.473: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:40.473: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f6c8a93b-55c4-44b0-ba42-d00be33748c2'
Jan 11 19:59:41.810: INFO: stderr: ""
Jan 11 19:59:41.810: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:41.810: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a6b481da-3cd8-4c3b-ab11-f6ea7276c594'
Jan 11 19:59:43.077: INFO: stderr: ""
Jan 11 19:59:43.077: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:43.078: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-801a02e0-ea67-4b71-a9a2-9590736a4ea9'
Jan 11 19:59:44.375: INFO: stderr: ""
Jan 11 19:59:44.375: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:44.375: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-39182fcc-41a3-43c6-b9bc-b64ee304ddaf'
Jan 11 19:59:45.697: INFO: stderr: ""
Jan 11 19:59:45.697: INFO: stdout: ""
STEP: Cleaning up PVC and PV
Jan 11 19:59:45.697: INFO: Deleting PersistentVolumeClaim "pvc-6trmj"
Jan 11 19:59:45.787: INFO: Deleting PersistentVolume "local-pv25qfh"
STEP: Cleaning up PVC and PV
Jan 11 19:59:45.877: INFO: Deleting PersistentVolumeClaim "pvc-cgq59"
Jan 11 19:59:45.967: INFO: Deleting PersistentVolume "local-pvb97ld"
STEP: Cleaning up PVC and PV
Jan 11 19:59:46.058: INFO: Deleting PersistentVolumeClaim "pvc-q79vp"
Jan 11 19:59:46.147: INFO: Deleting PersistentVolume "local-pv6m2g5"
STEP: Cleaning up PVC and PV
Jan 11 19:59:46.238: INFO: Deleting PersistentVolumeClaim "pvc-qqcwt"
Jan 11 19:59:46.328: INFO: Deleting PersistentVolume "local-pvn4f78"
STEP: Cleaning up PVC and PV
Jan 11 19:59:46.419: INFO: Deleting PersistentVolumeClaim "pvc-fv8kv"
Jan 11 19:59:46.509: INFO: Deleting PersistentVolume "local-pvspcv4"
STEP: Cleaning up PVC and PV
Jan 11 19:59:46.599: INFO: Deleting PersistentVolumeClaim "pvc-r78vr"
Jan 11 19:59:46.689: INFO: Deleting PersistentVolume "local-pvvntcm"
STEP: Removing the test directory
Jan 11 19:59:46.779: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f0f61c5a-c89b-48a2-84cf-ac1d6af2e459'
Jan 11 19:59:48.056: INFO: stderr: ""
Jan 11 19:59:48.091: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:48.091: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-077efbcc-a609-4ee6-b47e-58db77be12a6'
Jan 11 19:59:49.406: INFO: stderr: ""
Jan 11 19:59:49.406: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:49.406: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8173f07e-a61b-4456-952b-492d9566fda7'
Jan 11 19:59:50.678: INFO: stderr: ""
Jan 11 19:59:50.678: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:50.678: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d6e601b6-bccf-42d4-ad19-c7e819cbd921'
Jan 11 19:59:51.989: INFO: stderr: ""
Jan 11 19:59:51.989: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:51.989: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f2120278-48f4-438a-9b55-95dea00e0420'
Jan 11 19:59:53.344: INFO: stderr: ""
Jan 11 19:59:53.344: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 19:59:53.344: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9811 hostexec-ip-10-250-7-77.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-00f2561e-3724-49c1-b488-fd9a8ce88404'
Jan 11 19:59:54.711: INFO: stderr: ""
Jan 11 19:59:54.711: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 19:59:54.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9811" for this suite.
Jan 11 20:00:07.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:10.464: INFO: namespace persistent-local-volumes-test-9811 deletion completed in 15.571433142s


• [SLOW TEST:96.783 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  StatefulSet with pod affinity [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:374
    should use volumes on one node when pod management is parallel and pod has affinity
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:424
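
The volume setup above works by exec'ing into a privileged hostexec pod and creating each /tmp/local-volume-test-* directory inside the host's mount namespace. A sketch of that invocation with os/exec; the kubeconfig path, namespace, and pod name are taken from the log, and the directory name is an example value.

```go
package main

import (
	"fmt"
	"os/exec"
)

// Run mkdir on the node through the hostexec pod, mirroring the kubectl
// commands logged during "Initializing test volumes".
func main() {
	dir := "/tmp/local-volume-test-example"
	cmd := exec.Command(
		"kubectl",
		"--kubeconfig=/tmp/tm/kubeconfig/shoot.config",
		"exec",
		"--namespace=persistent-local-volumes-test-9811",
		"hostexec-ip-10-250-27-25.ec2.internal",
		"--",
		"nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--",
		"sh", "-c", "mkdir -p "+dir,
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("out=%q err=%v\n", out, err)
}
```
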
------------------------------
SSSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:07.971: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-6607
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test override command
Jan 11 20:00:08.702: INFO: Waiting up to 5m0s for pod "client-containers-a081ece0-2c5d-4ce0-9424-0bbedbc12f2b" in namespace "containers-6607" to be "success or failure"
Jan 11 20:00:08.791: INFO: Pod "client-containers-a081ece0-2c5d-4ce0-9424-0bbedbc12f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.715408ms
Jan 11 20:00:10.881: INFO: Pod "client-containers-a081ece0-2c5d-4ce0-9424-0bbedbc12f2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179679145s
STEP: Saw pod success
Jan 11 20:00:10.881: INFO: Pod "client-containers-a081ece0-2c5d-4ce0-9424-0bbedbc12f2b" satisfied condition "success or failure"
Jan 11 20:00:10.971: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod client-containers-a081ece0-2c5d-4ce0-9424-0bbedbc12f2b container test-container: 
STEP: delete the pod
Jan 11 20:00:11.309: INFO: Waiting for pod client-containers-a081ece0-2c5d-4ce0-9424-0bbedbc12f2b to disappear
Jan 11 20:00:11.398: INFO: Pod client-containers-a081ece0-2c5d-4ce0-9424-0bbedbc12f2b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:11.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6607" for this suite.
Jan 11 20:00:17.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:21.197: INFO: namespace containers-6607 deletion completed in 9.707587335s


• [SLOW TEST:13.226 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
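
The rule this test exercises: a pod's Command replaces the image ENTRYPOINT and its Args replace the image CMD; if Command is set without Args, the image CMD is dropped as well. A small illustrative sketch of that resolution follows; the image defaults and command values are hypothetical.

```go
package main

import "fmt"

// effectiveCommand resolves what the container actually runs, given the image
// defaults and the pod spec overrides. Simplified illustration, not kubelet or
// container-runtime code.
func effectiveCommand(imageEntrypoint, imageCmd, command, args []string) []string {
	switch {
	case len(command) > 0 && len(args) > 0:
		return append(append([]string{}, command...), args...)
	case len(command) > 0:
		return command // image ENTRYPOINT and CMD are both ignored
	case len(args) > 0:
		return append(append([]string{}, imageEntrypoint...), args...)
	default:
		return append(append([]string{}, imageEntrypoint...), imageCmd...)
	}
}

func main() {
	// The conformance test sets only Command, so the image's default
	// entrypoint and cmd are both replaced.
	fmt.Println(effectiveCommand(
		[]string{"/entrypoint.sh"}, []string{"serve"}, // hypothetical image defaults
		[]string{"/agnhost", "entrypoint-tester"}, // hypothetical pod spec Command
		nil, // no Args
	))
}
```
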
------------------------------
SS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:58:38.964: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5504
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-5504
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a new StatefulSet
Jan 11 19:58:39.878: INFO: Found 1 stateful pods, waiting for 3
Jan 11 19:58:49.969: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 19:58:49.969: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 19:58:49.969: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 11 19:58:50.431: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 11 19:58:50.802: INFO: Updating stateful set ss2
Jan 11 19:58:50.982: INFO: Waiting for Pod statefulset-5504/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 11 19:59:01.163: INFO: Waiting for Pod statefulset-5504/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 11 19:59:11.438: INFO: Found 2 stateful pods, waiting for 3
Jan 11 19:59:21.529: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 19:59:21.529: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 19:59:21.529: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 11 19:59:21.899: INFO: Updating stateful set ss2
Jan 11 19:59:22.080: INFO: Waiting for Pod statefulset-5504/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 11 19:59:32.451: INFO: Updating stateful set ss2
Jan 11 19:59:32.631: INFO: Waiting for StatefulSet statefulset-5504/ss2 to complete update
Jan 11 19:59:32.631: INFO: Waiting for Pod statefulset-5504/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 11 19:59:42.812: INFO: Waiting for StatefulSet statefulset-5504/ss2 to complete update
Jan 11 19:59:42.812: INFO: Waiting for Pod statefulset-5504/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Jan 11 19:59:52.812: INFO: Deleting all statefulset in ns statefulset-5504
Jan 11 19:59:52.901: INFO: Scaling statefulset ss2 to 0
Jan 11 20:00:13.262: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 20:00:13.352: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:13.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5504" for this suite.
Jan 11 20:00:19.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:23.424: INFO: namespace statefulset-5504 deletion completed in 9.70920954s


• [SLOW TEST:104.460 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
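
The canary and phased steps above drive the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition are moved to the update revision, lower ordinals stay on the current one. A simplified sketch of that rule; the two revision hashes are taken from the log above, and which hash plays which role here is purely illustrative.

```go
package main

import "fmt"

// targetRevision applies the partition rule: ordinals at or above the
// partition get the update revision. Not the StatefulSet controller's code.
func targetRevision(ordinal, partition int, current, update string) string {
	if ordinal >= partition {
		return update
	}
	return current
}

func main() {
	current, update := "ss2-84f9d6bf57", "ss2-65c7964b94" // revision hashes from the log above (roles illustrative)
	for _, partition := range []int{3, 2, 0} { // 3: no update; 2: canary on ss2-2; 0: full rollout
		for ordinal := 0; ordinal < 3; ordinal++ {
			fmt.Printf("partition=%d ss2-%d -> %s\n", partition, ordinal,
				targetRevision(ordinal, partition, current, update))
		}
	}
}
```
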
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:05.709: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-8848
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: tmpfs]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-829fe966-fa48-42a8-aba3-128f643a6e51"
Jan 11 20:00:08.845: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8848 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-829fe966-fa48-42a8-aba3-128f643a6e51" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-829fe966-fa48-42a8-aba3-128f643a6e51" "/tmp/local-volume-test-829fe966-fa48-42a8-aba3-128f643a6e51"'
Jan 11 20:00:10.165: INFO: stderr: ""
Jan 11 20:00:10.165: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 20:00:10.165: INFO: Creating a PV followed by a PVC
Jan 11 20:00:10.347: INFO: Waiting for PV local-pvbmml5 to bind to PVC pvc-wrdpb
Jan 11 20:00:10.347: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-wrdpb] to have phase Bound
Jan 11 20:00:10.436: INFO: PersistentVolumeClaim pvc-wrdpb found and phase=Bound (89.359525ms)
Jan 11 20:00:10.436: INFO: Waiting up to 3m0s for PersistentVolume local-pvbmml5 to have phase Bound
Jan 11 20:00:10.526: INFO: PersistentVolume local-pvbmml5 found and phase=Bound (89.447827ms)
[BeforeEach] One pod requesting one prebound PVC
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Jan 11 20:00:13.154: INFO: pod "security-context-c1d21c62-c099-4c56-83fa-82105f46bc12" created on Node "ip-10-250-27-25.ec2.internal"
STEP: Writing in pod1
Jan 11 20:00:13.154: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8848 security-context-c1d21c62-c099-4c56-83fa-82105f46bc12 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
Jan 11 20:00:14.437: INFO: stderr: ""
Jan 11 20:00:14.437: INFO: stdout: ""
Jan 11 20:00:14.437: INFO: podRWCmdExec out: "" err: 
[It] should be able to mount volume and write from pod1
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
Jan 11 20:00:14.437: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8848 security-context-c1d21c62-c099-4c56-83fa-82105f46bc12 -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 20:00:15.728: INFO: stderr: ""
Jan 11 20:00:15.728: INFO: stdout: "test-file-content\n"
Jan 11 20:00:15.728: INFO: podRWCmdExec out: "test-file-content\n" err: 
STEP: Writing in pod1
Jan 11 20:00:15.728: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8848 security-context-c1d21c62-c099-4c56-83fa-82105f46bc12 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-829fe966-fa48-42a8-aba3-128f643a6e51 > /mnt/volume1/test-file'
Jan 11 20:00:17.005: INFO: stderr: ""
Jan 11 20:00:17.005: INFO: stdout: ""
Jan 11 20:00:17.005: INFO: podRWCmdExec out: "" err: 
[AfterEach] One pod requesting one prebound PVC
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod security-context-c1d21c62-c099-4c56-83fa-82105f46bc12 in namespace persistent-local-volumes-test-8848
[AfterEach] [Volume type: tmpfs]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 20:00:17.097: INFO: Deleting PersistentVolumeClaim "pvc-wrdpb"
Jan 11 20:00:17.187: INFO: Deleting PersistentVolume "local-pvbmml5"
STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-829fe966-fa48-42a8-aba3-128f643a6e51"
Jan 11 20:00:17.279: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8848 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-829fe966-fa48-42a8-aba3-128f643a6e51"'
Jan 11 20:00:18.630: INFO: stderr: ""
Jan 11 20:00:18.630: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 20:00:18.630: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8848 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-829fe966-fa48-42a8-aba3-128f643a6e51'
Jan 11 20:00:19.855: INFO: stderr: ""
Jan 11 20:00:19.856: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:19.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8848" for this suite.
Jan 11 20:00:26.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:29.621: INFO: namespace persistent-local-volumes-test-8848 deletion completed in 9.583557025s


• [SLOW TEST:23.912 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: tmpfs]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
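
The tmpfs volume type above is prepared and torn down with shell commands run in the host mount namespace via the hostexec pod. A sketch that reproduces those command strings from the log; the directory path is an example value, and the teardown steps are run as separate exec calls in the actual test.

```go
package main

import "fmt"

// tmpfsCommands builds the three shell snippets the log shows for the
// "[Volume type: tmpfs]" case: create-and-mount, unmount, and cleanup.
func tmpfsCommands(dir string) (setup, unmount, cleanup string) {
	// "size=10m" caps the tmpfs at 10 MiB; the tmpfs-<dir> device name is just a label.
	setup = fmt.Sprintf(`mkdir -p %q && mount -t tmpfs -o size=10m tmpfs-%q %q`, dir, dir, dir)
	unmount = fmt.Sprintf(`umount %q`, dir)
	cleanup = fmt.Sprintf(`rm -r %s`, dir)
	return setup, unmount, cleanup
}

func main() {
	setup, unmount, cleanup := tmpfsCommands("/tmp/local-volume-test-example")
	fmt.Println(setup)
	fmt.Println(unmount)
	fmt.Println(cleanup)
}
```
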
------------------------------
S
------------------------------
[BeforeEach] [k8s.io] NodeLease
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:55:21.266: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename node-lease-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-lease-test-7289
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] NodeLease
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43
[It] the kubelet should report node status infrequently
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88
STEP: wait until node is ready
Jan 11 19:55:21.993: INFO: Waiting up to 5m0s for node ip-10-250-27-25.ec2.internal condition Ready to be true
STEP: wait until there is node lease
STEP: verify NodeStatus report period is longer than lease duration
Jan 11 19:55:23.352: INFO: node status heartbeat is unchanged for 1.090297728s, waiting for 1m20s
Jan 11 19:55:24.352: INFO: node status heartbeat is unchanged for 2.090057313s, waiting for 1m20s
Jan 11 19:55:25.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:55:25.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "ReadonlyFilesystem", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:54:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "FilesystemIsNotReadOnly", Message: "Filesystem is not read-only"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	NodeInfo:        v1.NodeSystemInfo{MachineID: "ec280dba3c1837e27848a3dec8c080a9", SystemUUID: "ec280dba-3c18-37e2-7848-a3dec8c080a9", BootID: "89e42b89-b944-47ea-8bf6-5f2fe6d80c97", KernelVersion: "4.19.86-coreos", OSImage: "Container Linux by CoreOS 2303.3.0 (Rhyolite)", ContainerRuntimeVersion: "docker://18.6.3", KubeletVersion: "v1.16.4", KubeProxyVersion: "v1.16.4", OperatingSystem: "linux", Architecture: "amd64"},
  	Images:          []v1.ContainerImage{{Names: []string{"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102", "eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4"}, SizeBytes: 601224435}, {Names: []string{"gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6", "gcr.io/google-samples/gb-frontend:v6"}, SizeBytes: 373099368}, {Names: []string{"k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa", "k8s.gcr.io/etcd:3.3.15"}, SizeBytes: 246640776}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71", "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"}, SizeBytes: 225358913}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb", "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"}, SizeBytes: 195659796}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e", "eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1"}, SizeBytes: 185406766}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d", "eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1"}, SizeBytes: 153790666}, {Names: []string{"httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a", "httpd:2.4.39-alpine"}, SizeBytes: 126894770}, {Names: []string{"httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060", "httpd:2.4.38-alpine"}, SizeBytes: 123781643}, {Names: []string{"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0", "eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1"}, SizeBytes: 96768084}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0", "gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10"}, SizeBytes: 61365829}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", "gcr.io/kubernetes-e2e-test-images/agnhost:2.6"}, SizeBytes: 57345321}, {Names: []string{"quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f", "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1"}, SizeBytes: 54431016}, {Names: []string{"quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70", "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1"}, SizeBytes: 51703561}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c", "eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2"}, SizeBytes: 49771411}, {Names: []string{"quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13", "quay.io/k8scsi/csi-attacher:v1.2.0"}, SizeBytes: 46226754}, {Names: []string{"quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838", "quay.io/k8scsi/csi-attacher:v1.1.0"}, SizeBytes: 42839085}, {Names: 
[]string{"quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1", "quay.io/k8scsi/csi-resizer:v0.2.0"}, SizeBytes: 42817100}, {Names: []string{"quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6", "quay.io/k8scsi/csi-resizer:v0.1.0"}, SizeBytes: 42623056}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b", "gcr.io/kubernetes-e2e-test-images/nonroot:1.0"}, SizeBytes: 42321438}, {Names: []string{"redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858", "redis:5.0.5-alpine"}, SizeBytes: 29331594}, {Names: []string{"quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de", "quay.io/k8scsi/hostpathplugin:v1.2.0-rc5"}, SizeBytes: 28761497}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7", "eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1"}, SizeBytes: 22933477}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19", "gcr.io/kubernetes-e2e-test-images/echoserver:2.2"}, SizeBytes: 21692741}, {Names: []string{"quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1", "quay.io/k8scsi/mock-driver:v2.1.0"}, SizeBytes: 16226335}, {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814}, {Names: []string{"quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599", "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"}, SizeBytes: 15815995}, {Names: []string{"quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5", "quay.io/k8scsi/livenessprobe:v1.1.0"}, SizeBytes: 14967303}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8", "eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2"}, SizeBytes: 9371181}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd", "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1"}, SizeBytes: 9349974}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0"}, SizeBytes: 6757579}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc", "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"}, SizeBytes: 4753501}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6", "gcr.io/kubernetes-e2e-test-images/kitten:1.0"}, SizeBytes: 4747037}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e", "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0"}, SizeBytes: 4732240}, {Names: []string{"alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10", "alpine:3.7"}, SizeBytes: 4206494}, {Names: 
[]string{"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2", "gcr.io/kubernetes-e2e-test-images/mounttest:1.0"}, SizeBytes: 1563521}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d", "gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0"}, SizeBytes: 1450451}, {Names: []string{"busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a", "busybox:latest"}, SizeBytes: 1219782}, {Names: []string{"busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", "busybox:1.29"}, SizeBytes: 1154361}, {Names: []string{"eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025", "k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea", "eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1", "k8s.gcr.io/pause:3.1"}, SizeBytes: 742472}},
- 	VolumesInUse:    nil,
+ 	VolumesInUse:    []v1.UniqueVolumeName{"kubernetes.io/csi/csi-mock-csi-mock-volumes-3620^4"},
  	VolumesAttached: []v1.AttachedVolume{{Name: "kubernetes.io/csi/csi-mock-csi-mock-volumes-3620^4"}},
  	Config:          nil,
  }
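
The dump above is one probe of a node-status heartbeat watch: roughly every 10s the kubelet refreshes LastHeartbeatTime on the MemoryPressure/DiskPressure/PIDPressure conditions, and in this probe the VolumesInUse/VolumesAttached bookkeeping also picked up a csi-mock volume, apparently from CSI mock-volume activity elsewhere in the run. For inspecting the same fields directly, the following is a minimal client-go sketch, not the suite's code; it assumes a current client-go module (Get with a context argument) and reuses the kubeconfig path and node name from this run.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name taken from this run; adjust for other clusters.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ip-10-250-27-25.ec2.internal", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// These are the fields the -/+ hunks above track: condition heartbeats and volume bookkeeping.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-28s %-6s heartbeat=%s transition=%s\n",
			c.Type, c.Status,
			c.LastHeartbeatTime.UTC().Format("15:04:05"),
			c.LastTransitionTime.UTC().Format("15:04:05"))
	}
	fmt.Println("VolumesInUse:   ", node.Status.VolumesInUse)
	fmt.Println("VolumesAttached:", node.Status.VolumesAttached)
}

Run against this cluster it would print one line per condition plus the two volume lists, i.e. exactly the fields that change between the snapshots logged here.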

Jan 11 19:55:26.352: INFO: node status heartbeat is unchanged for 1.000067858s, waiting for 1m20s
Jan 11 19:55:27.352: INFO: node status heartbeat is unchanged for 2.000464391s, waiting for 1m20s
Jan 11 19:55:28.352: INFO: node status heartbeat is unchanged for 3.000232425s, waiting for 1m20s
Jan 11 19:55:29.352: INFO: node status heartbeat is unchanged for 4.000197644s, waiting for 1m20s
Jan 11 19:55:30.352: INFO: node status heartbeat is unchanged for 5.000076541s, waiting for 1m20s
Jan 11 19:55:31.353: INFO: node status heartbeat is unchanged for 6.000576811s, waiting for 1m20s
Jan 11 19:55:32.352: INFO: node status heartbeat is unchanged for 7.000148817s, waiting for 1m20s
Jan 11 19:55:33.352: INFO: node status heartbeat is unchanged for 8.00017645s, waiting for 1m20s
Jan 11 19:55:34.352: INFO: node status heartbeat is unchanged for 9.00005745s, waiting for 1m20s
Jan 11 19:55:35.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:55:35.356: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
- 		{
- 			Type:               "FrequentUnregisterNetDevice",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:54:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentUnregisterNetDevice",
- 			Message:            "node is functioning properly",
- 		},
+ 		{
+ 			Type:               "FrequentContainerdRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentContainerdRestart",
+ 			Message:            "containerd is functioning properly",
+ 		},
- 		{
- 			Type:               "FrequentKubeletRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:54:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentKubeletRestart",
- 			Message:            "kubelet is functioning properly",
- 		},
+ 		{
+ 			Type:               "CorruptDockerOverlay2",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoCorruptDockerOverlay2",
+ 			Message:            "docker overlay2 is functioning properly",
+ 		},
- 		{
- 			Type:               "FrequentDockerRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:54:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentDockerRestart",
- 			Message:            "docker is functioning properly",
- 		},
+ 		{
+ 			Type:               "KernelDeadlock",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "KernelHasNoDeadlock",
+ 			Message:            "kernel has no deadlock",
+ 		},
- 		{
- 			Type:               "FrequentContainerdRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:54:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentContainerdRestart",
- 			Message:            "containerd is functioning properly",
- 		},
+ 		{
+ 			Type:               "ReadonlyFilesystem",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "FilesystemIsNotReadOnly",
+ 			Message:            "Filesystem is not read-only",
+ 		},
- 		{
- 			Type:               "CorruptDockerOverlay2",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:54:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoCorruptDockerOverlay2",
- 			Message:            "docker overlay2 is functioning properly",
- 		},
+ 		{
+ 			Type:               "FrequentUnregisterNetDevice",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentUnregisterNetDevice",
+ 			Message:            "node is functioning properly",
+ 		},
- 		{
- 			Type:               "KernelDeadlock",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:54:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "KernelHasNoDeadlock",
- 			Message:            "kernel has no deadlock",
- 		},
+ 		{
+ 			Type:               "FrequentKubeletRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentKubeletRestart",
+ 			Message:            "kubelet is functioning properly",
+ 		},
- 		{
- 			Type:               "ReadonlyFilesystem",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:54:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "FilesystemIsNotReadOnly",
- 			Message:            "Filesystem is not read-only",
- 		},
+ 		{
+ 			Type:               "FrequentDockerRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentDockerRestart",
+ 			Message:            "docker is functioning properly",
+ 		},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	NodeInfo:        v1.NodeSystemInfo{MachineID: "ec280dba3c1837e27848a3dec8c080a9", SystemUUID: "ec280dba-3c18-37e2-7848-a3dec8c080a9", BootID: "89e42b89-b944-47ea-8bf6-5f2fe6d80c97", KernelVersion: "4.19.86-coreos", OSImage: "Container Linux by CoreOS 2303.3.0 (Rhyolite)", ContainerRuntimeVersion: "docker://18.6.3", KubeletVersion: "v1.16.4", KubeProxyVersion: "v1.16.4", OperatingSystem: "linux", Architecture: "amd64"},
  	Images:          []v1.ContainerImage{{Names: []string{"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102", "eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4"}, SizeBytes: 601224435}, {Names: []string{"gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6", "gcr.io/google-samples/gb-frontend:v6"}, SizeBytes: 373099368}, {Names: []string{"k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa", "k8s.gcr.io/etcd:3.3.15"}, SizeBytes: 246640776}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71", "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"}, SizeBytes: 225358913}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb", "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"}, SizeBytes: 195659796}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e", "eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1"}, SizeBytes: 185406766}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d", "eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1"}, SizeBytes: 153790666}, {Names: []string{"httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a", "httpd:2.4.39-alpine"}, SizeBytes: 126894770}, {Names: []string{"httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060", "httpd:2.4.38-alpine"}, SizeBytes: 123781643}, {Names: []string{"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0", "eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1"}, SizeBytes: 96768084}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0", "gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10"}, SizeBytes: 61365829}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", "gcr.io/kubernetes-e2e-test-images/agnhost:2.6"}, SizeBytes: 57345321}, {Names: []string{"quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f", "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1"}, SizeBytes: 54431016}, {Names: []string{"quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70", "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1"}, SizeBytes: 51703561}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c", "eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2"}, SizeBytes: 49771411}, {Names: []string{"quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13", "quay.io/k8scsi/csi-attacher:v1.2.0"}, SizeBytes: 46226754}, {Names: []string{"quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838", "quay.io/k8scsi/csi-attacher:v1.1.0"}, SizeBytes: 42839085}, {Names: 
[]string{"quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1", "quay.io/k8scsi/csi-resizer:v0.2.0"}, SizeBytes: 42817100}, {Names: []string{"quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6", "quay.io/k8scsi/csi-resizer:v0.1.0"}, SizeBytes: 42623056}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b", "gcr.io/kubernetes-e2e-test-images/nonroot:1.0"}, SizeBytes: 42321438}, {Names: []string{"redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858", "redis:5.0.5-alpine"}, SizeBytes: 29331594}, {Names: []string{"quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de", "quay.io/k8scsi/hostpathplugin:v1.2.0-rc5"}, SizeBytes: 28761497}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7", "eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1"}, SizeBytes: 22933477}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19", "gcr.io/kubernetes-e2e-test-images/echoserver:2.2"}, SizeBytes: 21692741}, {Names: []string{"quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1", "quay.io/k8scsi/mock-driver:v2.1.0"}, SizeBytes: 16226335}, {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814}, {Names: []string{"quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599", "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"}, SizeBytes: 15815995}, {Names: []string{"quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5", "quay.io/k8scsi/livenessprobe:v1.1.0"}, SizeBytes: 14967303}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8", "eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2"}, SizeBytes: 9371181}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd", "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1"}, SizeBytes: 9349974}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0"}, SizeBytes: 6757579}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc", "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"}, SizeBytes: 4753501}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6", "gcr.io/kubernetes-e2e-test-images/kitten:1.0"}, SizeBytes: 4747037}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e", "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0"}, SizeBytes: 4732240}, {Names: []string{"alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10", "alpine:3.7"}, SizeBytes: 4206494}, {Names: 
[]string{"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2", "gcr.io/kubernetes-e2e-test-images/mounttest:1.0"}, SizeBytes: 1563521}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d", "gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0"}, SizeBytes: 1450451}, {Names: []string{"busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a", "busybox:latest"}, SizeBytes: 1219782}, {Names: []string{"busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", "busybox:1.29"}, SizeBytes: 1154361}, {Names: []string{"eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025", "k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea", "eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1", "k8s.gcr.io/pause:3.1"}, SizeBytes: 742472}},
- 	VolumesInUse:    []v1.UniqueVolumeName{"kubernetes.io/csi/csi-mock-csi-mock-volumes-3620^4"},
+ 	VolumesInUse:    nil,
- 	VolumesAttached: []v1.AttachedVolume{{Name: "kubernetes.io/csi/csi-mock-csi-mock-volumes-3620^4"}},
+ 	VolumesAttached: nil,
  	Config:          nil,
  }

Jan 11 19:55:36.352: INFO: node status heartbeat is unchanged for 999.71682ms, waiting for 1m20s
Jan 11 19:55:37.352: INFO: node status heartbeat is unchanged for 1.999845153s, waiting for 1m20s
Jan 11 19:55:38.352: INFO: node status heartbeat is unchanged for 3.000194602s, waiting for 1m20s
Jan 11 19:55:39.352: INFO: node status heartbeat is unchanged for 3.999756715s, waiting for 1m20s
Jan 11 19:55:40.351: INFO: node status heartbeat is unchanged for 4.999536454s, waiting for 1m20s
Jan 11 19:55:41.352: INFO: node status heartbeat is unchanged for 5.999767743s, waiting for 1m20s
Jan 11 19:55:42.351: INFO: node status heartbeat is unchanged for 6.999624588s, waiting for 1m20s
Jan 11 19:55:43.352: INFO: node status heartbeat is unchanged for 7.999734968s, waiting for 1m20s
Jan 11 19:55:44.352: INFO: node status heartbeat is unchanged for 8.999802132s, waiting for 1m20s
Jan 11 19:55:45.351: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:55:45.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:55:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }
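
The -/+ hunks and the "... // 6 identical elements" ellipsis above resemble go-cmp (cmp.Diff) output from comparing two consecutive NodeStatus snapshots. As a minimal sketch of how such a diff is produced, the example below compares two snapshots of a simplified, hypothetical condition type (not the real v1.NodeCondition, to sidestep go-cmp's handling of unexported API fields) taken ten seconds apart, using the timestamps from the hunk above.

package main

import (
	"fmt"
	"time"

	"github.com/google/go-cmp/cmp"
)

// condition mirrors the handful of v1.NodeCondition fields shown in the dumps above.
type condition struct {
	Type               string
	Status             string
	LastHeartbeatTime  time.Time
	LastTransitionTime time.Time
	Reason             string
	Message            string
}

func main() {
	transition := time.Date(2020, 1, 11, 15, 56, 3, 0, time.UTC)
	snapshot := func(hb time.Time) []condition {
		return []condition{{
			Type:               "MemoryPressure",
			Status:             "False",
			LastHeartbeatTime:  hb,
			LastTransitionTime: transition,
			Reason:             "KubeletHasSufficientMemory",
			Message:            "kubelet has sufficient memory available",
		}}
	}
	old := snapshot(time.Date(2020, 1, 11, 19, 55, 34, 0, time.UTC))
	cur := snapshot(time.Date(2020, 1, 11, 19, 55, 44, 0, time.UTC))
	// Only LastHeartbeatTime differs, so cmp.Diff prints a single -/+ pair for that
	// field, which is the shape of most of the hunks in this log.
	fmt.Println(cmp.Diff(old, cur))
}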

Jan 11 19:55:46.352: INFO: node status heartbeat is unchanged for 1.000614721s, waiting for 1m20s
Jan 11 19:55:47.352: INFO: node status heartbeat is unchanged for 2.000621177s, waiting for 1m20s
Jan 11 19:55:48.352: INFO: node status heartbeat is unchanged for 3.000462465s, waiting for 1m20s
Jan 11 19:55:49.352: INFO: node status heartbeat is unchanged for 4.000493876s, waiting for 1m20s
Jan 11 19:55:50.351: INFO: node status heartbeat is unchanged for 5.000029721s, waiting for 1m20s
Jan 11 19:55:51.352: INFO: node status heartbeat is unchanged for 6.000818936s, waiting for 1m20s
Jan 11 19:55:52.351: INFO: node status heartbeat is unchanged for 6.999842062s, waiting for 1m20s
Jan 11 19:55:53.352: INFO: node status heartbeat is unchanged for 8.00087712s, waiting for 1m20s
Jan 11 19:55:54.352: INFO: node status heartbeat is unchanged for 9.00074624s, waiting for 1m20s
Jan 11 19:55:55.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:55:55.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:55:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:55:56.352: INFO: node status heartbeat is unchanged for 1.000558364s, waiting for 1m20s
Jan 11 19:55:57.352: INFO: node status heartbeat is unchanged for 1.999894915s, waiting for 1m20s
Jan 11 19:55:58.352: INFO: node status heartbeat is unchanged for 2.999998226s, waiting for 1m20s
Jan 11 19:55:59.352: INFO: node status heartbeat is unchanged for 4.000144087s, waiting for 1m20s
Jan 11 19:56:00.352: INFO: node status heartbeat is unchanged for 5.00010875s, waiting for 1m20s
Jan 11 19:56:01.352: INFO: node status heartbeat is unchanged for 6.000440009s, waiting for 1m20s
Jan 11 19:56:02.352: INFO: node status heartbeat is unchanged for 7.000673565s, waiting for 1m20s
Jan 11 19:56:03.351: INFO: node status heartbeat is unchanged for 7.999777114s, waiting for 1m20s
Jan 11 19:56:04.352: INFO: node status heartbeat is unchanged for 9.000652135s, waiting for 1m20s
Jan 11 19:56:05.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:56:05.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:55:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:55:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }
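
The per-second "unchanged for ..., waiting for 1m20s" lines imply a simple bookkeeping loop: sample the node status once a second, measure how long it has stayed identical, and compare that quiet period against the 1m20s threshold. The sketch below is an illustrative reimplementation of such a loop, not the suite's code; fetch is a placeholder that stands in for reading and summarizing v1.NodeStatus, and here it just flips every ten seconds to mimic the kubelet's ~10s status update period seen above.

package main

import (
	"fmt"
	"time"
)

// pollHeartbeat samples fetch once a second, reports how long the result has been
// unchanged, and flags it if the quiet period ever exceeds the threshold.
func pollHeartbeat(fetch func() string, threshold time.Duration, rounds int) {
	last := fetch()
	lastChange := time.Now()
	for i := 0; i < rounds; i++ {
		time.Sleep(1 * time.Second)
		cur := fetch()
		if cur != last {
			fmt.Printf("node status heartbeat changed in %v\n", time.Since(lastChange).Round(time.Second))
			last, lastChange = cur, time.Now()
			continue
		}
		quiet := time.Since(lastChange)
		if quiet > threshold {
			fmt.Printf("node status heartbeat unchanged for %v, exceeding %v\n", quiet, threshold)
			return
		}
		fmt.Printf("node status heartbeat is unchanged for %v, waiting for %v\n", quiet, threshold)
	}
}

func main() {
	// Toy fetch: the "status" flips every ten seconds; a real implementation would
	// summarize the fields of v1.NodeStatus instead.
	start := time.Now()
	fetch := func() string { return fmt.Sprint(int(time.Since(start).Seconds()) / 10) }
	pollHeartbeat(fetch, 80*time.Second, 25) // 80s matches the 1m20s budget in the log
}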

Jan 11 19:56:06.352: INFO: node status heartbeat is unchanged for 1.000397594s, waiting for 1m20s
Jan 11 19:56:07.352: INFO: node status heartbeat is unchanged for 2.000338933s, waiting for 1m20s
Jan 11 19:56:08.352: INFO: node status heartbeat is unchanged for 3.000275707s, waiting for 1m20s
Jan 11 19:56:09.352: INFO: node status heartbeat is unchanged for 4.000396092s, waiting for 1m20s
Jan 11 19:56:10.352: INFO: node status heartbeat is unchanged for 5.000183805s, waiting for 1m20s
Jan 11 19:56:11.352: INFO: node status heartbeat is unchanged for 6.000478275s, waiting for 1m20s
Jan 11 19:56:12.351: INFO: node status heartbeat is unchanged for 6.999890243s, waiting for 1m20s
Jan 11 19:56:13.352: INFO: node status heartbeat is unchanged for 8.000589343s, waiting for 1m20s
Jan 11 19:56:14.353: INFO: node status heartbeat is unchanged for 9.000951911s, waiting for 1m20s
Jan 11 19:56:15.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:56:15.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:55:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:56:16.352: INFO: node status heartbeat is unchanged for 1.000107887s, waiting for 1m20s
Jan 11 19:56:17.352: INFO: node status heartbeat is unchanged for 1.999727289s, waiting for 1m20s
Jan 11 19:56:18.352: INFO: node status heartbeat is unchanged for 2.999977579s, waiting for 1m20s
Jan 11 19:56:19.352: INFO: node status heartbeat is unchanged for 3.999835078s, waiting for 1m20s
Jan 11 19:56:20.352: INFO: node status heartbeat is unchanged for 5.000100987s, waiting for 1m20s
Jan 11 19:56:21.352: INFO: node status heartbeat is unchanged for 5.999935216s, waiting for 1m20s
Jan 11 19:56:22.352: INFO: node status heartbeat is unchanged for 7.000082227s, waiting for 1m20s
Jan 11 19:56:23.352: INFO: node status heartbeat is unchanged for 8.000206509s, waiting for 1m20s
Jan 11 19:56:24.352: INFO: node status heartbeat is unchanged for 8.999984886s, waiting for 1m20s
Jan 11 19:56:25.353: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:56:25.355: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:55:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	NodeInfo:        v1.NodeSystemInfo{MachineID: "ec280dba3c1837e27848a3dec8c080a9", SystemUUID: "ec280dba-3c18-37e2-7848-a3dec8c080a9", BootID: "89e42b89-b944-47ea-8bf6-5f2fe6d80c97", KernelVersion: "4.19.86-coreos", OSImage: "Container Linux by CoreOS 2303.3.0 (Rhyolite)", ContainerRuntimeVersion: "docker://18.6.3", KubeletVersion: "v1.16.4", KubeProxyVersion: "v1.16.4", OperatingSystem: "linux", Architecture: "amd64"},
  	Images:          []v1.ContainerImage{{Names: []string{"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102", "eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4"}, SizeBytes: 601224435}, {Names: []string{"gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6", "gcr.io/google-samples/gb-frontend:v6"}, SizeBytes: 373099368}, {Names: []string{"k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa", "k8s.gcr.io/etcd:3.3.15"}, SizeBytes: 246640776}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71", "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"}, SizeBytes: 225358913}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb", "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"}, SizeBytes: 195659796}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e", "eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1"}, SizeBytes: 185406766}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d", "eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1"}, SizeBytes: 153790666}, {Names: []string{"httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a", "httpd:2.4.39-alpine"}, SizeBytes: 126894770}, {Names: []string{"httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060", "httpd:2.4.38-alpine"}, SizeBytes: 123781643}, {Names: []string{"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0", "eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1"}, SizeBytes: 96768084}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0", "gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10"}, SizeBytes: 61365829}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", "gcr.io/kubernetes-e2e-test-images/agnhost:2.6"}, SizeBytes: 57345321}, {Names: []string{"quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f", "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1"}, SizeBytes: 54431016}, {Names: []string{"quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70", "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1"}, SizeBytes: 51703561}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c", "eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2"}, SizeBytes: 49771411}, {Names: []string{"quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13", "quay.io/k8scsi/csi-attacher:v1.2.0"}, SizeBytes: 46226754}, {Names: []string{"quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838", "quay.io/k8scsi/csi-attacher:v1.1.0"}, SizeBytes: 42839085}, {Names: 
[]string{"quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1", "quay.io/k8scsi/csi-resizer:v0.2.0"}, SizeBytes: 42817100}, {Names: []string{"quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6", "quay.io/k8scsi/csi-resizer:v0.1.0"}, SizeBytes: 42623056}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b", "gcr.io/kubernetes-e2e-test-images/nonroot:1.0"}, SizeBytes: 42321438}, {Names: []string{"redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858", "redis:5.0.5-alpine"}, SizeBytes: 29331594}, {Names: []string{"quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de", "quay.io/k8scsi/hostpathplugin:v1.2.0-rc5"}, SizeBytes: 28761497}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7", "eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1"}, SizeBytes: 22933477}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19", "gcr.io/kubernetes-e2e-test-images/echoserver:2.2"}, SizeBytes: 21692741}, {Names: []string{"quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1", "quay.io/k8scsi/mock-driver:v2.1.0"}, SizeBytes: 16226335}, {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814}, {Names: []string{"quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599", "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"}, SizeBytes: 15815995}, {Names: []string{"quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5", "quay.io/k8scsi/livenessprobe:v1.1.0"}, SizeBytes: 14967303}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8", "eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2"}, SizeBytes: 9371181}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd", "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1"}, SizeBytes: 9349974}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0"}, SizeBytes: 6757579}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc", "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"}, SizeBytes: 4753501}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6", "gcr.io/kubernetes-e2e-test-images/kitten:1.0"}, SizeBytes: 4747037}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e", "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0"}, SizeBytes: 4732240}, {Names: []string{"alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10", "alpine:3.7"}, SizeBytes: 4206494}, {Names: 
[]string{"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2", "gcr.io/kubernetes-e2e-test-images/mounttest:1.0"}, SizeBytes: 1563521}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d", "gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0"}, SizeBytes: 1450451}, {Names: []string{"busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a", "busybox:latest"}, SizeBytes: 1219782}, {Names: []string{"busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", "busybox:1.29"}, SizeBytes: 1154361}, {Names: []string{"eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025", "k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea", "eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1", "k8s.gcr.io/pause:3.1"}, SizeBytes: 742472}},
- 	VolumesInUse:    nil,
+ 	VolumesInUse:    []v1.UniqueVolumeName{"kubernetes.io/csi/csi-mock-csi-mock-volumes-7446^4"},
- 	VolumesAttached: nil,
+ 	VolumesAttached: []v1.AttachedVolume{{Name: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7446^4"}},
  	Config:          nil,
  }

Jan 11 19:56:26.352: INFO: node status heartbeat is unchanged for 998.722047ms, waiting for 1m20s
Jan 11 19:56:27.352: INFO: node status heartbeat is unchanged for 1.999191579s, waiting for 1m20s
Jan 11 19:56:28.352: INFO: node status heartbeat is unchanged for 2.999117987s, waiting for 1m20s
Jan 11 19:56:29.352: INFO: node status heartbeat is unchanged for 3.999017896s, waiting for 1m20s
Jan 11 19:56:30.352: INFO: node status heartbeat is unchanged for 4.999438085s, waiting for 1m20s
Jan 11 19:56:31.352: INFO: node status heartbeat is unchanged for 5.999111401s, waiting for 1m20s
Jan 11 19:56:32.352: INFO: node status heartbeat is unchanged for 6.999084677s, waiting for 1m20s
Jan 11 19:56:33.352: INFO: node status heartbeat is unchanged for 7.998988138s, waiting for 1m20s
Jan 11 19:56:34.352: INFO: node status heartbeat is unchanged for 8.999263278s, waiting for 1m20s
Jan 11 19:56:35.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:56:35.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
- 		{
- 			Type:               "FrequentContainerdRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentContainerdRestart",
- 			Message:            "containerd is functioning properly",
- 		},
+ 		{
+ 			Type:               "FrequentUnregisterNetDevice",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentUnregisterNetDevice",
+ 			Message:            "node is functioning properly",
+ 		},
- 		{
- 			Type:               "CorruptDockerOverlay2",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoCorruptDockerOverlay2",
- 			Message:            "docker overlay2 is functioning properly",
- 		},
+ 		{
+ 			Type:               "FrequentKubeletRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentKubeletRestart",
+ 			Message:            "kubelet is functioning properly",
+ 		},
- 		{
- 			Type:               "KernelDeadlock",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "KernelHasNoDeadlock",
- 			Message:            "kernel has no deadlock",
- 		},
+ 		{
+ 			Type:               "FrequentDockerRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentDockerRestart",
+ 			Message:            "docker is functioning properly",
+ 		},
- 		{
- 			Type:               "ReadonlyFilesystem",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "FilesystemIsNotReadOnly",
- 			Message:            "Filesystem is not read-only",
- 		},
+ 		{
+ 			Type:               "FrequentContainerdRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentContainerdRestart",
+ 			Message:            "containerd is functioning properly",
+ 		},
- 		{
- 			Type:               "FrequentUnregisterNetDevice",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentUnregisterNetDevice",
- 			Message:            "node is functioning properly",
- 		},
+ 		{
+ 			Type:               "CorruptDockerOverlay2",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoCorruptDockerOverlay2",
+ 			Message:            "docker overlay2 is functioning properly",
+ 		},
- 		{
- 			Type:               "FrequentKubeletRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentKubeletRestart",
- 			Message:            "kubelet is functioning properly",
- 		},
+ 		{
+ 			Type:               "KernelDeadlock",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "KernelHasNoDeadlock",
+ 			Message:            "kernel has no deadlock",
+ 		},
- 		{
- 			Type:               "FrequentDockerRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:55:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentDockerRestart",
- 			Message:            "docker is functioning properly",
- 		},
+ 		{
+ 			Type:               "ReadonlyFilesystem",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "FilesystemIsNotReadOnly",
+ 			Message:            "Filesystem is not read-only",
+ 		},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }
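
The cadence above, per-second "unchanged for ..." lines punctuated by a dump roughly every 10s, is what a simple poll-and-compare loop produces. The following sketch of such a monitor reuses the imports and the getNodeStatus helper from the previous sketch; monitorHeartbeat and its parameters are made-up names for illustration, not the test framework's API, and the 40s/1m20s budget bookkeeping seen in the log lines is deliberately left out.

// monitorHeartbeat polls the node once per second for the given duration and
// reports how long the observed conditions have stayed unchanged, mirroring
// the "node status heartbeat is unchanged for ..." lines above.
func monitorHeartbeat(cs kubernetes.Interface, nodeName string, total time.Duration) error {
	last, err := getNodeStatus(cs, nodeName)
	if err != nil {
		return err
	}
	lastChange := time.Now()

	for deadline := time.Now().Add(total); time.Now().Before(deadline); {
		time.Sleep(1 * time.Second)

		cur, err := getNodeStatus(cs, nodeName)
		if err != nil {
			return err
		}

		if d := cmp.Diff(last.Conditions, cur.Conditions); d != "" {
			fmt.Printf("node status heartbeat changed in %v (with other status changes)\n%s\n",
				time.Since(lastChange).Round(time.Second), d)
			last, lastChange = cur, time.Now()
			continue
		}
		fmt.Printf("node status heartbeat is unchanged for %v\n", time.Since(lastChange))
	}
	return nil
}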

Jan 11 19:56:36.352: INFO: node status heartbeat is unchanged for 999.69411ms, waiting for 1m20s
Jan 11 19:56:37.352: INFO: node status heartbeat is unchanged for 2.00008048s, waiting for 1m20s
Jan 11 19:56:38.352: INFO: node status heartbeat is unchanged for 2.999998956s, waiting for 1m20s
Jan 11 19:56:39.352: INFO: node status heartbeat is unchanged for 4.000041145s, waiting for 1m20s
Jan 11 19:56:40.353: INFO: node status heartbeat is unchanged for 5.000751237s, waiting for 1m20s
Jan 11 19:56:41.353: INFO: node status heartbeat is unchanged for 6.001500659s, waiting for 1m20s
Jan 11 19:56:42.352: INFO: node status heartbeat is unchanged for 7.000147364s, waiting for 1m20s
Jan 11 19:56:43.352: INFO: node status heartbeat is unchanged for 8.000601189s, waiting for 1m20s
Jan 11 19:56:44.352: INFO: node status heartbeat is unchanged for 9.000059s, waiting for 1m20s
Jan 11 19:56:45.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:56:45.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "ReadonlyFilesystem", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:56:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "FilesystemIsNotReadOnly", Message: "Filesystem is not read-only"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:56:46.353: INFO: node status heartbeat is unchanged for 1.001222992s, waiting for 1m20s
Jan 11 19:56:47.352: INFO: node status heartbeat is unchanged for 1.999915011s, waiting for 1m20s
Jan 11 19:56:48.352: INFO: node status heartbeat is unchanged for 3.000242501s, waiting for 1m20s
Jan 11 19:56:49.352: INFO: node status heartbeat is unchanged for 4.000388209s, waiting for 1m20s
Jan 11 19:56:50.352: INFO: node status heartbeat is unchanged for 5.000186775s, waiting for 1m20s
Jan 11 19:56:51.352: INFO: node status heartbeat is unchanged for 6.00069305s, waiting for 1m20s
Jan 11 19:56:52.352: INFO: node status heartbeat is unchanged for 7.000117012s, waiting for 1m20s
Jan 11 19:56:53.353: INFO: node status heartbeat is unchanged for 8.00109909s, waiting for 1m20s
Jan 11 19:56:54.352: INFO: node status heartbeat is unchanged for 9.000483383s, waiting for 1m20s
Jan 11 19:56:55.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:56:55.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "ReadonlyFilesystem", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:56:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "FilesystemIsNotReadOnly", Message: "Filesystem is not read-only"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:56:56.352: INFO: node status heartbeat is unchanged for 999.939947ms, waiting for 1m20s
Jan 11 19:56:57.352: INFO: node status heartbeat is unchanged for 1.999902993s, waiting for 1m20s
Jan 11 19:56:58.352: INFO: node status heartbeat is unchanged for 3.000112447s, waiting for 1m20s
Jan 11 19:56:59.352: INFO: node status heartbeat is unchanged for 3.999885601s, waiting for 1m20s
Jan 11 19:57:00.352: INFO: node status heartbeat is unchanged for 4.999784571s, waiting for 1m20s
Jan 11 19:57:01.352: INFO: node status heartbeat is unchanged for 6.00005275s, waiting for 1m20s
Jan 11 19:57:02.352: INFO: node status heartbeat is unchanged for 7.000266858s, waiting for 1m20s
Jan 11 19:57:03.352: INFO: node status heartbeat is unchanged for 8.000138805s, waiting for 1m20s
Jan 11 19:57:04.352: INFO: node status heartbeat is unchanged for 9.000073685s, waiting for 1m20s
Jan 11 19:57:05.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:57:05.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "ReadonlyFilesystem", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:56:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "FilesystemIsNotReadOnly", Message: "Filesystem is not read-only"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:56:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:57:06.352: INFO: node status heartbeat is unchanged for 1.000024782s, waiting for 1m20s
Jan 11 19:57:07.352: INFO: node status heartbeat is unchanged for 1.99977358s, waiting for 1m20s
Jan 11 19:57:08.352: INFO: node status heartbeat is unchanged for 2.999905607s, waiting for 1m20s
Jan 11 19:57:09.352: INFO: node status heartbeat is unchanged for 3.999697224s, waiting for 1m20s
Jan 11 19:57:10.351: INFO: node status heartbeat is unchanged for 4.999453422s, waiting for 1m20s
Jan 11 19:57:11.352: INFO: node status heartbeat is unchanged for 5.999687399s, waiting for 1m20s
Jan 11 19:57:12.351: INFO: node status heartbeat is unchanged for 6.999194048s, waiting for 1m20s
Jan 11 19:57:13.352: INFO: node status heartbeat is unchanged for 7.999666362s, waiting for 1m20s
Jan 11 19:57:14.352: INFO: node status heartbeat is unchanged for 8.999991108s, waiting for 1m20s
Jan 11 19:57:15.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:57:15.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "ReadonlyFilesystem", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:56:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "FilesystemIsNotReadOnly", Message: "Filesystem is not read-only"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:57:16.352: INFO: node status heartbeat is unchanged for 999.879532ms, waiting for 1m20s
Jan 11 19:57:17.352: INFO: node status heartbeat is unchanged for 2.000247385s, waiting for 1m20s
Jan 11 19:57:18.352: INFO: node status heartbeat is unchanged for 3.000295367s, waiting for 1m20s
Jan 11 19:57:19.352: INFO: node status heartbeat is unchanged for 3.999995658s, waiting for 1m20s
Jan 11 19:57:20.352: INFO: node status heartbeat is unchanged for 5.000260208s, waiting for 1m20s
Jan 11 19:57:21.352: INFO: node status heartbeat is unchanged for 6.000183723s, waiting for 1m20s
Jan 11 19:57:22.352: INFO: node status heartbeat is unchanged for 7.000204353s, waiting for 1m20s
Jan 11 19:57:23.352: INFO: node status heartbeat is unchanged for 7.999826975s, waiting for 1m20s
Jan 11 19:57:24.352: INFO: node status heartbeat is unchanged for 9.000059545s, waiting for 1m20s
Jan 11 19:57:25.351: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:57:25.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "ReadonlyFilesystem", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:56:26 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "FilesystemIsNotReadOnly", Message: "Filesystem is not read-only"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:57:26.352: INFO: node status heartbeat is unchanged for 1.000802357s, waiting for 1m20s
Jan 11 19:57:27.352: INFO: node status heartbeat is unchanged for 2.000369976s, waiting for 1m20s
Jan 11 19:57:28.352: INFO: node status heartbeat is unchanged for 3.000631129s, waiting for 1m20s
Jan 11 19:57:29.352: INFO: node status heartbeat is unchanged for 4.000373523s, waiting for 1m20s
Jan 11 19:57:30.352: INFO: node status heartbeat is unchanged for 5.000512645s, waiting for 1m20s
Jan 11 19:57:31.352: INFO: node status heartbeat is unchanged for 6.000887958s, waiting for 1m20s
Jan 11 19:57:32.352: INFO: node status heartbeat is unchanged for 7.000174008s, waiting for 1m20s
Jan 11 19:57:33.352: INFO: node status heartbeat is unchanged for 8.000676626s, waiting for 1m20s
Jan 11 19:57:34.352: INFO: node status heartbeat is unchanged for 9.000216108s, waiting for 1m20s
Jan 11 19:57:35.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:57:35.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
- 		{
- 			Type:               "FrequentUnregisterNetDevice",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentUnregisterNetDevice",
- 			Message:            "node is functioning properly",
- 		},
+ 		{
+ 			Type:               "FrequentDockerRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:57:27 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentDockerRestart",
+ 			Message:            "docker is functioning properly",
+ 		},
- 		{
- 			Type:               "FrequentKubeletRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentKubeletRestart",
- 			Message:            "kubelet is functioning properly",
- 		},
+ 		{
+ 			Type:               "FrequentContainerdRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:57:27 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentContainerdRestart",
+ 			Message:            "containerd is functioning properly",
+ 		},
- 		{
- 			Type:               "FrequentDockerRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentDockerRestart",
- 			Message:            "docker is functioning properly",
- 		},
+ 		{
+ 			Type:               "CorruptDockerOverlay2",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:57:27 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoCorruptDockerOverlay2",
+ 			Message:            "docker overlay2 is functioning properly",
+ 		},
- 		{
- 			Type:               "FrequentContainerdRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentContainerdRestart",
- 			Message:            "containerd is functioning properly",
- 		},
+ 		{
+ 			Type:               "KernelDeadlock",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:57:27 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "KernelHasNoDeadlock",
+ 			Message:            "kernel has no deadlock",
+ 		},
- 		{
- 			Type:               "CorruptDockerOverlay2",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoCorruptDockerOverlay2",
- 			Message:            "docker overlay2 is functioning properly",
- 		},
+ 		{
+ 			Type:               "ReadonlyFilesystem",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:57:27 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "FilesystemIsNotReadOnly",
+ 			Message:            "Filesystem is not read-only",
+ 		},
- 		{
- 			Type:               "KernelDeadlock",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "KernelHasNoDeadlock",
- 			Message:            "kernel has no deadlock",
- 		},
+ 		{
+ 			Type:               "FrequentUnregisterNetDevice",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:57:27 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentUnregisterNetDevice",
+ 			Message:            "node is functioning properly",
+ 		},
- 		{
- 			Type:               "ReadonlyFilesystem",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:56:26 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "FilesystemIsNotReadOnly",
- 			Message:            "Filesystem is not read-only",
- 		},
+ 		{
+ 			Type:               "FrequentKubeletRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:57:27 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentKubeletRestart",
+ 			Message:            "kubelet is functioning properly",
+ 		},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }
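
Two refresh cadences are visible in the diff above: the kubelet-owned conditions (MemoryPressure, DiskPressure, PIDPressure, Ready) advance their LastHeartbeatTime roughly every 10s, while KernelDeadlock, ReadonlyFilesystem, CorruptDockerOverlay2 and the Frequent*Restart conditions (the kind typically written by a node-problem-detector agent) only refresh about once a minute (19:56:26 to 19:57:27 here). Because their order and timestamps shift together, cmp renders them as whole removed-and-re-added entries. A small helper, again reusing the earlier imports and getNodeStatus, makes the two cadences easy to eyeball; the function name is illustrative only.

// printConditionAges prints how long ago each node condition's heartbeat was
// refreshed. Kubelet-owned conditions should show single-digit-second ages,
// the problem-detector-style conditions ages of up to about a minute.
func printConditionAges(cs kubernetes.Interface, nodeName string) error {
	status, err := getNodeStatus(cs, nodeName)
	if err != nil {
		return err
	}
	now := time.Now()
	for _, c := range status.Conditions {
		fmt.Printf("%-28s heartbeat %8s ago (status=%s, reason=%s)\n",
			c.Type, now.Sub(c.LastHeartbeatTime.Time).Round(time.Second), c.Status, c.Reason)
	}
	return nil
}

Calling printConditionAges right after one of the 10s dumps above would show the split directly: fresh heartbeats for the kubelet conditions, older ones for the rest.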

Jan 11 19:57:36.352: INFO: node status heartbeat is unchanged for 999.384096ms, waiting for 1m20s
Jan 11 19:57:37.352: INFO: node status heartbeat is unchanged for 1.999974121s, waiting for 1m20s
Jan 11 19:57:38.352: INFO: node status heartbeat is unchanged for 2.999625784s, waiting for 1m20s
Jan 11 19:57:39.352: INFO: node status heartbeat is unchanged for 3.999958809s, waiting for 1m20s
Jan 11 19:57:40.352: INFO: node status heartbeat is unchanged for 5.000340885s, waiting for 1m20s
Jan 11 19:57:41.352: INFO: node status heartbeat is unchanged for 5.999686461s, waiting for 1m20s
Jan 11 19:57:42.352: INFO: node status heartbeat is unchanged for 7.000054786s, waiting for 1m20s
Jan 11 19:57:43.352: INFO: node status heartbeat is unchanged for 8.000086931s, waiting for 1m20s
Jan 11 19:57:44.352: INFO: node status heartbeat is unchanged for 8.999980213s, waiting for 1m20s
Jan 11 19:57:45.353: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:57:45.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentKubeletRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentKubeletRestart", Message: "kubelet is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:57:46.352: INFO: node status heartbeat is unchanged for 999.448493ms, waiting for 1m20s
Jan 11 19:57:47.353: INFO: node status heartbeat is unchanged for 1.999995486s, waiting for 1m20s
Jan 11 19:57:48.352: INFO: node status heartbeat is unchanged for 2.999192181s, waiting for 1m20s
Jan 11 19:57:49.352: INFO: node status heartbeat is unchanged for 3.998879415s, waiting for 1m20s
Jan 11 19:57:50.353: INFO: node status heartbeat is unchanged for 5.000024278s, waiting for 1m20s
Jan 11 19:57:51.352: INFO: node status heartbeat is unchanged for 5.999194157s, waiting for 1m20s
Jan 11 19:57:52.352: INFO: node status heartbeat is unchanged for 6.999507273s, waiting for 1m20s
Jan 11 19:57:53.356: INFO: node status heartbeat is unchanged for 8.003520344s, waiting for 1m20s
Jan 11 19:57:54.352: INFO: node status heartbeat is unchanged for 8.999331998s, waiting for 1m20s
Jan 11 19:57:55.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:57:55.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentKubeletRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentKubeletRestart", Message: "kubelet is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:57:56.352: INFO: node status heartbeat is unchanged for 1.000283843s, waiting for 1m20s
Jan 11 19:57:57.352: INFO: node status heartbeat is unchanged for 1.999992709s, waiting for 1m20s
Jan 11 19:57:58.352: INFO: node status heartbeat is unchanged for 2.999873391s, waiting for 1m20s
Jan 11 19:57:59.352: INFO: node status heartbeat is unchanged for 4.000224647s, waiting for 1m20s
Jan 11 19:58:00.352: INFO: node status heartbeat is unchanged for 4.999988548s, waiting for 1m20s
Jan 11 19:58:01.352: INFO: node status heartbeat is unchanged for 6.000160606s, waiting for 1m20s
Jan 11 19:58:02.352: INFO: node status heartbeat is unchanged for 7.000464413s, waiting for 1m20s
Jan 11 19:58:03.352: INFO: node status heartbeat is unchanged for 8.000174879s, waiting for 1m20s
Jan 11 19:58:04.353: INFO: node status heartbeat is unchanged for 9.000639532s, waiting for 1m20s
Jan 11 19:58:05.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:58:05.355: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentKubeletRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentKubeletRestart", Message: "kubelet is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:58:06.352: INFO: node status heartbeat is unchanged for 1.000071833s, waiting for 1m20s
Jan 11 19:58:07.354: INFO: node status heartbeat is unchanged for 2.002025634s, waiting for 1m20s
Jan 11 19:58:08.352: INFO: node status heartbeat is unchanged for 2.999413224s, waiting for 1m20s
Jan 11 19:58:09.352: INFO: node status heartbeat is unchanged for 4.000144104s, waiting for 1m20s
Jan 11 19:58:10.352: INFO: node status heartbeat is unchanged for 4.999245189s, waiting for 1m20s
Jan 11 19:58:11.352: INFO: node status heartbeat is unchanged for 5.999651721s, waiting for 1m20s
Jan 11 19:58:12.352: INFO: node status heartbeat is unchanged for 6.999850286s, waiting for 1m20s
Jan 11 19:58:13.352: INFO: node status heartbeat is unchanged for 7.999955226s, waiting for 1m20s
Jan 11 19:58:14.355: INFO: node status heartbeat is unchanged for 9.002556259s, waiting for 1m20s
Jan 11 19:58:15.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:58:15.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentKubeletRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentKubeletRestart", Message: "kubelet is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:58:16.352: INFO: node status heartbeat is unchanged for 1.000722292s, waiting for 1m20s
Jan 11 19:58:17.352: INFO: node status heartbeat is unchanged for 2.000122998s, waiting for 1m20s
Jan 11 19:58:18.352: INFO: node status heartbeat is unchanged for 3.000506588s, waiting for 1m20s
Jan 11 19:58:19.352: INFO: node status heartbeat is unchanged for 4.000461525s, waiting for 1m20s
Jan 11 19:58:20.352: INFO: node status heartbeat is unchanged for 5.000256552s, waiting for 1m20s
Jan 11 19:58:21.352: INFO: node status heartbeat is unchanged for 6.000172115s, waiting for 1m20s
Jan 11 19:58:22.352: INFO: node status heartbeat is unchanged for 7.000244471s, waiting for 1m20s
Jan 11 19:58:23.352: INFO: node status heartbeat is unchanged for 8.000510762s, waiting for 1m20s
Jan 11 19:58:24.352: INFO: node status heartbeat is unchanged for 9.000033156s, waiting for 1m20s
Jan 11 19:58:25.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:58:25.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentKubeletRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentKubeletRestart", Message: "kubelet is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:58:26.352: INFO: node status heartbeat is unchanged for 1.00013354s, waiting for 1m20s
Jan 11 19:58:27.351: INFO: node status heartbeat is unchanged for 1.999832912s, waiting for 1m20s
Jan 11 19:58:28.352: INFO: node status heartbeat is unchanged for 2.99994369s, waiting for 1m20s
Jan 11 19:58:29.352: INFO: node status heartbeat is unchanged for 4.000469666s, waiting for 1m20s
Jan 11 19:58:30.352: INFO: node status heartbeat is unchanged for 5.000095871s, waiting for 1m20s
Jan 11 19:58:31.352: INFO: node status heartbeat is unchanged for 6.000236754s, waiting for 1m20s
Jan 11 19:58:32.352: INFO: node status heartbeat is unchanged for 7.000345879s, waiting for 1m20s
Jan 11 19:58:33.352: INFO: node status heartbeat is unchanged for 8.000408932s, waiting for 1m20s
Jan 11 19:58:34.352: INFO: node status heartbeat is unchanged for 9.000459933s, waiting for 1m20s
Jan 11 19:58:35.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:58:35.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
- 		{
- 			Type:               "FrequentDockerRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:57:27 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentDockerRestart",
- 			Message:            "docker is functioning properly",
- 		},
  		{
  			Type:               "FrequentContainerdRestart",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "NoFrequentContainerdRestart",
  			Message:            "containerd is functioning properly",
  		},
  		{
  			Type:               "CorruptDockerOverlay2",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "NoCorruptDockerOverlay2",
  			Message:            "docker overlay2 is functioning properly",
  		},
  		{
  			Type:               "KernelDeadlock",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "KernelHasNoDeadlock",
  			Message:            "kernel has no deadlock",
  		},
  		{
  			Type:               "ReadonlyFilesystem",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "FilesystemIsNotReadOnly",
  			Message:            "Filesystem is not read-only",
  		},
  		{
  			Type:               "FrequentUnregisterNetDevice",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "NoFrequentUnregisterNetDevice",
  			Message:            "node is functioning properly",
  		},
  		{
  			Type:               "FrequentKubeletRestart",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:57:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "NoFrequentKubeletRestart",
  			Message:            "kubelet is functioning properly",
  		},
+ 		{
+ 			Type:               "FrequentDockerRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:58:27 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentDockerRestart",
+ 			Message:            "docker is functioning properly",
+ 		},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	NodeInfo:        v1.NodeSystemInfo{MachineID: "ec280dba3c1837e27848a3dec8c080a9", SystemUUID: "ec280dba-3c18-37e2-7848-a3dec8c080a9", BootID: "89e42b89-b944-47ea-8bf6-5f2fe6d80c97", KernelVersion: "4.19.86-coreos", OSImage: "Container Linux by CoreOS 2303.3.0 (Rhyolite)", ContainerRuntimeVersion: "docker://18.6.3", KubeletVersion: "v1.16.4", KubeProxyVersion: "v1.16.4", OperatingSystem: "linux", Architecture: "amd64"},
  	Images:          []v1.ContainerImage{{Names: []string{"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102", "eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4"}, SizeBytes: 601224435}, {Names: []string{"gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6", "gcr.io/google-samples/gb-frontend:v6"}, SizeBytes: 373099368}, {Names: []string{"k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa", "k8s.gcr.io/etcd:3.3.15"}, SizeBytes: 246640776}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71", "gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0"}, SizeBytes: 225358913}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb", "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"}, SizeBytes: 195659796}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e", "eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1"}, SizeBytes: 185406766}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d", "eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1"}, SizeBytes: 153790666}, {Names: []string{"httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a", "httpd:2.4.39-alpine"}, SizeBytes: 126894770}, {Names: []string{"httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060", "httpd:2.4.38-alpine"}, SizeBytes: 123781643}, {Names: []string{"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0", "eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1"}, SizeBytes: 96768084}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0", "gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10"}, SizeBytes: 61365829}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727", "gcr.io/kubernetes-e2e-test-images/agnhost:2.6"}, SizeBytes: 57345321}, {Names: []string{"quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f", "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1"}, SizeBytes: 54431016}, {Names: []string{"quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70", "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1"}, SizeBytes: 51703561}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c", "eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2"}, SizeBytes: 49771411}, {Names: []string{"quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13", "quay.io/k8scsi/csi-attacher:v1.2.0"}, SizeBytes: 46226754}, {Names: []string{"quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838", "quay.io/k8scsi/csi-attacher:v1.1.0"}, SizeBytes: 42839085}, {Names: 
[]string{"quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1", "quay.io/k8scsi/csi-resizer:v0.2.0"}, SizeBytes: 42817100}, {Names: []string{"quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6", "quay.io/k8scsi/csi-resizer:v0.1.0"}, SizeBytes: 42623056}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b", "gcr.io/kubernetes-e2e-test-images/nonroot:1.0"}, SizeBytes: 42321438}, {Names: []string{"redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858", "redis:5.0.5-alpine"}, SizeBytes: 29331594}, {Names: []string{"quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de", "quay.io/k8scsi/hostpathplugin:v1.2.0-rc5"}, SizeBytes: 28761497}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7", "eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1"}, SizeBytes: 22933477}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19", "gcr.io/kubernetes-e2e-test-images/echoserver:2.2"}, SizeBytes: 21692741}, {Names: []string{"quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1", "quay.io/k8scsi/mock-driver:v2.1.0"}, SizeBytes: 16226335}, {Names: []string{"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", "nginx:1.14-alpine"}, SizeBytes: 16032814}, {Names: []string{"quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599", "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"}, SizeBytes: 15815995}, {Names: []string{"quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5", "quay.io/k8scsi/livenessprobe:v1.1.0"}, SizeBytes: 14967303}, {Names: []string{"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8", "eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2"}, SizeBytes: 9371181}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd", "gcr.io/kubernetes-e2e-test-images/dnsutils:1.1"}, SizeBytes: 9349974}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411", "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0"}, SizeBytes: 6757579}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc", "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"}, SizeBytes: 4753501}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6", "gcr.io/kubernetes-e2e-test-images/kitten:1.0"}, SizeBytes: 4747037}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e", "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0"}, SizeBytes: 4732240}, {Names: []string{"alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10", "alpine:3.7"}, SizeBytes: 4206494}, {Names: 
[]string{"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2", "gcr.io/kubernetes-e2e-test-images/mounttest:1.0"}, SizeBytes: 1563521}, {Names: []string{"gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d", "gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0"}, SizeBytes: 1450451}, {Names: []string{"busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a", "busybox:latest"}, SizeBytes: 1219782}, {Names: []string{"busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", "busybox:1.29"}, SizeBytes: 1154361}, {Names: []string{"eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025", "k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea", "eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1", "k8s.gcr.io/pause:3.1"}, SizeBytes: 742472}},
- 	VolumesInUse:    []v1.UniqueVolumeName{"kubernetes.io/csi/csi-mock-csi-mock-volumes-7446^4"},
+ 	VolumesInUse:    nil,
- 	VolumesAttached: []v1.AttachedVolume{{Name: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7446^4"}},
+ 	VolumesAttached: nil,
  	Config:          nil,
  }

Jan 11 19:58:36.352: INFO: node status heartbeat is unchanged for 1.000461454s, waiting for 1m20s
Jan 11 19:58:37.352: INFO: node status heartbeat is unchanged for 2.000043815s, waiting for 1m20s
Jan 11 19:58:38.352: INFO: node status heartbeat is unchanged for 3.00030605s, waiting for 1m20s
Jan 11 19:58:39.352: INFO: node status heartbeat is unchanged for 4.000252466s, waiting for 1m20s
Jan 11 19:58:40.352: INFO: node status heartbeat is unchanged for 5.000265577s, waiting for 1m20s
Jan 11 19:58:41.353: INFO: node status heartbeat is unchanged for 6.001746463s, waiting for 1m20s
Jan 11 19:58:42.352: INFO: node status heartbeat is unchanged for 7.000357076s, waiting for 1m20s
Jan 11 19:58:43.352: INFO: node status heartbeat is unchanged for 8.000266937s, waiting for 1m20s
Jan 11 19:58:44.351: INFO: node status heartbeat is unchanged for 8.999908817s, waiting for 1m20s
Jan 11 19:58:45.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:58:45.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:58:46.352: INFO: node status heartbeat is unchanged for 1.000063581s, waiting for 1m20s
Jan 11 19:58:47.352: INFO: node status heartbeat is unchanged for 2.000120697s, waiting for 1m20s
Jan 11 19:58:48.353: INFO: node status heartbeat is unchanged for 3.000773193s, waiting for 1m20s
Jan 11 19:58:49.352: INFO: node status heartbeat is unchanged for 3.999810724s, waiting for 1m20s
Jan 11 19:58:50.352: INFO: node status heartbeat is unchanged for 5.000075113s, waiting for 1m20s
Jan 11 19:58:51.352: INFO: node status heartbeat is unchanged for 5.999734868s, waiting for 1m20s
Jan 11 19:58:52.352: INFO: node status heartbeat is unchanged for 7.000114824s, waiting for 1m20s
Jan 11 19:58:53.352: INFO: node status heartbeat is unchanged for 7.999850624s, waiting for 1m20s
Jan 11 19:58:54.353: INFO: node status heartbeat is unchanged for 9.001129192s, waiting for 1m20s
Jan 11 19:58:55.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:58:55.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:58:56.352: INFO: node status heartbeat is unchanged for 1.000464107s, waiting for 1m20s
Jan 11 19:58:57.352: INFO: node status heartbeat is unchanged for 2.000163755s, waiting for 1m20s
Jan 11 19:58:58.352: INFO: node status heartbeat is unchanged for 2.999949397s, waiting for 1m20s
Jan 11 19:58:59.351: INFO: node status heartbeat is unchanged for 3.999468068s, waiting for 1m20s
Jan 11 19:59:00.352: INFO: node status heartbeat is unchanged for 5.000090485s, waiting for 1m20s
Jan 11 19:59:01.352: INFO: node status heartbeat is unchanged for 6.000113113s, waiting for 1m20s
Jan 11 19:59:02.352: INFO: node status heartbeat is unchanged for 7.000187098s, waiting for 1m20s
Jan 11 19:59:03.352: INFO: node status heartbeat is unchanged for 8.000012746s, waiting for 1m20s
Jan 11 19:59:04.353: INFO: node status heartbeat is unchanged for 9.000703769s, waiting for 1m20s
Jan 11 19:59:05.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:59:05.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:59:06.353: INFO: node status heartbeat is unchanged for 1.001317399s, waiting for 1m20s
Jan 11 19:59:07.352: INFO: node status heartbeat is unchanged for 2.000247738s, waiting for 1m20s
Jan 11 19:59:08.352: INFO: node status heartbeat is unchanged for 3.000389399s, waiting for 1m20s
Jan 11 19:59:09.352: INFO: node status heartbeat is unchanged for 4.000320248s, waiting for 1m20s
Jan 11 19:59:10.352: INFO: node status heartbeat is unchanged for 4.999813438s, waiting for 1m20s
Jan 11 19:59:11.352: INFO: node status heartbeat is unchanged for 6.000261857s, waiting for 1m20s
Jan 11 19:59:12.351: INFO: node status heartbeat is unchanged for 6.999466571s, waiting for 1m20s
Jan 11 19:59:13.352: INFO: node status heartbeat is unchanged for 8.000263751s, waiting for 1m20s
Jan 11 19:59:14.352: INFO: node status heartbeat is unchanged for 9.000072985s, waiting for 1m20s
Jan 11 19:59:15.351: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:59:15.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:59:16.352: INFO: node status heartbeat is unchanged for 1.000742944s, waiting for 1m20s
Jan 11 19:59:17.352: INFO: node status heartbeat is unchanged for 2.000213871s, waiting for 1m20s
Jan 11 19:59:18.352: INFO: node status heartbeat is unchanged for 3.00074976s, waiting for 1m20s
Jan 11 19:59:19.352: INFO: node status heartbeat is unchanged for 4.000429623s, waiting for 1m20s
Jan 11 19:59:20.352: INFO: node status heartbeat is unchanged for 5.000287706s, waiting for 1m20s
Jan 11 19:59:21.352: INFO: node status heartbeat is unchanged for 6.000398119s, waiting for 1m20s
Jan 11 19:59:22.352: INFO: node status heartbeat is unchanged for 7.000542212s, waiting for 1m20s
Jan 11 19:59:23.352: INFO: node status heartbeat is unchanged for 8.000588144s, waiting for 1m20s
Jan 11 19:59:24.352: INFO: node status heartbeat is unchanged for 9.000381044s, waiting for 1m20s
Jan 11 19:59:25.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:59:25.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentDockerRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentDockerRestart", Message: "docker is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:14 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:24 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:59:26.351: INFO: node status heartbeat is unchanged for 1.000008365s, waiting for 1m20s
Jan 11 19:59:27.352: INFO: node status heartbeat is unchanged for 2.000245932s, waiting for 1m20s
Jan 11 19:59:28.353: INFO: node status heartbeat is unchanged for 3.001110338s, waiting for 1m20s
Jan 11 19:59:29.352: INFO: node status heartbeat is unchanged for 4.00019983s, waiting for 1m20s
Jan 11 19:59:30.352: INFO: node status heartbeat is unchanged for 5.000310986s, waiting for 1m20s
Jan 11 19:59:31.352: INFO: node status heartbeat is unchanged for 6.000374061s, waiting for 1m20s
Jan 11 19:59:32.352: INFO: node status heartbeat is unchanged for 7.00036417s, waiting for 1m20s
Jan 11 19:59:33.352: INFO: node status heartbeat is unchanged for 8.000205172s, waiting for 1m20s
Jan 11 19:59:34.355: INFO: node status heartbeat is unchanged for 9.003908839s, waiting for 1m20s
Jan 11 19:59:35.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:59:35.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
- 		{
- 			Type:               "FrequentContainerdRestart",
- 			Status:             "False",
- 			LastHeartbeatTime:  s"2020-01-11 19:58:27 +0000 UTC",
- 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
- 			Reason:             "NoFrequentContainerdRestart",
- 			Message:            "containerd is functioning properly",
- 		},
  		{
  			Type:               "CorruptDockerOverlay2",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "NoCorruptDockerOverlay2",
  			Message:            "docker overlay2 is functioning properly",
  		},
  		{
  			Type:               "KernelDeadlock",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "KernelHasNoDeadlock",
  			Message:            "kernel has no deadlock",
  		},
  		{
  			Type:               "ReadonlyFilesystem",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "FilesystemIsNotReadOnly",
  			Message:            "Filesystem is not read-only",
  		},
  		{
  			Type:               "FrequentUnregisterNetDevice",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "NoFrequentUnregisterNetDevice",
  			Message:            "node is functioning properly",
  		},
  		{
  			Type:               "FrequentKubeletRestart",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "NoFrequentKubeletRestart",
  			Message:            "kubelet is functioning properly",
  		},
  		{
  			Type:               "FrequentDockerRestart",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:58:27 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"},
  			Reason:             "NoFrequentDockerRestart",
  			Message:            "docker is functioning properly",
  		},
+ 		{
+ 			Type:               "FrequentContainerdRestart",
+ 			Status:             "False",
+ 			LastHeartbeatTime:  s"2020-01-11 19:59:28 +0000 UTC",
+ 			LastTransitionTime: s"2020-01-11 15:56:58 +0000 UTC",
+ 			Reason:             "NoFrequentContainerdRestart",
+ 			Message:            "containerd is functioning properly",
+ 		},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:24 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:34 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:59:36.351: INFO: node status heartbeat is unchanged for 999.737476ms, waiting for 1m20s
Jan 11 19:59:37.352: INFO: node status heartbeat is unchanged for 2.00009816s, waiting for 1m20s
Jan 11 19:59:38.352: INFO: node status heartbeat is unchanged for 3.000222845s, waiting for 1m20s
Jan 11 19:59:39.352: INFO: node status heartbeat is unchanged for 4.0002801s, waiting for 1m20s
Jan 11 19:59:40.352: INFO: node status heartbeat is unchanged for 5.000081255s, waiting for 1m20s
Jan 11 19:59:41.352: INFO: node status heartbeat is unchanged for 6.000298316s, waiting for 1m20s
Jan 11 19:59:42.352: INFO: node status heartbeat is unchanged for 6.999986518s, waiting for 1m20s
Jan 11 19:59:43.352: INFO: node status heartbeat is unchanged for 8.000215212s, waiting for 1m20s
Jan 11 19:59:44.352: INFO: node status heartbeat is unchanged for 8.999918249s, waiting for 1m20s
Jan 11 19:59:45.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:59:45.353: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentContainerdRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentContainerdRestart", Message: "containerd is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:34 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:44 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:59:46.352: INFO: node status heartbeat is unchanged for 999.943851ms, waiting for 1m20s
Jan 11 19:59:47.352: INFO: node status heartbeat is unchanged for 2.000053233s, waiting for 1m20s
Jan 11 19:59:48.351: INFO: node status heartbeat is unchanged for 2.99958072s, waiting for 1m20s
Jan 11 19:59:49.352: INFO: node status heartbeat is unchanged for 3.999900062s, waiting for 1m20s
Jan 11 19:59:50.352: INFO: node status heartbeat is unchanged for 5.000213036s, waiting for 1m20s
Jan 11 19:59:51.352: INFO: node status heartbeat is unchanged for 5.999824547s, waiting for 1m20s
Jan 11 19:59:52.352: INFO: node status heartbeat is unchanged for 6.999798288s, waiting for 1m20s
Jan 11 19:59:53.351: INFO: node status heartbeat is unchanged for 7.999530661s, waiting for 1m20s
Jan 11 19:59:54.352: INFO: node status heartbeat is unchanged for 8.999948266s, waiting for 1m20s
Jan 11 19:59:55.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 19:59:55.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentContainerdRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentContainerdRestart", Message: "containerd is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:44 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:54 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 19:59:56.352: INFO: node status heartbeat is unchanged for 999.743745ms, waiting for 1m20s
Jan 11 19:59:57.352: INFO: node status heartbeat is unchanged for 2.000047492s, waiting for 1m20s
Jan 11 19:59:58.352: INFO: node status heartbeat is unchanged for 3.000092771s, waiting for 1m20s
Jan 11 19:59:59.352: INFO: node status heartbeat is unchanged for 4.000083122s, waiting for 1m20s
Jan 11 20:00:00.352: INFO: node status heartbeat is unchanged for 5.000100076s, waiting for 1m20s
Jan 11 20:00:01.362: INFO: node status heartbeat is unchanged for 6.010562562s, waiting for 1m20s
Jan 11 20:00:02.355: INFO: node status heartbeat is unchanged for 7.003272913s, waiting for 1m20s
Jan 11 20:00:03.353: INFO: node status heartbeat is unchanged for 8.001202067s, waiting for 1m20s
Jan 11 20:00:04.360: INFO: node status heartbeat is unchanged for 9.007661505s, waiting for 1m20s
Jan 11 20:00:05.359: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 20:00:05.361: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentContainerdRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentContainerdRestart", Message: "containerd is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 20:00:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 20:00:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 19:59:54 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 20:00:04 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 20:00:06.355: INFO: node status heartbeat is unchanged for 996.085328ms, waiting for 1m20s
Jan 11 20:00:07.352: INFO: node status heartbeat is unchanged for 1.992688377s, waiting for 1m20s
Jan 11 20:00:08.352: INFO: node status heartbeat is unchanged for 2.992327206s, waiting for 1m20s
Jan 11 20:00:09.351: INFO: node status heartbeat is unchanged for 3.992256199s, waiting for 1m20s
Jan 11 20:00:10.353: INFO: node status heartbeat is unchanged for 4.994049749s, waiting for 1m20s
Jan 11 20:00:11.351: INFO: node status heartbeat is unchanged for 5.992265315s, waiting for 1m20s
Jan 11 20:00:12.355: INFO: node status heartbeat is unchanged for 6.995372123s, waiting for 1m20s
Jan 11 20:00:13.352: INFO: node status heartbeat is unchanged for 7.99281287s, waiting for 1m20s
Jan 11 20:00:14.352: INFO: node status heartbeat is unchanged for 8.992969675s, waiting for 1m20s
Jan 11 20:00:15.352: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Jan 11 20:00:15.354: INFO:   v1.NodeStatus{
  	Capacity:    v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 2}, s: "2", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 28730179584}, Format: "BinarySI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 8054267904}, Format: "BinarySI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Allocatable: v1.ResourceList{"attachable-volumes-aws-ebs": {i: resource.int64Amount{value: 25}, s: "25", Format: "DecimalSI"}, "cpu": {i: resource.int64Amount{value: 1920, scale: -3}, s: "1920m", Format: "DecimalSI"}, "ephemeral-storage": {i: resource.int64Amount{value: 27293670584}, s: "27293670584", Format: "DecimalSI"}, "hugepages-1Gi": {s: "0", Format: "DecimalSI"}, "hugepages-2Mi": {s: "0", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 6577812679}, s: "6577812679", Format: "DecimalSI"}, "pods": {i: resource.int64Amount{value: 110}, s: "110", Format: "DecimalSI"}},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		... // 6 identical elements
  		{Type: "FrequentContainerdRestart", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 19:59:28 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:58 +0000 UTC"}, Reason: "NoFrequentContainerdRestart", Message: "containerd is functioning properly"},
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:18 +0000 UTC"}, Reason: "CalicoIsUp", Message: "Calico is running on this node"},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 20:00:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 20:00:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 20:00:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 20:00:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 20:00:04 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2020-01-11 20:00:14 +0000 UTC"},
  			LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:03 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: v1.Time{Time: s"2020-01-11 15:56:13 +0000 UTC"}, Reason: "KubeletReady", Message: "kubelet is posting ready status"},
  	},
  	Addresses:       []v1.NodeAddress{{Type: "InternalIP", Address: "10.250.27.25"}, {Type: "Hostname", Address: "ip-10-250-27-25.ec2.internal"}, {Type: "InternalDNS", Address: "ip-10-250-27-25.ec2.internal"}},
  	DaemonEndpoints: v1.NodeDaemonEndpoints{KubeletEndpoint: v1.DaemonEndpoint{Port: 10250}},
  	... // 5 identical fields
  }

Jan 11 20:00:16.352: INFO: node status heartbeat is unchanged for 999.83452ms, waiting for 1m20s
Jan 11 20:00:17.352: INFO: node status heartbeat is unchanged for 1.999931961s, waiting for 1m20s
Jan 11 20:00:18.352: INFO: node status heartbeat is unchanged for 2.999587505s, waiting for 1m20s
Jan 11 20:00:19.352: INFO: node status heartbeat is unchanged for 3.999646606s, waiting for 1m20s
Jan 11 20:00:20.352: INFO: node status heartbeat is unchanged for 4.999785021s, waiting for 1m20s
Jan 11 20:00:21.352: INFO: node status heartbeat is unchanged for 5.999813847s, waiting for 1m20s
Jan 11 20:00:22.352: INFO: node status heartbeat is unchanged for 6.999861641s, waiting for 1m20s
Jan 11 20:00:22.441: INFO: node status heartbeat is unchanged for 7.08919893s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [k8s.io] NodeLease
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:22.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-7289" for this suite.
Jan 11 20:00:28.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:32.253: INFO: namespace node-lease-test-7289 deletion completed in 9.631891756s


• [SLOW TEST:310.987 seconds]
[k8s.io] NodeLease
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when the NodeLease feature is enabled
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88
------------------------------
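The diffs printed above are go-cmp output: between two Node GETs taken ~10s apart, only the condition LastHeartbeatTime fields move while Ready stays True. A minimal sketch of that comparison (not the e2e framework's own helper; names are illustrative):

```go
// Compare two successive condition lists while ignoring LastHeartbeatTime,
// and check that the Ready condition stays True.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
	v1 "k8s.io/api/core/v1"
)

// onlyHeartbeatsMoved reports whether the two condition lists are identical
// once LastHeartbeatTime is ignored; a non-empty diff means something other
// than a heartbeat timestamp changed.
func onlyHeartbeatsMoved(prev, curr []v1.NodeCondition) (bool, string) {
	diff := cmp.Diff(prev, curr,
		cmpopts.IgnoreFields(v1.NodeCondition{}, "LastHeartbeatTime"))
	return diff == "", diff
}

// isReady returns true if the Ready condition is present and True.
func isReady(conds []v1.NodeCondition) bool {
	for _, c := range conds {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	var prev, curr []v1.NodeCondition // in practice, taken from two Node GETs ~10s apart
	unchanged, diff := onlyHeartbeatsMoved(prev, curr)
	fmt.Printf("only heartbeats moved: %v, still ready: %v\n", unchanged, isReady(curr))
	if !unchanged {
		fmt.Println(diff)
	}
}
```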
SSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:43.980: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2244
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:319
STEP: creating the pod with failed condition
STEP: updating the pod
Jan 11 19:59:45.900: INFO: Successfully updated pod "var-expansion-f9086647-2328-41bd-b14f-7fe95b086b20"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jan 11 19:59:48.089: INFO: Deleting pod "var-expansion-f9086647-2328-41bd-b14f-7fe95b086b20" in namespace "var-expansion-2244"
Jan 11 19:59:48.180: INFO: Wait up to 5m0s for pod "var-expansion-f9086647-2328-41bd-b14f-7fe95b086b20" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:24.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2244" for this suite.
Jan 11 20:00:30.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:34.046: INFO: namespace var-expansion-2244 deletion completed in 9.596204991s


• [SLOW TEST:170.066 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:319
------------------------------
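The spec above exercises subPathExpr: the mount's subpath is expanded from an environment variable when the container starts, so a failing expansion can be fixed by updating the pod. A minimal sketch of that mechanism (assumed, not the test's own pod; pod, volume, and path names are illustrative):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathExpansionPod mounts an emptyDir under a subpath derived from the
// POD_NAME environment variable via subPathExpr.
func subPathExpansionPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name:         "workdir",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
			Containers: []v1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /volume_mount && sleep 3600"},
				Env: []v1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &v1.EnvVarSource{
						FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []v1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/volume_mount",
					SubPathExpr: "$(POD_NAME)", // expanded by the kubelet at container start
				}},
			}},
		},
	}
}

func main() { _ = subPathExpansionPod() }
```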
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:10.475: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4667
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:943
STEP: prepare CRD with partially-specified validation schema
Jan 11 20:00:11.114: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature
STEP: successfully create CR
Jan 11 20:00:21.552: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4667 create --validate=true -f -'
Jan 11 20:00:22.987: INFO: stderr: ""
Jan 11 20:00:22.987: INFO: stdout: "e2e-test-kubectl-5668-crd.kubectl.example.com/test-cr created\n"
Jan 11 20:00:22.987: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4667 delete e2e-test-kubectl-5668-crds test-cr'
Jan 11 20:00:23.503: INFO: stderr: ""
Jan 11 20:00:23.503: INFO: stdout: "e2e-test-kubectl-5668-crd.kubectl.example.com \"test-cr\" deleted\n"
STEP: successfully apply CR
Jan 11 20:00:23.503: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4667 apply --validate=true -f -'
Jan 11 20:00:24.620: INFO: stderr: ""
Jan 11 20:00:24.620: INFO: stdout: "e2e-test-kubectl-5668-crd.kubectl.example.com/test-cr created\n"
Jan 11 20:00:24.620: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-4667 delete e2e-test-kubectl-5668-crds test-cr'
Jan 11 20:00:25.133: INFO: stderr: ""
Jan 11 20:00:25.133: INFO: stdout: "e2e-test-kubectl-5668-crd.kubectl.example.com \"test-cr\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:25.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4667" for this suite.
Jan 11 20:00:31.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:34.973: INFO: namespace kubectl-4667 deletion completed in 9.570528664s


• [SLOW TEST:24.499 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:898
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:943
------------------------------
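The CR piped to `kubectl create/apply --validate=true -f -` above carries properties beyond what the partially-specified CRD schema declares, and client-side validation still accepts it. A sketch of such a CR built as an unstructured object (assumed; the group, kind, and field names below are placeholders, not the test's generated ones):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

func main() {
	cr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "kubectl.example.com/v1",
		"kind":       "E2eTestKubectlCrd", // placeholder kind
		"metadata":   map[string]interface{}{"name": "test-cr"},
		"spec": map[string]interface{}{
			"bars": []interface{}{ // field assumed to be covered by the partial schema
				map[string]interface{}{"name": "test-bar"},
			},
			"extraProperty": "arbitrary value", // not in the schema; still accepted
		},
	}}
	out, err := yaml.Marshal(cr.Object)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // this YAML is what would be piped to kubectl
}
```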
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:21.201: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5249
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull image from gcr.io [NodeConformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:368
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:26.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5249" for this suite.
Jan 11 20:00:32.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:36.259: INFO: namespace container-runtime-5249 deletion completed in 9.58559848s


• [SLOW TEST:15.058 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  blackbox test
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
    when running a container with a new image
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:252
      should be able to pull image from gcr.io [NodeConformance]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:368
------------------------------
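The "create the container / check the container status / delete the container" steps above boil down to a pod whose image lives on gcr.io and a check that the pull did not fail. A minimal sketch (assumed, not the e2e test's code; the image reference is illustrative):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// gcrImagePod returns a pod spec whose only container is pulled from gcr.io.
func gcrImagePod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-test"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:            "image-pull-test",
				Image:           "gcr.io/google-containers/pause:3.1", // illustrative gcr.io image
				ImagePullPolicy: v1.PullAlways,                        // force an actual pull
			}},
		},
	}
}

// pullSucceeded checks the container status the way "check the container
// status" implies: the container is not stuck waiting on an image pull error.
func pullSucceeded(status v1.ContainerStatus) bool {
	w := status.State.Waiting
	return w == nil || (w.Reason != "ErrImagePull" && w.Reason != "ImagePullBackOff")
}

func main() {
	_ = gcrImagePod()
	_ = pullSucceeded
}
```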
SSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:29.625: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6343
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 20:00:30.449: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-5df8bbf4-3969-49c0-93ac-78eb66d5fb63" in namespace "security-context-test-6343" to be "success or failure"
Jan 11 20:00:30.539: INFO: Pod "busybox-privileged-false-5df8bbf4-3969-49c0-93ac-78eb66d5fb63": Phase="Pending", Reason="", readiness=false. Elapsed: 89.872725ms
Jan 11 20:00:32.628: INFO: Pod "busybox-privileged-false-5df8bbf4-3969-49c0-93ac-78eb66d5fb63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179764579s
Jan 11 20:00:32.629: INFO: Pod "busybox-privileged-false-5df8bbf4-3969-49c0-93ac-78eb66d5fb63" satisfied condition "success or failure"
Jan 11 20:00:32.725: INFO: Got logs for pod "busybox-privileged-false-5df8bbf4-3969-49c0-93ac-78eb66d5fb63": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:32.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6343" for this suite.
Jan 11 20:00:39.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:42.489: INFO: namespace security-context-test-6343 deletion completed in 9.673041199s


• [SLOW TEST:12.864 seconds]
[k8s.io] Security Context
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  When creating a pod with privileged
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:226
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
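The captured log line "ip: RTNETLINK answers: Operation not permitted" shows the unprivileged container being denied a netlink operation. A minimal sketch of that setup (assumed; the exact `ip` command is an assumption, the real test may use a different invocation):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unprivilegedPod runs busybox with privileged=false and attempts an
// interface change that only a privileged container may perform.
func unprivilegedPod() *v1.Pod {
	privileged := false
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// With privileged=false this prints
				// "ip: RTNETLINK answers: Operation not permitted".
				Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &v1.SecurityContext{Privileged: &privileged},
			}},
		},
	}
}

func main() { _ = unprivilegedPod() }
```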
S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:32.268: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename replicaset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-4606
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:104
STEP: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating replica set "condition-test" that asks for more than the allowed pod quota
STEP: Checking replica set "condition-test" has the desired failure condition set
STEP: Scaling down replica set "condition-test" to satisfy pod quota
Jan 11 20:00:34.285: INFO: Updating replica set "condition-test"
STEP: Checking replica set "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:34.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4606" for this suite.
Jan 11 20:00:40.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:44.032: INFO: namespace replicaset-4606 deletion completed in 9.568251372s


• [SLOW TEST:11.764 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:104
------------------------------
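The steps above combine a ResourceQuota that admits only two pods with a ReplicaSet asking for more, then look for the ReplicaFailure condition the controller sets when pod creation is rejected. A minimal sketch of those objects and the condition check (assumed, not the test's own helpers; the image is illustrative):

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podQuota caps the namespace at two pods.
func podQuota() *v1.ResourceQuota {
	return &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: v1.ResourceQuotaSpec{
			Hard: v1.ResourceList{v1.ResourcePods: resource.MustParse("2")},
		},
	}
}

// overQuotaReplicaSet asks for three replicas, exceeding the quota.
func overQuotaReplicaSet() *appsv1.ReplicaSet {
	replicas := int32(3)
	labels := map[string]string{"name": "condition-test"}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{Name: "httpd", Image: "httpd:2.4.38-alpine"}},
				},
			},
		},
	}
}

// hasReplicaFailure reports whether the controller surfaced the quota failure.
func hasReplicaFailure(rs *appsv1.ReplicaSet) bool {
	for _, c := range rs.Status.Conditions {
		if c.Type == appsv1.ReplicaSetReplicaFailure && c.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	_, _ = podQuota(), overQuotaReplicaSet()
	_ = hasReplicaFailure
}
```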
SSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:34.048: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename security-context
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-3936
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:87
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jan 11 20:00:34.779: INFO: Waiting up to 5m0s for pod "security-context-c98de597-bb83-442f-8b1b-290ecf56fd7b" in namespace "security-context-3936" to be "success or failure"
Jan 11 20:00:34.869: INFO: Pod "security-context-c98de597-bb83-442f-8b1b-290ecf56fd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.592102ms
Jan 11 20:00:36.959: INFO: Pod "security-context-c98de597-bb83-442f-8b1b-290ecf56fd7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179314699s
STEP: Saw pod success
Jan 11 20:00:36.959: INFO: Pod "security-context-c98de597-bb83-442f-8b1b-290ecf56fd7b" satisfied condition "success or failure"
Jan 11 20:00:37.048: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod security-context-c98de597-bb83-442f-8b1b-290ecf56fd7b container test-container: 
STEP: delete the pod
Jan 11 20:00:37.250: INFO: Waiting for pod security-context-c98de597-bb83-442f-8b1b-290ecf56fd7b to disappear
Jan 11 20:00:37.339: INFO: Pod security-context-c98de597-bb83-442f-8b1b-290ecf56fd7b no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:37.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3936" for this suite.
Jan 11 20:00:43.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:47.022: INFO: namespace security-context-3936 deletion completed in 9.591744635s


• [SLOW TEST:12.974 seconds]
[k8s.io] [sig-node] Security Context
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:87
------------------------------
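This spec sets identity at the pod level: pod.Spec.SecurityContext.RunAsUser and RunAsGroup apply to every container in the pod. A minimal sketch (assumed; the uid/gid values and the `id` check are illustrative, not the test's own):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runAsUserGroupPod runs its container as uid 1001 / gid 2002 via the
// pod-level security context.
func runAsUserGroupPod() *v1.Pod {
	uid, gid := int64(1001), int64(2002)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				RunAsUser:  &uid,
				RunAsGroup: &gid,
			},
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -u; id -g"}, // should print 1001 and 2002
			}},
		},
	}
}

func main() { _ = runAsUserGroupPod() }
```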
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:59:59.944: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8433
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:184
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8433 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8433;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8433 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8433;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8433.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8433.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8433.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8433.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8433.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8433.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8433.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8433.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8433.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8433.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8433.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 87.223.110.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.110.223.87_udp@PTR;check="$$(dig +tcp +noall +answer +search 87.223.110.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.110.223.87_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8433 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8433;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8433 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8433;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8433.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8433.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8433.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8433.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8433.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8433.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8433.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8433.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8433.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8433.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8433.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8433.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 87.223.110.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.110.223.87_udp@PTR;check="$$(dig +tcp +noall +answer +search 87.223.110.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.110.223.87_tcp@PTR;sleep 1; done
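The dig loops above use `+search` with partial names; they rely on the pod's /etc/resolv.conf search path (dns-8433.svc.cluster.local, svc.cluster.local, cluster.local) to complete names like dns-test-service.dns-8433.svc. A minimal sketch of the same idea from inside a pod in the test namespace (assumed, not part of the probe images):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	partialNames := []string{
		"dns-test-service",              // service name only
		"dns-test-service.dns-8433",     // service.namespace
		"dns-test-service.dns-8433.svc", // service.namespace.svc
	}
	for _, name := range partialNames {
		addrs, err := net.LookupHost(name) // honours /etc/resolv.conf search domains
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}
```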

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 20:00:04.212: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:04.306: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:04.405: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:04.500: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:04.596: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:04.695: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:04.789: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:04.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:05.603: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:05.696: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:05.788: INFO: Unable to read jessie_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:05.881: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:05.974: INFO: Unable to read jessie_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:06.066: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:06.159: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:06.252: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:06.818: INFO: Lookups using dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8433 wheezy_tcp@dns-test-service.dns-8433 wheezy_udp@dns-test-service.dns-8433.svc wheezy_tcp@dns-test-service.dns-8433.svc wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8433 jessie_tcp@dns-test-service.dns-8433 jessie_udp@dns-test-service.dns-8433.svc jessie_tcp@dns-test-service.dns-8433.svc jessie_udp@_http._tcp.dns-test-service.dns-8433.svc jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc]

Jan 11 20:00:11.911: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:12.006: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:12.098: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:12.191: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:12.283: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:12.376: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:12.469: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:12.561: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:13.222: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:13.315: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:13.407: INFO: Unable to read jessie_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:13.499: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:13.592: INFO: Unable to read jessie_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:13.684: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:13.777: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:13.869: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:14.435: INFO: Lookups using dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8433 wheezy_tcp@dns-test-service.dns-8433 wheezy_udp@dns-test-service.dns-8433.svc wheezy_tcp@dns-test-service.dns-8433.svc wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8433 jessie_tcp@dns-test-service.dns-8433 jessie_udp@dns-test-service.dns-8433.svc jessie_tcp@dns-test-service.dns-8433.svc jessie_udp@_http._tcp.dns-test-service.dns-8433.svc jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc]

Jan 11 20:00:16.911: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:17.004: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:17.096: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:17.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:17.281: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:17.373: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:17.466: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:17.559: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:18.223: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:18.315: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:18.408: INFO: Unable to read jessie_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:18.500: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:18.594: INFO: Unable to read jessie_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:18.698: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:18.791: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:18.883: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:19.468: INFO: Lookups using dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8433 wheezy_tcp@dns-test-service.dns-8433 wheezy_udp@dns-test-service.dns-8433.svc wheezy_tcp@dns-test-service.dns-8433.svc wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8433 jessie_tcp@dns-test-service.dns-8433 jessie_udp@dns-test-service.dns-8433.svc jessie_tcp@dns-test-service.dns-8433.svc jessie_udp@_http._tcp.dns-test-service.dns-8433.svc jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc]

Jan 11 20:00:21.912: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:22.004: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:22.097: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:22.194: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:22.287: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:22.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:22.472: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:22.572: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:23.231: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:23.324: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:23.417: INFO: Unable to read jessie_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:23.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:23.602: INFO: Unable to read jessie_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:23.695: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:23.787: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:23.880: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:24.446: INFO: Lookups using dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8433 wheezy_tcp@dns-test-service.dns-8433 wheezy_udp@dns-test-service.dns-8433.svc wheezy_tcp@dns-test-service.dns-8433.svc wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8433 jessie_tcp@dns-test-service.dns-8433 jessie_udp@dns-test-service.dns-8433.svc jessie_tcp@dns-test-service.dns-8433.svc jessie_udp@_http._tcp.dns-test-service.dns-8433.svc jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc]

Jan 11 20:00:26.911: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:27.003: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:27.096: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:27.189: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:27.282: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:27.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:27.468: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:27.561: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:28.217: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:28.310: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:28.402: INFO: Unable to read jessie_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:28.496: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:28.588: INFO: Unable to read jessie_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:28.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:28.774: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:28.874: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:29.438: INFO: Lookups using dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8433 wheezy_tcp@dns-test-service.dns-8433 wheezy_udp@dns-test-service.dns-8433.svc wheezy_tcp@dns-test-service.dns-8433.svc wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8433 jessie_tcp@dns-test-service.dns-8433 jessie_udp@dns-test-service.dns-8433.svc jessie_tcp@dns-test-service.dns-8433.svc jessie_udp@_http._tcp.dns-test-service.dns-8433.svc jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc]

Jan 11 20:00:31.911: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:32.003: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:32.096: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:32.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:32.281: INFO: Unable to read wheezy_udp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:32.373: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:32.466: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:32.558: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:33.334: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:33.428: INFO: Unable to read jessie_udp@dns-test-service.dns-8433 from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:33.709: INFO: Unable to read jessie_tcp@dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:33.802: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:33.894: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc from pod dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b: the server could not find the requested resource (get pods dns-test-91a39334-0864-442e-b7dc-f62b0666424b)
Jan 11 20:00:34.460: INFO: Lookups using dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8433 wheezy_tcp@dns-test-service.dns-8433 wheezy_udp@dns-test-service.dns-8433.svc wheezy_tcp@dns-test-service.dns-8433.svc wheezy_udp@_http._tcp.dns-test-service.dns-8433.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8433.svc jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8433 jessie_tcp@dns-test-service.dns-8433.svc jessie_udp@_http._tcp.dns-test-service.dns-8433.svc jessie_tcp@_http._tcp.dns-test-service.dns-8433.svc]

Jan 11 20:00:39.571: INFO: DNS probes using dns-8433/dns-test-91a39334-0864-442e-b7dc-f62b0666424b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:39.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8433" for this suite.
Jan 11 20:00:46.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:49.532: INFO: namespace dns-8433 deletion completed in 9.585663768s


• [SLOW TEST:49.588 seconds]
[sig-network] DNS
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:184
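
The lookups above exercise the resolver search path: the same service is queried as dns-test-service, dns-test-service.dns-8433, dns-test-service.dns-8433.svc and as the _http._tcp SRV name, over both UDP and TCP, until every variant answers. A minimal way to reproduce the partial-name resolution by hand, assuming a reachable cluster and illustrative names (dns-demo, dns-test-service) rather than the fixtures the suite generates:

kubectl create namespace dns-demo
kubectl -n dns-demo create service clusterip dns-test-service --tcp=80:80
# Resolve the partial and progressively more qualified names from inside the cluster.
kubectl -n dns-demo run dns-client --image=docker.io/library/busybox:1.29 --restart=Never --rm -i -- \
  sh -c 'nslookup dns-test-service; nslookup dns-test-service.dns-demo; nslookup dns-test-service.dns-demo.svc'
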
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:44.038: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2454
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:46
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 11 20:00:44.767: INFO: Waiting up to 5m0s for pod "pod-3f944dc5-f7a8-489c-a799-8494174edb53" in namespace "emptydir-2454" to be "success or failure"
Jan 11 20:00:44.857: INFO: Pod "pod-3f944dc5-f7a8-489c-a799-8494174edb53": Phase="Pending", Reason="", readiness=false. Elapsed: 89.263181ms
Jan 11 20:00:46.946: INFO: Pod "pod-3f944dc5-f7a8-489c-a799-8494174edb53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178453731s
STEP: Saw pod success
Jan 11 20:00:46.946: INFO: Pod "pod-3f944dc5-f7a8-489c-a799-8494174edb53" satisfied condition "success or failure"
Jan 11 20:00:47.034: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-3f944dc5-f7a8-489c-a799-8494174edb53 container test-container: 
STEP: delete the pod
Jan 11 20:00:47.222: INFO: Waiting for pod pod-3f944dc5-f7a8-489c-a799-8494174edb53 to disappear
Jan 11 20:00:47.311: INFO: Pod pod-3f944dc5-f7a8-489c-a799-8494174edb53 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:47.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2454" for this suite.
Jan 11 20:00:53.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:56.976: INFO: namespace emptydir-2454 deletion completed in 9.574708365s


• [SLOW TEST:12.938 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:71
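
The pod behind this spec mounts a memory-backed (tmpfs) emptyDir with a pod-level fsGroup and checks the resulting mount mode and ownership. A hand-written equivalent, with illustrative names rather than the generated pod-3f944dc5-... fixture:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 1234                  # group ownership applied to the volume
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /mnt/volume && grep ' /mnt/volume ' /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-fsgroup-demo   # once the pod has completed; shows mode, gid and the tmpfs mount entry
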
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:47.036: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7533
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 11 20:00:48.047: INFO: Waiting up to 5m0s for pod "pod-9ceba428-d1e9-4594-b821-7cfc54886b73" in namespace "emptydir-7533" to be "success or failure"
Jan 11 20:00:48.137: INFO: Pod "pod-9ceba428-d1e9-4594-b821-7cfc54886b73": Phase="Pending", Reason="", readiness=false. Elapsed: 89.553993ms
Jan 11 20:00:50.227: INFO: Pod "pod-9ceba428-d1e9-4594-b821-7cfc54886b73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179806892s
STEP: Saw pod success
Jan 11 20:00:50.227: INFO: Pod "pod-9ceba428-d1e9-4594-b821-7cfc54886b73" satisfied condition "success or failure"
Jan 11 20:00:50.317: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-9ceba428-d1e9-4594-b821-7cfc54886b73 container test-container: 
STEP: delete the pod
Jan 11 20:00:50.507: INFO: Waiting for pod pod-9ceba428-d1e9-4594-b821-7cfc54886b73 to disappear
Jan 11 20:00:50.596: INFO: Pod pod-9ceba428-d1e9-4594-b821-7cfc54886b73 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:50.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7533" for this suite.
Jan 11 20:00:56.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:00.278: INFO: namespace emptydir-7533 deletion completed in 9.590239475s


• [SLOW TEST:13.242 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
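
The (non-root,0644,tmpfs) case asserts that a file created on a memory-backed emptyDir by a non-root user ends up with mode 0644. A sketch of the same check outside the harness (uid and names are illustrative, not the suite's mount-test fixture):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hi > /mnt/volume/f && chmod 0644 /mnt/volume/f && stat -c '%a %u' /mnt/volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-mode-demo      # expected output: 644 1000
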
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:42.492: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-8725
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177
Jan 11 20:00:43.457: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path
Jan 11 20:00:43.641: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8725" in namespace "provisioning-8725" to be "success or failure"
Jan 11 20:00:43.731: INFO: Pod "hostpath-symlink-prep-provisioning-8725": Phase="Pending", Reason="", readiness=false. Elapsed: 90.077814ms
Jan 11 20:00:45.821: INFO: Pod "hostpath-symlink-prep-provisioning-8725": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180419612s
STEP: Saw pod success
Jan 11 20:00:45.821: INFO: Pod "hostpath-symlink-prep-provisioning-8725" satisfied condition "success or failure"
Jan 11 20:00:45.821: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8725" in namespace "provisioning-8725"
Jan 11 20:00:45.915: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8725" to be fully deleted
Jan 11 20:00:46.005: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-hostpathsymlink-jz4k
STEP: Creating a pod to test subpath
Jan 11 20:00:46.096: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpathsymlink-jz4k" in namespace "provisioning-8725" to be "success or failure"
Jan 11 20:00:46.186: INFO: Pod "pod-subpath-test-hostpathsymlink-jz4k": Phase="Pending", Reason="", readiness=false. Elapsed: 89.999961ms
Jan 11 20:00:48.275: INFO: Pod "pod-subpath-test-hostpathsymlink-jz4k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179703884s
STEP: Saw pod success
Jan 11 20:00:48.276: INFO: Pod "pod-subpath-test-hostpathsymlink-jz4k" satisfied condition "success or failure"
Jan 11 20:00:48.366: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-hostpathsymlink-jz4k container test-container-volume-hostpathsymlink-jz4k: 
STEP: delete the pod
Jan 11 20:00:48.558: INFO: Waiting for pod pod-subpath-test-hostpathsymlink-jz4k to disappear
Jan 11 20:00:48.647: INFO: Pod pod-subpath-test-hostpathsymlink-jz4k no longer exists
STEP: Deleting pod pod-subpath-test-hostpathsymlink-jz4k
Jan 11 20:00:48.647: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-jz4k" in namespace "provisioning-8725"
STEP: Deleting pod
Jan 11 20:00:48.737: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-jz4k" in namespace "provisioning-8725"
Jan 11 20:00:48.917: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8725" in namespace "provisioning-8725" to be "success or failure"
Jan 11 20:00:49.007: INFO: Pod "hostpath-symlink-prep-provisioning-8725": Phase="Pending", Reason="", readiness=false. Elapsed: 90.053271ms
Jan 11 20:00:51.097: INFO: Pod "hostpath-symlink-prep-provisioning-8725": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180461331s
STEP: Saw pod success
Jan 11 20:00:51.097: INFO: Pod "hostpath-symlink-prep-provisioning-8725" satisfied condition "success or failure"
Jan 11 20:00:51.097: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8725" in namespace "provisioning-8725"
Jan 11 20:00:51.190: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8725" to be fully deleted
Jan 11 20:00:51.280: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:51.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8725" for this suite.
Jan 11 20:00:59.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:03.077: INFO: namespace provisioning-8725 deletion completed in 11.705551739s


• [SLOW TEST:20.585 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support non-existent path
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177
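
Here the subPath named in the volumeMount does not exist inside the volume beforehand; for writable in-tree volume types the kubelet creates it and the pod still runs to completion. A reduced version using an emptyDir instead of the hostPathSymlink test driver (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-nonexistent-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /data && touch /data/ok"]
    volumeMounts:
    - name: vol
      mountPath: /data
      subPath: not-created-yet     # does not exist in the volume until the kubelet creates it
  volumes:
  - name: vol
    emptyDir: {}
EOF
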
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:34.975: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1074
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 20:00:36.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369636, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369636, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369636, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369636, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 20:00:40.121: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:41.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1074" for this suite.
Jan 11 20:00:48.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:00:51.433: INFO: namespace webhook-1074 deletion completed in 9.571907907s
STEP: Destroying namespace "webhook-1074-markers" for this suite.
Jan 11 20:00:59.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:03.029: INFO: namespace webhook-1074-markers deletion completed in 11.595564434s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:28.412 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
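
The steps above list all ValidatingWebhookConfigurations created by the test and then delete them as a collection. The equivalent operations with kubectl, selecting by a label of your own choosing:

# List, then bulk-delete, validating webhook configurations carrying a given label.
kubectl get validatingwebhookconfigurations -l purpose=e2e-demo
kubectl delete validatingwebhookconfigurations -l purpose=e2e-demo
# While the webhooks exist, a ConfigMap that violates their rules is rejected;
# after the collection is deleted it is admitted again, which is what the two
# "Creating a configMap that does not comply" steps verify.
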
------------------------------
SSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:49.539: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-59
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should get componentstatuses
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:754
STEP: getting list of componentstatuses
Jan 11 20:00:50.179: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get componentstatuses -o jsonpath={.items[*].metadata.name}'
Jan 11 20:00:50.616: INFO: stderr: ""
Jan 11 20:00:50.616: INFO: stdout: "scheduler controller-manager etcd-0 etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Jan 11 20:00:50.616: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get componentstatuses scheduler'
Jan 11 20:00:51.034: INFO: stderr: ""
Jan 11 20:00:51.034: INFO: stdout: "NAME        AGE\nscheduler   \n"
STEP: getting status of controller-manager
Jan 11 20:00:51.034: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get componentstatuses controller-manager'
Jan 11 20:00:51.470: INFO: stderr: ""
Jan 11 20:00:51.470: INFO: stdout: "NAME                 AGE\ncontroller-manager   \n"
STEP: getting status of etcd-0
Jan 11 20:00:51.470: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get componentstatuses etcd-0'
Jan 11 20:00:51.895: INFO: stderr: ""
Jan 11 20:00:51.895: INFO: stdout: "NAME     AGE\netcd-0   \n"
STEP: getting status of etcd-1
Jan 11 20:00:51.895: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get componentstatuses etcd-1'
Jan 11 20:00:52.322: INFO: stderr: ""
Jan 11 20:00:52.323: INFO: stdout: "NAME     AGE\netcd-1   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:52.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-59" for this suite.
Jan 11 20:01:00.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:04.149: INFO: namespace kubectl-59 deletion completed in 11.674086763s


• [SLOW TEST:14.610 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl get componentstatuses
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:753
    should get componentstatuses
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:754
------------------------------
S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:03.403: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename tables
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-9145
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:04.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9145" for this suite.
Jan 11 20:01:10.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:13.902: INFO: namespace tables-9145 deletion completed in 9.572329094s


• [SLOW TEST:10.500 seconds]
[sig-api-machinery] Servers with support for Table transformation
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
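
This spec asks a backend that does not implement the Table media type for tabular output and expects 406 Not Acceptable. The content negotiation itself can be observed against any core resource; the group version below is one accepted by API servers of this era, and a backend without Table support answers the same request with 406:

kubectl proxy --port=8001 &
curl -s -H 'Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io' \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods | head -n 5
kill %1   # stop the proxy started above
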
------------------------------
S
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:00.309: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename runtimeclass
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-7546
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a Pod requesting a RuntimeClass with scheduling [NodeFeature:RuntimeHandler] 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:61
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a label on the found node.
STEP: verifying the node has the label foo bar
STEP: verifying the node has the label fizz buzz
STEP: Trying to apply taint on the found node.
STEP: verifying the node has the taint foo=bar:NoSchedule
STEP: Trying to create runtimeclass and pod
STEP: verifying the node doesn't have the taint foo=bar:NoSchedule
STEP: removing the label fizz off the node ip-10-250-27-25.ec2.internal
STEP: verifying the node doesn't have the label fizz
STEP: removing the label foo off the node ip-10-250-27-25.ec2.internal
STEP: verifying the node doesn't have the label foo
[AfterEach] [sig-node] RuntimeClass
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:07.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-7546" for this suite.
Jan 11 20:01:13.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:17.202: INFO: namespace runtimeclass-7546 deletion completed in 9.587346035s


• [SLOW TEST:16.893 seconds]
[sig-node] RuntimeClass
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:37
  should run a Pod requesting a RuntimeClass with scheduling [NodeFeature:RuntimeHandler] 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:61
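
The spec labels and taints a node, then creates a RuntimeClass whose scheduling block carries a matching nodeSelector and toleration, plus a pod that selects it through runtimeClassName. A trimmed version of those objects; the handler, labels and image are illustrative and the handler must match what the node's CRI runtime actually provides:

kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1beta1      # group version served by 1.16 clusters
kind: RuntimeClass
metadata:
  name: demo-runtime
handler: runc                        # CRI handler name
scheduling:
  nodeSelector:
    foo: bar
  tolerations:
  - key: foo
    operator: Equal
    value: bar
    effect: NoSchedule
---
apiVersion: v1
kind: Pod
metadata:
  name: runtimeclass-demo
spec:
  runtimeClassName: demo-runtime
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
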
------------------------------
SSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:23.444: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4438
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 20:00:24.083: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:00:26.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4438" for this suite.
Jan 11 20:01:15.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:18.513: INFO: namespace pods-4438 deletion completed in 51.595708776s


• [SLOW TEST:55.069 seconds]
[k8s.io] Pods
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
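
The test opens a websocket against the pod's log subresource. The same endpoint serves ordinary HTTP requests, so the content it streams can be checked without a websocket client (namespace and pod name are illustrative):

kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod/log?follow=false"
kill %1   # stop the proxy started above
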
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:36.274: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5656
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Simple pod
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:371
STEP: creating the pod from 
Jan 11 20:00:36.957: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-5656'
Jan 11 20:00:37.889: INFO: stderr: ""
Jan 11 20:00:37.890: INFO: stdout: "pod/httpd created\n"
Jan 11 20:00:37.890: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jan 11 20:00:37.890: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-5656" to be "running and ready"
Jan 11 20:00:37.980: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 90.022096ms
Jan 11 20:00:40.070: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.179868528s
Jan 11 20:00:42.159: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.269830983s
Jan 11 20:00:44.249: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.359590599s
Jan 11 20:00:46.340: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.450261675s
Jan 11 20:00:48.430: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.540604233s
Jan 11 20:00:50.520: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.630577664s
Jan 11 20:00:52.610: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.720337181s
Jan 11 20:00:52.620: INFO: Pod "httpd" satisfied condition "running and ready"
Jan 11 20:00:52.620: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support inline execution and attach
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:532
STEP: executing a command with run and attach with stdin
Jan 11 20:00:52.620: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5656 run run-test --image=docker.io/library/busybox:1.29 --restart=OnFailure --attach=true --stdin -- sh -c while [ -z "$s" ]; do read s; sleep 1; done; echo read:$s && cat && echo 'stdin closed''
Jan 11 20:00:56.871: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Jan 11 20:00:56.871: INFO: stdout: "read:value\nabcd1234stdin closed\n"
STEP: executing a command with run and attach without stdin
Jan 11 20:00:56.962: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5656 run run-test-2 --image=docker.io/library/busybox:1.29 --restart=OnFailure --attach=true --leave-stdin-open=true -- sh -c cat && echo 'stdin closed''
Jan 11 20:00:59.141: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 11 20:00:59.141: INFO: stdout: "stdin closed\n"
STEP: executing a command with run and attach with stdin with open stdin should remain running
Jan 11 20:00:59.232: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5656 run run-test-3 --image=docker.io/library/busybox:1.29 --restart=OnFailure --attach=true --leave-stdin-open=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 11 20:01:02.200: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Jan 11 20:01:02.200: INFO: stdout: ""
Jan 11 20:01:02.290: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [run-test-3-p4wlm]
Jan 11 20:01:02.291: INFO: Waiting up to 1m0s for pod "run-test-3-p4wlm" in namespace "kubectl-5656" to be "running and ready"
Jan 11 20:01:02.380: INFO: Pod "run-test-3-p4wlm": Phase="Running", Reason="", readiness=true. Elapsed: 89.865848ms
Jan 11 20:01:02.381: INFO: Pod "run-test-3-p4wlm" satisfied condition "running and ready"
Jan 11 20:01:02.381: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [run-test-3-p4wlm]
Jan 11 20:01:02.381: INFO: Waiting up to 1s for 1 pods to be running and ready: [run-test-3-p4wlm]
Jan 11 20:01:02.381: INFO: Waiting up to 1s for pod "run-test-3-p4wlm" in namespace "kubectl-5656" to be "running and ready"
Jan 11 20:01:02.471: INFO: Pod "run-test-3-p4wlm": Phase="Running", Reason="", readiness=true. Elapsed: 90.228637ms
Jan 11 20:01:02.471: INFO: Pod "run-test-3-p4wlm" satisfied condition "running and ready"
Jan 11 20:01:02.471: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [run-test-3-p4wlm]
Jan 11 20:01:02.471: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5656 logs run-test-3-p4wlm'
Jan 11 20:01:03.024: INFO: stderr: ""
Jan 11 20:01:03.025: INFO: stdout: "abcd1234\n"
[AfterEach] Simple pod
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:377
STEP: using delete to clean up resources
Jan 11 20:01:03.116: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-5656'
Jan 11 20:01:03.644: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 20:01:03.644: INFO: stdout: "pod \"httpd\" force deleted\n"
Jan 11 20:01:03.644: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=httpd --no-headers --namespace=kubectl-5656'
Jan 11 20:01:04.186: INFO: stderr: "No resources found in kubectl-5656 namespace.\n"
Jan 11 20:01:04.186: INFO: stdout: ""
Jan 11 20:01:04.186: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=httpd --namespace=kubectl-5656 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 11 20:01:04.630: INFO: stderr: ""
Jan 11 20:01:04.631: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:04.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5656" for this suite.
Jan 11 20:01:16.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:20.311: INFO: namespace kubectl-5656 deletion completed in 15.588690797s


• [SLOW TEST:44.037 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369
    should support inline execution and attach
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:532
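
Stripped of the harness flags, the three kubectl run invocations above differ only in how stdin is wired: attach and read stdin to completion, attach without stdin, or leave stdin open so the pod keeps running after the attach ends. A reduced pair of examples using --restart=Never so the pod name is predictable (image and names illustrative):

# Attach with stdin: the command returns once the container has consumed stdin and exited.
echo abcd1234 | kubectl run run-demo --image=docker.io/library/busybox:1.29 \
  --restart=Never --attach --stdin --rm -- sh -c 'cat && echo "stdin closed"'

# Leave stdin open: the container is not sent EOF when the attach session ends,
# so the pod keeps running and its output is read back later with kubectl logs.
echo abcd1234 | kubectl run run-demo-2 --image=docker.io/library/busybox:1.29 \
  --restart=Never --attach --stdin --leave-stdin-open -- sh -c 'cat && echo "stdin closed"'
kubectl logs run-demo-2
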
------------------------------
SSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:13.906: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename proxy
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-5371
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 20:01:14.933: INFO: (0) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 188.213377ms)
Jan 11 20:01:15.025: INFO: (1) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.170697ms)
Jan 11 20:01:15.117: INFO: (2) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.262256ms)
Jan 11 20:01:15.216: INFO: (3) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 98.620732ms)
Jan 11 20:01:15.308: INFO: (4) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.198855ms)
Jan 11 20:01:15.401: INFO: (5) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.770186ms)
Jan 11 20:01:15.493: INFO: (6) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.460096ms)
Jan 11 20:01:15.586: INFO: (7) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.121992ms)
Jan 11 20:01:15.678: INFO: (8) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.075287ms)
Jan 11 20:01:15.770: INFO: (9) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 91.764475ms)
Jan 11 20:01:15.862: INFO: (10) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 91.998739ms)
Jan 11 20:01:15.954: INFO: (11) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.174229ms)
Jan 11 20:01:16.046: INFO: (12) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.106509ms)
Jan 11 20:01:16.138: INFO: (13) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.136548ms)
Jan 11 20:01:16.231: INFO: (14) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.31825ms)
Jan 11 20:01:16.323: INFO: (15) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.303278ms)
Jan 11 20:01:16.415: INFO: (16) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.31832ms)
Jan 11 20:01:16.508: INFO: (17) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.107145ms)
Jan 11 20:01:16.600: INFO: (18) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.377334ms)
Jan 11 20:01:16.693: INFO: (19) /api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/: 
btmp
containers/
faillog... (200; 92.548808ms)
[AfterEach] version v1
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:16.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5371" for this suite.
Jan 11 20:01:23.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:26.357: INFO: namespace proxy-5371 deletion completed in 9.573114755s


• [SLOW TEST:12.451 seconds]
[sig-network] Proxy
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
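
Each iteration above issues the same GET through the API server's node proxy, addressing the kubelet by node name plus an explicit port. The identical request can be made directly:

# Fetch the kubelet's /logs/ directory listing via the API server node proxy,
# using the node name and explicit kubelet port exactly as the spec does.
kubectl get --raw "/api/v1/nodes/ip-10-250-27-25.ec2.internal:10250/proxy/logs/"
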
------------------------------
SSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:20.329: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-8448
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:348
Jan 11 20:01:21.144: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-90095f1a-9ffc-4a33-b2a3-0f68acc505e7" in namespace "security-context-test-8448" to be "success or failure"
Jan 11 20:01:21.233: INFO: Pod "alpine-nnp-true-90095f1a-9ffc-4a33-b2a3-0f68acc505e7": Phase="Pending", Reason="", readiness=false. Elapsed: 89.571922ms
Jan 11 20:01:23.323: INFO: Pod "alpine-nnp-true-90095f1a-9ffc-4a33-b2a3-0f68acc505e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179767334s
Jan 11 20:01:23.324: INFO: Pod "alpine-nnp-true-90095f1a-9ffc-4a33-b2a3-0f68acc505e7" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:23.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8448" for this suite.
Jan 11 20:01:29.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:33.160: INFO: namespace security-context-test-8448 deletion completed in 9.647820547s


• [SLOW TEST:12.830 seconds]
[k8s.io] Security Context
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:348
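
The spec runs a non-root container with allowPrivilegeEscalation: true and verifies the process is started without the no_new_privs flag, so setuid binaries could still elevate. A minimal manifest exercising the same field; image, uid and command are illustrative, and the NoNewPrivs line is only present in /proc status on reasonably recent kernels:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: allow-priv-esc-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/alpine:3.10
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]   # expect "NoNewPrivs: 0"
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: true
EOF
kubectl logs allow-priv-esc-demo
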
------------------------------
SSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:03.131: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-4605
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 20:01:24.137: INFO: Container started at 2020-01-11 20:01:04 +0000 UTC, pod became ready at 2020-01-11 20:01:23 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:24.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4605" for this suite.
Jan 11 20:01:36.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:39.821: INFO: namespace container-probe-4605 deletion completed in 15.592243996s


• [SLOW TEST:36.689 seconds]
[k8s.io] Probing container
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
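
The assertion above compares the container start time with the time the pod turned Ready: with a readiness probe the pod stays NotReady for at least initialDelaySeconds, and a readiness probe (unlike a liveness probe) must never trigger a restart. Sketch with an illustrative delay:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 20      # pod reports NotReady for at least this long
      periodSeconds: 5
EOF
kubectl get pod readiness-delay-demo -w   # READY flips from 0/1 to 1/1 only after the delay
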
------------------------------
SSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:18.554: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-1690
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod pod-subpath-test-configmap-c4br
STEP: Creating a pod to test atomic-volume-subpath
Jan 11 20:01:19.629: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-c4br" in namespace "subpath-1690" to be "success or failure"
Jan 11 20:01:19.719: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Pending", Reason="", readiness=false. Elapsed: 89.821533ms
Jan 11 20:01:21.809: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 2.180020137s
Jan 11 20:01:23.899: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 4.270135676s
Jan 11 20:01:25.989: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 6.36033658s
Jan 11 20:01:28.080: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 8.451150156s
Jan 11 20:01:30.170: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 10.541298719s
Jan 11 20:01:32.261: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 12.631694203s
Jan 11 20:01:34.351: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 14.722065241s
Jan 11 20:01:36.442: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 16.812706002s
Jan 11 20:01:38.532: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 18.90292144s
Jan 11 20:01:40.622: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Running", Reason="", readiness=true. Elapsed: 20.993319996s
Jan 11 20:01:42.713: INFO: Pod "pod-subpath-test-configmap-c4br": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.083903634s
STEP: Saw pod success
Jan 11 20:01:42.713: INFO: Pod "pod-subpath-test-configmap-c4br" satisfied condition "success or failure"
Jan 11 20:01:42.803: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-configmap-c4br container test-container-subpath-configmap-c4br: 
STEP: delete the pod
Jan 11 20:01:42.993: INFO: Waiting for pod pod-subpath-test-configmap-c4br to disappear
Jan 11 20:01:43.084: INFO: Pod pod-subpath-test-configmap-c4br no longer exists
STEP: Deleting pod pod-subpath-test-configmap-c4br
Jan 11 20:01:43.084: INFO: Deleting pod "pod-subpath-test-configmap-c4br" in namespace "subpath-1690"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:43.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1690" for this suite.
Jan 11 20:01:49.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:52.858: INFO: namespace subpath-1690 deletion completed in 9.592008461s


• [SLOW TEST:34.304 seconds]
[sig-storage] Subpath
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:00:57.026: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename volume
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-3628
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146
Jan 11 20:00:58.075: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path
Jan 11 20:00:58.257: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-3628" in namespace "volume-3628" to be "success or failure"
Jan 11 20:00:58.346: INFO: Pod "hostpath-symlink-prep-volume-3628": Phase="Pending", Reason="", readiness=false. Elapsed: 89.204774ms
Jan 11 20:01:00.436: INFO: Pod "hostpath-symlink-prep-volume-3628": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178859669s
STEP: Saw pod success
Jan 11 20:01:00.436: INFO: Pod "hostpath-symlink-prep-volume-3628" satisfied condition "success or failure"
Jan 11 20:01:00.436: INFO: Deleting pod "hostpath-symlink-prep-volume-3628" in namespace "volume-3628"
Jan 11 20:01:00.528: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-3628" to be fully deleted
Jan 11 20:01:00.617: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Jan 11 20:01:02.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpathsymlink-injector --namespace=volume-3628 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-3628' > /opt/0/index.html'
Jan 11 20:01:04.280: INFO: stderr: ""
Jan 11 20:01:04.280: INFO: stdout: ""
STEP: Checking that text file contents are perfect.
Jan 11 20:01:04.281: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpathsymlink-injector --namespace=volume-3628 -- cat /opt/0/index.html'
Jan 11 20:01:05.589: INFO: stderr: ""
Jan 11 20:01:05.589: INFO: stdout: "Hello from hostPathSymlink from namespace volume-3628\n"
Jan 11 20:01:05.589: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-3628 hostpathsymlink-injector -- /bin/sh -c test -d /opt/0'
Jan 11 20:01:06.873: INFO: stderr: ""
Jan 11 20:01:06.873: INFO: stdout: ""
Jan 11 20:01:06.873: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-3628 hostpathsymlink-injector -- /bin/sh -c test -b /opt/0'
Jan 11 20:01:08.209: INFO: rc: 1
STEP: Deleting pod hostpathsymlink-injector in namespace volume-3628
Jan 11 20:01:08.299: INFO: Waiting for pod hostpathsymlink-injector to disappear
Jan 11 20:01:08.388: INFO: Pod hostpathsymlink-injector still exists
Jan 11 20:01:10.388: INFO: Waiting for pod hostpathsymlink-injector to disappear
Jan 11 20:01:10.478: INFO: Pod hostpathsymlink-injector still exists
Jan 11 20:01:12.388: INFO: Waiting for pod hostpathsymlink-injector to disappear
Jan 11 20:01:12.478: INFO: Pod hostpathsymlink-injector still exists
Jan 11 20:01:14.388: INFO: Waiting for pod hostpathsymlink-injector to disappear
Jan 11 20:01:14.478: INFO: Pod hostpathsymlink-injector still exists
Jan 11 20:01:16.388: INFO: Waiting for pod hostpathsymlink-injector to disappear
Jan 11 20:01:16.478: INFO: Pod hostpathsymlink-injector still exists
Jan 11 20:01:18.388: INFO: Waiting for pod hostpathsymlink-injector to disappear
Jan 11 20:01:18.478: INFO: Pod hostpathsymlink-injector still exists
Jan 11 20:01:20.388: INFO: Waiting for pod hostpathsymlink-injector to disappear
Jan 11 20:01:20.478: INFO: Pod hostpathsymlink-injector still exists
Jan 11 20:01:22.388: INFO: Waiting for pod hostpathsymlink-injector to disappear
Jan 11 20:01:22.478: INFO: Pod hostpathsymlink-injector still exists
Jan 11 20:01:24.388: INFO: Waiting for pod hostpathsymlink-injector to disappear
Jan 11 20:01:24.479: INFO: Pod hostpathsymlink-injector no longer exists
STEP: starting hostpathsymlink-client
STEP: Checking that text file contents are perfect.
Jan 11 20:01:26.838: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpathsymlink-client --namespace=volume-3628 -- cat /opt/0/index.html'
Jan 11 20:01:28.113: INFO: stderr: ""
Jan 11 20:01:28.113: INFO: stdout: "Hello from hostPathSymlink from namespace volume-3628\n"
Jan 11 20:01:28.113: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-3628 hostpathsymlink-client -- /bin/sh -c test -d /opt/0'
Jan 11 20:01:29.381: INFO: stderr: ""
Jan 11 20:01:29.381: INFO: stdout: ""
Jan 11 20:01:29.381: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-3628 hostpathsymlink-client -- /bin/sh -c test -b /opt/0'
Jan 11 20:01:30.710: INFO: rc: 1
STEP: cleaning the environment after hostpathsymlink
Jan 11 20:01:30.710: INFO: Deleting pod "hostpathsymlink-client" in namespace "volume-3628"
Jan 11 20:01:30.800: INFO: Wait up to 5m0s for pod "hostpathsymlink-client" to be fully deleted
Jan 11 20:01:45.072: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-3628" in namespace "volume-3628" to be "success or failure"
Jan 11 20:01:45.161: INFO: Pod "hostpath-symlink-prep-volume-3628": Phase="Pending", Reason="", readiness=false. Elapsed: 88.922146ms
Jan 11 20:01:47.251: INFO: Pod "hostpath-symlink-prep-volume-3628": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178663447s
STEP: Saw pod success
Jan 11 20:01:47.251: INFO: Pod "hostpath-symlink-prep-volume-3628" satisfied condition "success or failure"
Jan 11 20:01:47.251: INFO: Deleting pod "hostpath-symlink-prep-volume-3628" in namespace "volume-3628"
Jan 11 20:01:47.345: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-3628" to be fully deleted
Jan 11 20:01:47.434: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:47.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3628" for this suite.
Jan 11 20:01:53.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:57.092: INFO: namespace volume-3628 deletion completed in 9.568264611s


• [SLOW TEST:60.067 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should store data
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146
------------------------------
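The "should store data" flow above is: a prep pod sets up a symlinked directory on the node, an injector pod writes a file into the volume (the echo into /opt/0/index.html), the injector is deleted, and a client pod mounts the same volume and reads the file back. A simplified sketch of that write-then-read pattern with a plain hostPath volume, without the symlink preparation; the host path is illustrative and both pods have to run on the same node:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-writer
spec:
  nodeName: ip-10-250-27-25.ec2.internal   # pin to one node so a later reader sees the same path
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo 'Hello from hostPath' > /opt/0/index.html"]
    volumeMounts:
    - name: data
      mountPath: /opt/0
  volumes:
  - name: data
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
# A second pod with the same nodeName and hostPath volume can then read the data back, e.g.:
#   kubectl exec hostpath-reader -- cat /opt/0/index.html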
[BeforeEach] [sig-apps] DisruptionController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:04.152: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename disruption
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-8140
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52
[It] should update PodDisruptionBudget status
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:61
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
Jan 11 20:01:05.399: INFO: running pods: 0 < 3
[AfterEach] [sig-apps] DisruptionController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:07.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-8140" for this suite.
Jan 11 20:01:53.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:01:57.252: INFO: namespace disruption-8140 deletion completed in 49.580687574s


• [SLOW TEST:53.100 seconds]
[sig-apps] DisruptionController
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update PodDisruptionBudget status
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:61
------------------------------
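This spec creates a PodDisruptionBudget plus three matching pods and waits for the disruption controller to fill in the budget's status (the "running pods: 0 < 3" line is that wait). The objects involved look roughly like this in 1.16, where PDBs are still served from policy/v1beta1; the name, selector and counts are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: pdb-demo
EOF
# Once pods labelled app=pdb-demo are running, the controller populates the status fields:
kubectl get pdb pdb-demo -o jsonpath='{.status.currentHealthy}/{.status.desiredHealthy} allowed={.status.disruptionsAllowed}'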
SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:52.861: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-1925
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177
Jan 11 20:01:53.502: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path
Jan 11 20:01:53.593: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-hostpath-gzhj
STEP: Creating a pod to test subpath
Jan 11 20:01:53.685: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpath-gzhj" in namespace "provisioning-1925" to be "success or failure"
Jan 11 20:01:53.775: INFO: Pod "pod-subpath-test-hostpath-gzhj": Phase="Pending", Reason="", readiness=false. Elapsed: 90.026772ms
Jan 11 20:01:55.865: INFO: Pod "pod-subpath-test-hostpath-gzhj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180057517s
STEP: Saw pod success
Jan 11 20:01:55.865: INFO: Pod "pod-subpath-test-hostpath-gzhj" satisfied condition "success or failure"
Jan 11 20:01:55.955: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpath-gzhj container test-container-volume-hostpath-gzhj: 
STEP: delete the pod
Jan 11 20:01:56.146: INFO: Waiting for pod pod-subpath-test-hostpath-gzhj to disappear
Jan 11 20:01:56.236: INFO: Pod pod-subpath-test-hostpath-gzhj no longer exists
STEP: Deleting pod pod-subpath-test-hostpath-gzhj
Jan 11 20:01:56.236: INFO: Deleting pod "pod-subpath-test-hostpath-gzhj" in namespace "provisioning-1925"
STEP: Deleting pod
Jan 11 20:01:56.326: INFO: Deleting pod "pod-subpath-test-hostpath-gzhj" in namespace "provisioning-1925"
Jan 11 20:01:56.415: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:56.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1925" for this suite.
Jan 11 20:02:02.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:02:06.103: INFO: namespace provisioning-1925 deletion completed in 9.595160491s


• [SLOW TEST:13.242 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support non-existent path
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177
------------------------------
SS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:33.176: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename nettest
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-6140
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:114
STEP: Performing setup for networking test in namespace nettest-6140
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 11 20:01:33.900: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
STEP: Getting node addresses
Jan 11 20:01:57.433: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 11 20:01:57.614: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:01:57.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6140" for this suite.
Jan 11 20:02:09.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:02:13.294: INFO: namespace nettest-6140 deletion completed in 15.586634503s


S [SKIPPING] [40.118 seconds]
[sig-network] Networking
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103
    should function for pod-Service: udp [It]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:114

    Requires at least 2 nodes (not -1)

    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:02:06.108: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5108
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-faf5e687-35f2-4573-b347-29bea597ca20
STEP: Creating a pod to test consume secrets
Jan 11 20:02:06.943: INFO: Waiting up to 5m0s for pod "pod-secrets-296dc293-39c3-42b1-aebe-52826f0725e8" in namespace "secrets-5108" to be "success or failure"
Jan 11 20:02:07.033: INFO: Pod "pod-secrets-296dc293-39c3-42b1-aebe-52826f0725e8": Phase="Pending", Reason="", readiness=false. Elapsed: 90.28795ms
Jan 11 20:02:09.123: INFO: Pod "pod-secrets-296dc293-39c3-42b1-aebe-52826f0725e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180647494s
STEP: Saw pod success
Jan 11 20:02:09.123: INFO: Pod "pod-secrets-296dc293-39c3-42b1-aebe-52826f0725e8" satisfied condition "success or failure"
Jan 11 20:02:09.213: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-secrets-296dc293-39c3-42b1-aebe-52826f0725e8 container secret-volume-test: 
STEP: delete the pod
Jan 11 20:02:09.402: INFO: Waiting for pod pod-secrets-296dc293-39c3-42b1-aebe-52826f0725e8 to disappear
Jan 11 20:02:09.491: INFO: Pod pod-secrets-296dc293-39c3-42b1-aebe-52826f0725e8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:02:09.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5108" for this suite.
Jan 11 20:02:15.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:02:19.176: INFO: namespace secrets-5108 deletion completed in 9.59399127s


• [SLOW TEST:13.069 seconds]
[sig-storage] Secrets
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
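Here the secret is mounted with a non-default file mode while the pod runs as a non-root user with an fsGroup, and the test then inspects the mode and ownership of the projected file. A sketch of that combination; the uid, gid and mode values are illustrative:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "stat -c '%a %u %g' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      defaultMode: 0440
EOF
kubectl logs secret-mode-demo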
SSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:57.095: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-6711
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 20:02:00.302: INFO: Waiting up to 5m0s for pod "client-envvars-d93679ce-3c1c-425f-8d59-d1d9440a2858" in namespace "pods-6711" to be "success or failure"
Jan 11 20:02:00.391: INFO: Pod "client-envvars-d93679ce-3c1c-425f-8d59-d1d9440a2858": Phase="Pending", Reason="", readiness=false. Elapsed: 89.129958ms
Jan 11 20:02:02.481: INFO: Pod "client-envvars-d93679ce-3c1c-425f-8d59-d1d9440a2858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178827972s
STEP: Saw pod success
Jan 11 20:02:02.481: INFO: Pod "client-envvars-d93679ce-3c1c-425f-8d59-d1d9440a2858" satisfied condition "success or failure"
Jan 11 20:02:02.570: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod client-envvars-d93679ce-3c1c-425f-8d59-d1d9440a2858 container env3cont: 
STEP: delete the pod
Jan 11 20:02:02.797: INFO: Waiting for pod client-envvars-d93679ce-3c1c-425f-8d59-d1d9440a2858 to disappear
Jan 11 20:02:02.887: INFO: Pod client-envvars-d93679ce-3c1c-425f-8d59-d1d9440a2858 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:02:02.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6711" for this suite.
Jan 11 20:02:17.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:02:20.543: INFO: namespace pods-6711 deletion completed in 17.563259178s


• [SLOW TEST:23.448 seconds]
[k8s.io] Pods
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
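This spec relies on the kubelet injecting Docker-link-style environment variables (<SERVICE>_SERVICE_HOST, <SERVICE>_SERVICE_PORT, ...) into containers for every Service that already exists in the namespace when the pod starts. A rough way to observe the same behaviour outside the suite; the service and pod names are illustrative:

kubectl create deployment envvar-backend --image=nginx:1.17
kubectl expose deployment envvar-backend --name=fooservice --port=8765 --target-port=80
# A pod created after the service exists sees FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT:
kubectl run env-dump --image=busybox:1.29 --restart=Never --command -- /bin/sh -c 'env | grep FOOSERVICE'
kubectl logs env-dump   # once the pod has completed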
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:02:13.296: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-6696
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:347
Jan 11 20:02:13.955: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir
Jan 11 20:02:13.955: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-emptydir-j8zm
STEP: Creating a pod to test subpath
Jan 11 20:02:14.048: INFO: Waiting up to 5m0s for pod "pod-subpath-test-emptydir-j8zm" in namespace "provisioning-6696" to be "success or failure"
Jan 11 20:02:14.138: INFO: Pod "pod-subpath-test-emptydir-j8zm": Phase="Pending", Reason="", readiness=false. Elapsed: 89.831139ms
Jan 11 20:02:16.228: INFO: Pod "pod-subpath-test-emptydir-j8zm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180025587s
Jan 11 20:02:18.318: INFO: Pod "pod-subpath-test-emptydir-j8zm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.270019829s
STEP: Saw pod success
Jan 11 20:02:18.318: INFO: Pod "pod-subpath-test-emptydir-j8zm" satisfied condition "success or failure"
Jan 11 20:02:18.408: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-emptydir-j8zm container test-container-subpath-emptydir-j8zm: 
STEP: delete the pod
Jan 11 20:02:18.599: INFO: Waiting for pod pod-subpath-test-emptydir-j8zm to disappear
Jan 11 20:02:18.689: INFO: Pod pod-subpath-test-emptydir-j8zm no longer exists
STEP: Deleting pod pod-subpath-test-emptydir-j8zm
Jan 11 20:02:18.689: INFO: Deleting pod "pod-subpath-test-emptydir-j8zm" in namespace "provisioning-6696"
STEP: Deleting pod
Jan 11 20:02:18.779: INFO: Deleting pod "pod-subpath-test-emptydir-j8zm" in namespace "provisioning-6696"
Jan 11 20:02:18.869: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:02:18.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6696" for this suite.
Jan 11 20:02:27.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:02:30.551: INFO: namespace provisioning-6696 deletion completed in 11.590269815s


• [SLOW TEST:17.255 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support readOnly directory specified in the volumeMount
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:347
------------------------------
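The interesting part of this spec is the volumeMount itself: a sub-directory of an emptyDir is mounted with readOnly: true, and the test expects writes through that mount to be refused while reads still work. A minimal sketch of such a mount; the paths and file names are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readonly-subpath-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: setup
    image: busybox:1.29
    command: ["/bin/sh", "-c", "mkdir -p /cache/sub && echo ok > /cache/sub/file"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  containers:
  - name: reader
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /data/file; touch /data/new || echo 'write refused'"]
    volumeMounts:
    - name: cache
      mountPath: /data
      subPath: sub        # mount only the sub directory of the emptyDir
      readOnly: true      # and make that mount read-only
  volumes:
  - name: cache
    emptyDir: {}
EOF
kubectl logs readonly-subpath-demo -c reader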
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:02:19.189: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2797
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Jan 11 20:02:19.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4afc7ea7-b037-4f0d-a278-76896f1a5b14" in namespace "downward-api-2797" to be "success or failure"
Jan 11 20:02:20.013: INFO: Pod "downwardapi-volume-4afc7ea7-b037-4f0d-a278-76896f1a5b14": Phase="Pending", Reason="", readiness=false. Elapsed: 90.118774ms
Jan 11 20:02:22.103: INFO: Pod "downwardapi-volume-4afc7ea7-b037-4f0d-a278-76896f1a5b14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180559311s
STEP: Saw pod success
Jan 11 20:02:22.103: INFO: Pod "downwardapi-volume-4afc7ea7-b037-4f0d-a278-76896f1a5b14" satisfied condition "success or failure"
Jan 11 20:02:22.193: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-4afc7ea7-b037-4f0d-a278-76896f1a5b14 container client-container: 
STEP: delete the pod
Jan 11 20:02:22.384: INFO: Waiting for pod downwardapi-volume-4afc7ea7-b037-4f0d-a278-76896f1a5b14 to disappear
Jan 11 20:02:22.474: INFO: Pod downwardapi-volume-4afc7ea7-b037-4f0d-a278-76896f1a5b14 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:02:22.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2797" for this suite.
Jan 11 20:02:28.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:02:32.160: INFO: namespace downward-api-2797 deletion completed in 9.594488478s


• [SLOW TEST:12.971 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
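The behaviour under test here is the documented downward-API fallback: when a container sets no CPU limit, limits.cpu exposed through the downward API resolves to the node's allocatable CPU. A pod that surfaces that value through a downwardAPI volume could look like this sketch; the file name and divisor are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # no resources.limits.cpu here, so limits.cpu falls back to node allocatable
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
kubectl logs downward-cpu-demo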
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:57.265: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-4004
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be passed when podInfoOnMount=false
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347
STEP: deploying csi mock driver
Jan 11 20:01:58.137: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4004/csi-attacher
Jan 11 20:01:58.227: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4004
Jan 11 20:01:58.227: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4004
Jan 11 20:01:58.317: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4004
Jan 11 20:01:58.407: INFO: creating *v1.Role: csi-mock-volumes-4004/external-attacher-cfg-csi-mock-volumes-4004
Jan 11 20:01:58.497: INFO: creating *v1.RoleBinding: csi-mock-volumes-4004/csi-attacher-role-cfg
Jan 11 20:01:58.587: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4004/csi-provisioner
Jan 11 20:01:58.678: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4004
Jan 11 20:01:58.678: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4004
Jan 11 20:01:58.768: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4004
Jan 11 20:01:58.857: INFO: creating *v1.Role: csi-mock-volumes-4004/external-provisioner-cfg-csi-mock-volumes-4004
Jan 11 20:01:58.947: INFO: creating *v1.RoleBinding: csi-mock-volumes-4004/csi-provisioner-role-cfg
Jan 11 20:01:59.037: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4004/csi-resizer
Jan 11 20:01:59.128: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4004
Jan 11 20:01:59.128: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4004
Jan 11 20:01:59.217: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4004
Jan 11 20:01:59.308: INFO: creating *v1.Role: csi-mock-volumes-4004/external-resizer-cfg-csi-mock-volumes-4004
Jan 11 20:01:59.398: INFO: creating *v1.RoleBinding: csi-mock-volumes-4004/csi-resizer-role-cfg
Jan 11 20:01:59.489: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4004/csi-mock
Jan 11 20:01:59.579: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4004
Jan 11 20:01:59.669: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4004
Jan 11 20:01:59.759: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4004
Jan 11 20:01:59.849: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4004
Jan 11 20:01:59.938: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4004
Jan 11 20:02:00.028: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4004
Jan 11 20:02:00.118: INFO: creating *v1.StatefulSet: csi-mock-volumes-4004/csi-mockplugin
Jan 11 20:02:00.208: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-4004
Jan 11 20:02:00.298: INFO: creating *v1.StatefulSet: csi-mock-volumes-4004/csi-mockplugin-attacher
Jan 11 20:02:00.388: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4004"
STEP: Creating pod
Jan 11 20:02:00.658: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 11 20:02:00.750: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-x2jz9] to have phase Bound
Jan 11 20:02:00.839: INFO: PersistentVolumeClaim pvc-x2jz9 found but phase is Pending instead of Bound.
Jan 11 20:02:02.929: INFO: PersistentVolumeClaim pvc-x2jz9 found and phase=Bound (2.179114273s)
STEP: Deleting the previously created pod
Jan 11 20:02:09.379: INFO: Deleting pod "pvc-volume-tester-hdsnc" in namespace "csi-mock-volumes-4004"
Jan 11 20:02:09.469: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hdsnc" to be fully deleted
STEP: Checking CSI driver logs
Jan 11 20:02:13.755: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4004","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4004","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4004","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4004","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-4004","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e","storage.kubernetes.io/csiProvisionerIdentity":"1578772922264-8081-csi-mock-csi-mock-volumes-4004"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e","storage.kubernetes.io/csiProvisionerIdentity":"1578772922264-8081-csi-mock-csi-mock-volumes-4004"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e/globalmount","target_path":"/var/lib/kubelet/pods/ccdc56fb-9ffb-4123-986c-4db17e6f82cd/volumes/kubernetes.io~csi/pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e","storage.kubernetes.io/csiProvisionerIdentity":"1578772922264-8081-csi-mock-csi-mock-volumes-4004"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ccdc56fb-9ffb-4123-986c-4db17e6f82cd/volumes/kubernetes.io~csi/pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e/globalmount"},"Response":{},"Error":""}

Jan 11 20:02:13.756: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-hdsnc
Jan 11 20:02:13.756: INFO: Deleting pod "pvc-volume-tester-hdsnc" in namespace "csi-mock-volumes-4004"
STEP: Deleting claim pvc-x2jz9
Jan 11 20:02:14.026: INFO: Waiting up to 2m0s for PersistentVolume pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e to get deleted
Jan 11 20:02:14.116: INFO: PersistentVolume pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e found and phase=Released (90.313521ms)
Jan 11 20:02:16.206: INFO: PersistentVolume pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e found and phase=Released (2.180018176s)
Jan 11 20:02:18.296: INFO: PersistentVolume pvc-72256c8b-2153-4bf1-a58c-0f6551c28e5e was removed
STEP: Deleting storageclass csi-mock-volumes-4004-sc
STEP: Cleaning up resources
STEP: uninstalling csi mock driver
Jan 11 20:02:18.387: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4004/csi-attacher
Jan 11 20:02:18.478: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4004
Jan 11 20:02:18.570: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4004
Jan 11 20:02:18.661: INFO: deleting *v1.Role: csi-mock-volumes-4004/external-attacher-cfg-csi-mock-volumes-4004
Jan 11 20:02:18.752: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4004/csi-attacher-role-cfg
Jan 11 20:02:18.843: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4004/csi-provisioner
Jan 11 20:02:18.935: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4004
Jan 11 20:02:19.028: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4004
Jan 11 20:02:19.119: INFO: deleting *v1.Role: csi-mock-volumes-4004/external-provisioner-cfg-csi-mock-volumes-4004
Jan 11 20:02:19.210: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4004/csi-provisioner-role-cfg
Jan 11 20:02:19.301: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4004/csi-resizer
Jan 11 20:02:19.392: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4004
Jan 11 20:02:19.483: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4004
Jan 11 20:02:19.574: INFO: deleting *v1.Role: csi-mock-volumes-4004/external-resizer-cfg-csi-mock-volumes-4004
Jan 11 20:02:19.666: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4004/csi-resizer-role-cfg
Jan 11 20:02:19.757: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4004/csi-mock
Jan 11 20:02:19.849: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4004
Jan 11 20:02:19.941: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4004
Jan 11 20:02:20.032: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4004
Jan 11 20:02:20.123: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4004
Jan 11 20:02:20.214: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4004
Jan 11 20:02:20.305: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4004
Jan 11 20:02:20.396: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4004/csi-mockplugin
Jan 11 20:02:20.488: INFO: deleting *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-4004
Jan 11 20:02:20.580: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4004/csi-mockplugin-attacher
[AfterEach] [sig-storage] CSI mock volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:02:20.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "csi-mock-volumes-4004" for this suite.
Jan 11 20:02:33.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:02:36.440: INFO: namespace csi-mock-volumes-4004 deletion completed in 15.587668284s


• [SLOW TEST:39.175 seconds]
[sig-storage] CSI mock volume
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:297
    should not be passed when podInfoOnMount=false
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347
------------------------------
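The final assertion in the driver-log check above is that the CSI node calls carry no pod information in volume_context, which is what podInfoOnMount=false on the CSIDriver object requests; with it set to true the kubelet would add the csi.storage.k8s.io/pod.name, pod.namespace, pod.uid and serviceAccount.name keys to NodePublishVolume. The knob itself is just a field on the CSIDriver object, which in 1.16 is served from storage.k8s.io/v1beta1; the driver name below is illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: example.csi.driver.io
spec:
  attachRequired: true
  podInfoOnMount: false   # kubelet omits the csi.storage.k8s.io/pod.* keys from volume_context
EOF
kubectl get csidriver example.csi.driver.io -o yaml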
SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:39.829: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3570
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to switch session affinity for NodePort service
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1819
STEP: creating service in namespace services-3570
STEP: creating service affinity-nodeport-transition in namespace services-3570
STEP: creating replication controller affinity-nodeport-transition in namespace services-3570
I0111 20:01:40.655672    8609 runners.go:184] Created replication controller with name: affinity-nodeport-transition, namespace: services-3570, replica count: 3
I0111 20:01:43.756249    8609 runners.go:184] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 11 20:01:44.027: INFO: Creating new exec pod
Jan 11 20:01:47.391: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Jan 11 20:01:48.676: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Jan 11 20:01:48.676: INFO: stdout: ""
Jan 11 20:01:48.677: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c nc -zv -t -w 2 100.106.99.75 80'
Jan 11 20:01:49.965: INFO: stderr: "+ nc -zv -t -w 2 100.106.99.75 80\nConnection to 100.106.99.75 80 port [tcp/http] succeeded!\n"
Jan 11 20:01:49.965: INFO: stdout: ""
Jan 11 20:01:49.965: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c nc -zv -t -w 2 10.250.27.25 31636'
Jan 11 20:01:51.296: INFO: stderr: "+ nc -zv -t -w 2 10.250.27.25 31636\nConnection to 10.250.27.25 31636 port [tcp/31636] succeeded!\n"
Jan 11 20:01:51.296: INFO: stdout: ""
Jan 11 20:01:51.296: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c nc -zv -t -w 2 10.250.7.77 31636'
Jan 11 20:01:52.566: INFO: stderr: "+ nc -zv -t -w 2 10.250.7.77 31636\nConnection to 10.250.7.77 31636 port [tcp/31636] succeeded!\n"
Jan 11 20:01:52.567: INFO: stdout: ""
Jan 11 20:01:52.747: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:01:54.020: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:01:54.020: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:01:54.020: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:01:56.021: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:01:57.391: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:01:57.392: INFO: stdout: "affinity-nodeport-transition-cj2fd"
Jan 11 20:01:57.392: INFO: Received response from host: affinity-nodeport-transition-cj2fd
Jan 11 20:01:57.572: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:01:58.887: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:01:58.887: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:01:58.887: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:00.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:02.396: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:02.396: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:02.396: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:02.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:04.176: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:04.176: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:04.176: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:04.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:06.293: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:06.293: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:06.293: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:06.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:08.252: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:08.252: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:08.252: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:08.894: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:10.281: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:10.281: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:10.281: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:10.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:12.192: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:12.192: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:12.192: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:12.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:14.205: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:14.205: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:14.205: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:14.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:16.206: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:16.206: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:16.206: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:16.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:18.169: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:18.169: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:18.169: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:18.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:20.179: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:20.179: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:20.179: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:20.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:22.186: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:22.186: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:22.186: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:22.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:24.230: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:24.231: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:24.231: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:24.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:26.271: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:26.271: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:26.271: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:26.887: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-3570 execpod-affinity2hdwk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31636/'
Jan 11 20:02:28.212: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31636/\n"
Jan 11 20:02:28.212: INFO: stdout: "affinity-nodeport-transition-lbqg2"
Jan 11 20:02:28.212: INFO: Received response from host: affinity-nodeport-transition-lbqg2
Jan 11 20:02:28.212: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-3570, will wait for the garbage collector to delete the pods
Jan 11 20:02:28.586: INFO: Deleting ReplicationController affinity-nodeport-transition took: 91.104989ms
Jan 11 20:02:28.686: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.270669ms
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:02:43.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3570" for this suite.
Jan 11 20:02:50.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:02:53.664: INFO: namespace services-3570 deletion completed in 9.58576858s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95


• [SLOW TEST:73.836 seconds]
[sig-network] Services
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for NodePort service
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1819
------------------------------
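
For reference, the session-affinity probe logged above can be repeated by hand against any NodePort service. A minimal sketch, using the node IP, NodePort, namespace and exec pod seen in this run (substitute your own values; the pod from this run no longer exists):

# With session affinity enabled on the service, every response should name the same backend pod.
NODE_IP=10.250.27.25
NODE_PORT=31636
for i in $(seq 1 15); do
  kubectl exec -n services-3570 execpod-affinity2hdwk -- \
    /bin/sh -c "curl -q -s --connect-timeout 2 http://${NODE_IP}:${NODE_PORT}/"
  echo
done
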
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:02:32.161: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4733
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Simple pod
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:371
STEP: creating the pod from 
Jan 11 20:02:32.803: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-4733'
Jan 11 20:02:33.740: INFO: stderr: ""
Jan 11 20:02:33.740: INFO: stdout: "pod/httpd created\n"
Jan 11 20:02:33.740: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jan 11 20:02:33.741: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-4733" to be "running and ready"
Jan 11 20:02:33.831: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 90.166209ms
Jan 11 20:02:35.921: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.180423943s
Jan 11 20:02:38.011: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.270675276s
Jan 11 20:02:40.101: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.360466405s
Jan 11 20:02:42.191: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.450920074s
Jan 11 20:02:44.282: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.541203232s
Jan 11 20:02:46.372: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.631721612s
Jan 11 20:02:46.372: INFO: Pod "httpd" satisfied condition "running and ready"
Jan 11 20:02:46.372: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support exec through kubectl proxy
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:463
STEP: Starting kubectl proxy
Jan 11 20:02:46.373: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config proxy -p 0 --disable-filter'
STEP: Running kubectl via kubectl proxy using --server=http://127.0.0.1:41011
Jan 11 20:02:46.430: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --server=http://127.0.0.1:41011 --namespace=kubectl-4733 exec httpd echo running in container'
Jan 11 20:02:48.241: INFO: stderr: ""
Jan 11 20:02:48.241: INFO: stdout: "running in container\n"
[AfterEach] Simple pod
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:377
STEP: using delete to clean up resources
Jan 11 20:02:48.242: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-4733'
Jan 11 20:02:48.753: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 20:02:48.753: INFO: stdout: "pod \"httpd\" force deleted\n"
Jan 11 20:02:48.753: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=httpd --no-headers --namespace=kubectl-4733'
Jan 11 20:02:49.275: INFO: stderr: "No resources found in kubectl-4733 namespace.\n"
Jan 11 20:02:49.275: INFO: stdout: ""
Jan 11 20:02:49.275: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=httpd --namespace=kubectl-4733 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 11 20:02:49.700: INFO: stderr: ""
Jan 11 20:02:49.700: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:02:49.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4733" for this suite.
Jan 11 20:02:58.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:01.383: INFO: namespace kubectl-4733 deletion completed in 11.591536183s


• [SLOW TEST:29.222 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369
    should support exec through kubectl proxy
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:463
------------------------------
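
As a sketch of what this spec exercises: kubectl proxy can serve the API on a local port, and kubectl can then be pointed at that proxy for exec. The fixed port below is illustrative (the test picks a random one with -p 0); the namespace and pod name are the ones from this run:

# --disable-filter lets exec/attach paths through the proxy (they are filtered out by default).
kubectl proxy --port=8001 --disable-filter &
PROXY_PID=$!
sleep 2
# Run exec through the local proxy instead of the real API server endpoint.
kubectl --server=http://127.0.0.1:8001 --namespace=kubectl-4733 exec httpd -- echo running in container
kill "$PROXY_PID"
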
SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:02:20.545: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5845
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating Redis RC
Jan 11 20:02:21.185: INFO: namespace kubectl-5845
Jan 11 20:02:21.185: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-5845'
Jan 11 20:02:21.817: INFO: stderr: ""
Jan 11 20:02:21.817: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 11 20:02:22.907: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 20:02:22.907: INFO: Found 0 / 1
Jan 11 20:02:23.907: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 20:02:23.907: INFO: Found 1 / 1
Jan 11 20:02:23.907: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 11 20:02:23.997: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 20:02:23.997: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 11 20:02:23.997: INFO: wait on redis-master startup in kubectl-5845 
Jan 11 20:02:23.997: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs redis-master-792fp redis-master --namespace=kubectl-5845'
Jan 11 20:02:24.524: INFO: stderr: ""
Jan 11 20:02:24.525: INFO: stdout: "1:C 11 Jan 2020 20:02:22.603 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo\n1:C 11 Jan 2020 20:02:22.603 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started\n1:C 11 Jan 2020 20:02:22.603 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n1:M 11 Jan 2020 20:02:22.605 * Running mode=standalone, port=6379.\n1:M 11 Jan 2020 20:02:22.605 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Jan 2020 20:02:22.605 # Server initialized\n1:M 11 Jan 2020 20:02:22.605 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Jan 2020 20:02:22.605 * Ready to accept connections\n"
STEP: exposing RC
Jan 11 20:02:24.525: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5845'
Jan 11 20:02:25.047: INFO: stderr: ""
Jan 11 20:02:25.047: INFO: stdout: "service/rm2 exposed\n"
Jan 11 20:02:25.136: INFO: Service rm2 in namespace kubectl-5845 found.
STEP: exposing service
Jan 11 20:02:27.314: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5845'
Jan 11 20:02:27.882: INFO: stderr: ""
Jan 11 20:02:27.882: INFO: stdout: "service/rm3 exposed\n"
Jan 11 20:02:27.971: INFO: Service rm3 in namespace kubectl-5845 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:02:30.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5845" for this suite.
Jan 11 20:02:58.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:01.806: INFO: namespace kubectl-5845 deletion completed in 31.565877942s


• [SLOW TEST:41.261 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105
    should create services for rc  [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
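
The expose steps above correspond to plain kubectl commands. A sketch using the names from this run (the redis-master replication controller itself is created from a manifest the log does not show and is assumed to exist):

# Expose the RC as a service, then expose that service again under a new name and port.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 -n kubectl-5845
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 -n kubectl-5845
kubectl get svc rm2 rm3 -n kubectl-5845
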
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:02:30.575: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-702
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:213
Jan 11 20:02:31.220: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir
Jan 11 20:02:31.220: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-emptydir-nx47
STEP: Creating a pod to test atomic-volume-subpath
Jan 11 20:02:31.313: INFO: Waiting up to 5m0s for pod "pod-subpath-test-emptydir-nx47" in namespace "provisioning-702" to be "success or failure"
Jan 11 20:02:31.403: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Pending", Reason="", readiness=false. Elapsed: 90.026975ms
Jan 11 20:02:33.493: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180056766s
Jan 11 20:02:35.584: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Running", Reason="", readiness=true. Elapsed: 4.270532314s
Jan 11 20:02:37.674: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Running", Reason="", readiness=true. Elapsed: 6.360671936s
Jan 11 20:02:39.764: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Running", Reason="", readiness=true. Elapsed: 8.450799018s
Jan 11 20:02:41.854: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Running", Reason="", readiness=true. Elapsed: 10.541044598s
Jan 11 20:02:43.944: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Running", Reason="", readiness=true. Elapsed: 12.631160247s
Jan 11 20:02:46.035: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Running", Reason="", readiness=true. Elapsed: 14.72179169s
Jan 11 20:02:48.124: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Running", Reason="", readiness=true. Elapsed: 16.811491351s
Jan 11 20:02:50.214: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Running", Reason="", readiness=true. Elapsed: 18.901482492s
Jan 11 20:02:52.305: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Running", Reason="", readiness=true. Elapsed: 20.992128027s
Jan 11 20:02:54.395: INFO: Pod "pod-subpath-test-emptydir-nx47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.082436061s
STEP: Saw pod success
Jan 11 20:02:54.395: INFO: Pod "pod-subpath-test-emptydir-nx47" satisfied condition "success or failure"
Jan 11 20:02:54.485: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-emptydir-nx47 container test-container-subpath-emptydir-nx47: 
STEP: delete the pod
Jan 11 20:02:54.677: INFO: Waiting for pod pod-subpath-test-emptydir-nx47 to disappear
Jan 11 20:02:54.766: INFO: Pod pod-subpath-test-emptydir-nx47 no longer exists
STEP: Deleting pod pod-subpath-test-emptydir-nx47
Jan 11 20:02:54.766: INFO: Deleting pod "pod-subpath-test-emptydir-nx47" in namespace "provisioning-702"
STEP: Deleting pod
Jan 11 20:02:54.855: INFO: Deleting pod "pod-subpath-test-emptydir-nx47" in namespace "provisioning-702"
Jan 11 20:02:54.945: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:02:54.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-702" for this suite.
Jan 11 20:03:03.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:06.626: INFO: namespace provisioning-702 deletion completed in 11.590287339s


• [SLOW TEST:36.052 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support file as subpath [LinuxOnly]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:213
------------------------------
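
A simplified manifest in the spirit of the subPath pod this spec generates. Names, image and paths are placeholders, and a directory subPath is shown instead of the file subPath the framework builds:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: work
    emptyDir: {}
  initContainers:
  - name: prep
    image: busybox:1.29
    command: ["/bin/sh", "-c", "mkdir -p /data/sub && echo hello > /data/sub/file.txt"]
    volumeMounts:
    - name: work
      mountPath: /data
  containers:
  - name: reader
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /mnt/file.txt"]
    volumeMounts:
    - name: work
      mountPath: /mnt
      subPath: sub   # mount only the sub/ directory of the emptyDir volume
EOF
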
SS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:03:01.807: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8825
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch ConfigMap successfully
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:138
STEP: Creating configMap configmap-8825/configmap-test-c65ba117-8efd-4910-9c6e-c423ac525cc3
STEP: Updating configMap configmap-8825/configmap-test-c65ba117-8efd-4910-9c6e-c423ac525cc3
STEP: Verifying update of configMap configmap-8825/configmap-test-c65ba117-8efd-4910-9c6e-c423ac525cc3
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:03:03.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8825" for this suite.
Jan 11 20:03:09.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:13.278: INFO: namespace configmap-8825 deletion completed in 9.564485833s


• [SLOW TEST:11.471 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
  should patch ConfigMap successfully
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:138
------------------------------
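
The create/update/verify flow above can be reproduced directly with kubectl. A sketch; the ConfigMap name, namespace and key are placeholders, not taken from this run:

kubectl create configmap demo-config -n default --from-literal=mutation=0
# Merge-patch a single key, then read it back to confirm the update was persisted.
kubectl patch configmap demo-config -n default --type=merge -p '{"data":{"mutation":"1"}}'
kubectl get configmap demo-config -n default -o jsonpath='{.data.mutation}'
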
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:03:01.392: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-9746
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
Jan 11 20:03:02.848: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path
Jan 11 20:03:02.939: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-hostpath-65j4
STEP: Creating a pod to test subpath
Jan 11 20:03:03.031: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpath-65j4" in namespace "provisioning-9746" to be "success or failure"
Jan 11 20:03:03.121: INFO: Pod "pod-subpath-test-hostpath-65j4": Phase="Pending", Reason="", readiness=false. Elapsed: 89.879946ms
Jan 11 20:03:05.212: INFO: Pod "pod-subpath-test-hostpath-65j4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180472348s
Jan 11 20:03:07.303: INFO: Pod "pod-subpath-test-hostpath-65j4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.271503787s
STEP: Saw pod success
Jan 11 20:03:07.303: INFO: Pod "pod-subpath-test-hostpath-65j4" satisfied condition "success or failure"
Jan 11 20:03:07.396: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpath-65j4 container test-container-subpath-hostpath-65j4: 
STEP: delete the pod
Jan 11 20:03:07.586: INFO: Waiting for pod pod-subpath-test-hostpath-65j4 to disappear
Jan 11 20:03:07.680: INFO: Pod pod-subpath-test-hostpath-65j4 no longer exists
STEP: Deleting pod pod-subpath-test-hostpath-65j4
Jan 11 20:03:07.680: INFO: Deleting pod "pod-subpath-test-hostpath-65j4" in namespace "provisioning-9746"
STEP: Deleting pod
Jan 11 20:03:07.770: INFO: Deleting pod "pod-subpath-test-hostpath-65j4" in namespace "provisioning-9746"
Jan 11 20:03:07.860: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:03:07.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9746" for this suite.
Jan 11 20:03:14.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:17.540: INFO: namespace provisioning-9746 deletion completed in 9.588801629s


• [SLOW TEST:16.148 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
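
A simplified readOnly-mount variant of what this spec checks (without the subPath the test adds); the hostPath, pod name and image are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-mount-demo
spec:
  restartPolicy: Never
  volumes:
  - name: host
    hostPath:
      path: /tmp/readonly-demo      # placeholder path on the node
      type: DirectoryOrCreate
  containers:
  - name: probe
    image: busybox:1.29
    # The write must fail because the volumeMount below is readOnly.
    command: ["/bin/sh", "-c", "if touch /mnt/hostdir/forbidden 2>/dev/null; then echo unexpected write; exit 1; else echo read-only as expected; fi"]
    volumeMounts:
    - name: host
      mountPath: /mnt/hostdir
      readOnly: true
EOF
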
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:02:36.450: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5877
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:223
STEP: deploying csi-hostpath driver
Jan 11 20:02:37.280: INFO: creating *v1.ServiceAccount: provisioning-5877/csi-attacher
Jan 11 20:02:37.370: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-5877
Jan 11 20:02:37.370: INFO: Define cluster role external-attacher-runner-provisioning-5877
Jan 11 20:02:37.460: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-5877
Jan 11 20:02:37.551: INFO: creating *v1.Role: provisioning-5877/external-attacher-cfg-provisioning-5877
Jan 11 20:02:37.642: INFO: creating *v1.RoleBinding: provisioning-5877/csi-attacher-role-cfg
Jan 11 20:02:37.732: INFO: creating *v1.ServiceAccount: provisioning-5877/csi-provisioner
Jan 11 20:02:37.822: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-5877
Jan 11 20:02:37.822: INFO: Define cluster role external-provisioner-runner-provisioning-5877
Jan 11 20:02:37.912: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-5877
Jan 11 20:02:38.002: INFO: creating *v1.Role: provisioning-5877/external-provisioner-cfg-provisioning-5877
Jan 11 20:02:38.093: INFO: creating *v1.RoleBinding: provisioning-5877/csi-provisioner-role-cfg
Jan 11 20:02:38.183: INFO: creating *v1.ServiceAccount: provisioning-5877/csi-snapshotter
Jan 11 20:02:38.272: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-5877
Jan 11 20:02:38.272: INFO: Define cluster role external-snapshotter-runner-provisioning-5877
Jan 11 20:02:38.362: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-5877
Jan 11 20:02:38.452: INFO: creating *v1.Role: provisioning-5877/external-snapshotter-leaderelection-provisioning-5877
Jan 11 20:02:38.542: INFO: creating *v1.RoleBinding: provisioning-5877/external-snapshotter-leaderelection
Jan 11 20:02:38.633: INFO: creating *v1.ServiceAccount: provisioning-5877/csi-resizer
Jan 11 20:02:38.723: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-5877
Jan 11 20:02:38.723: INFO: Define cluster role external-resizer-runner-provisioning-5877
Jan 11 20:02:38.812: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-5877
Jan 11 20:02:38.902: INFO: creating *v1.Role: provisioning-5877/external-resizer-cfg-provisioning-5877
Jan 11 20:02:38.992: INFO: creating *v1.RoleBinding: provisioning-5877/csi-resizer-role-cfg
Jan 11 20:02:39.082: INFO: creating *v1.Service: provisioning-5877/csi-hostpath-attacher
Jan 11 20:02:39.176: INFO: creating *v1.StatefulSet: provisioning-5877/csi-hostpath-attacher
Jan 11 20:02:39.266: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-5877
Jan 11 20:02:39.356: INFO: creating *v1.Service: provisioning-5877/csi-hostpathplugin
Jan 11 20:02:39.450: INFO: creating *v1.StatefulSet: provisioning-5877/csi-hostpathplugin
Jan 11 20:02:39.540: INFO: creating *v1.Service: provisioning-5877/csi-hostpath-provisioner
Jan 11 20:02:39.639: INFO: creating *v1.StatefulSet: provisioning-5877/csi-hostpath-provisioner
Jan 11 20:02:39.730: INFO: creating *v1.Service: provisioning-5877/csi-hostpath-resizer
Jan 11 20:02:39.823: INFO: creating *v1.StatefulSet: provisioning-5877/csi-hostpath-resizer
Jan 11 20:02:39.913: INFO: creating *v1.Service: provisioning-5877/csi-snapshotter
Jan 11 20:02:40.006: INFO: creating *v1.StatefulSet: provisioning-5877/csi-snapshotter
Jan 11 20:02:40.097: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-5877
Jan 11 20:02:40.186: INFO: Test running for native CSI Driver, not checking metrics
Jan 11 20:02:40.187: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass provisioning-5877-csi-hostpath-provisioning-5877-sc5v5gh
STEP: creating a claim
Jan 11 20:02:40.276: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 11 20:02:40.368: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathdjqwg] to have phase Bound
Jan 11 20:02:40.457: INFO: PersistentVolumeClaim csi-hostpathdjqwg found but phase is Pending instead of Bound.
Jan 11 20:02:42.547: INFO: PersistentVolumeClaim csi-hostpathdjqwg found and phase=Bound (2.179448003s)
STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-nkq8
STEP: Checking for subpath error in container status
Jan 11 20:02:49.003: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-nkq8" in namespace "provisioning-5877"
Jan 11 20:02:49.094: INFO: Wait up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-nkq8" to be fully deleted
STEP: Deleting pod
Jan 11 20:02:59.275: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-nkq8" in namespace "provisioning-5877"
STEP: Deleting pvc
Jan 11 20:02:59.365: INFO: Deleting PersistentVolumeClaim "csi-hostpathdjqwg"
Jan 11 20:02:59.456: INFO: Waiting up to 5m0s for PersistentVolume pvc-91778cbe-dbae-4e1c-ae3f-a67941523391 to get deleted
Jan 11 20:02:59.545: INFO: PersistentVolume pvc-91778cbe-dbae-4e1c-ae3f-a67941523391 was removed
STEP: Deleting sc
STEP: uninstalling csi-hostpath driver
Jan 11 20:02:59.637: INFO: deleting *v1.ServiceAccount: provisioning-5877/csi-attacher
Jan 11 20:02:59.727: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-5877
Jan 11 20:02:59.818: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-5877
Jan 11 20:02:59.909: INFO: deleting *v1.Role: provisioning-5877/external-attacher-cfg-provisioning-5877
Jan 11 20:03:00.000: INFO: deleting *v1.RoleBinding: provisioning-5877/csi-attacher-role-cfg
Jan 11 20:03:00.090: INFO: deleting *v1.ServiceAccount: provisioning-5877/csi-provisioner
Jan 11 20:03:00.182: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-5877
Jan 11 20:03:00.273: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-5877
Jan 11 20:03:00.365: INFO: deleting *v1.Role: provisioning-5877/external-provisioner-cfg-provisioning-5877
Jan 11 20:03:00.456: INFO: deleting *v1.RoleBinding: provisioning-5877/csi-provisioner-role-cfg
Jan 11 20:03:00.552: INFO: deleting *v1.ServiceAccount: provisioning-5877/csi-snapshotter
Jan 11 20:03:00.644: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-5877
Jan 11 20:03:00.735: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-5877
Jan 11 20:03:00.826: INFO: deleting *v1.Role: provisioning-5877/external-snapshotter-leaderelection-provisioning-5877
Jan 11 20:03:00.917: INFO: deleting *v1.RoleBinding: provisioning-5877/external-snapshotter-leaderelection
Jan 11 20:03:01.009: INFO: deleting *v1.ServiceAccount: provisioning-5877/csi-resizer
Jan 11 20:03:01.100: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-5877
Jan 11 20:03:01.193: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-5877
Jan 11 20:03:01.284: INFO: deleting *v1.Role: provisioning-5877/external-resizer-cfg-provisioning-5877
Jan 11 20:03:01.375: INFO: deleting *v1.RoleBinding: provisioning-5877/csi-resizer-role-cfg
Jan 11 20:03:01.467: INFO: deleting *v1.Service: provisioning-5877/csi-hostpath-attacher
Jan 11 20:03:01.564: INFO: deleting *v1.StatefulSet: provisioning-5877/csi-hostpath-attacher
Jan 11 20:03:01.655: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-5877
Jan 11 20:03:01.746: INFO: deleting *v1.Service: provisioning-5877/csi-hostpathplugin
Jan 11 20:03:01.842: INFO: deleting *v1.StatefulSet: provisioning-5877/csi-hostpathplugin
Jan 11 20:03:01.933: INFO: deleting *v1.Service: provisioning-5877/csi-hostpath-provisioner
Jan 11 20:03:02.029: INFO: deleting *v1.StatefulSet: provisioning-5877/csi-hostpath-provisioner
Jan 11 20:03:02.121: INFO: deleting *v1.Service: provisioning-5877/csi-hostpath-resizer
Jan 11 20:03:02.216: INFO: deleting *v1.StatefulSet: provisioning-5877/csi-hostpath-resizer
Jan 11 20:03:02.307: INFO: deleting *v1.Service: provisioning-5877/csi-snapshotter
Jan 11 20:03:02.403: INFO: deleting *v1.StatefulSet: provisioning-5877/csi-snapshotter
Jan 11 20:03:02.494: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-5877
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:03:02.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled
STEP: Destroying namespace "provisioning-5877" for this suite.
Jan 11 20:03:14.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:18.257: INFO: namespace provisioning-5877 deletion completed in 15.581378231s


• [SLOW TEST:41.808 seconds]
[sig-storage] CSI Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should fail if subpath directory is outside the volume [Slow]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:223
------------------------------
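
Outside the e2e harness, the dynamic-provisioning part of this spec boils down to a StorageClass-backed PVC plus a pod that mounts it. A sketch assuming a CSI driver with a StorageClass named csi-hostpath-sc is already installed (the test deploys its own per-namespace copy of the driver, as logged above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc   # assumed StorageClass name
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-demo-pod
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: csi-demo-pvc
  containers:
  - name: app
    image: busybox:1.29
    command: ["/bin/sh", "-c", "df -h /data && sleep 5"]
    volumeMounts:
    - name: vol
      mountPath: /data
EOF
kubectl get pvc csi-demo-pvc
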
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:03:17.545: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-6752
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 20:03:18.185: INFO: Creating deployment "test-recreate-deployment"
Jan 11 20:03:18.275: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 11 20:03:18.454: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 11 20:03:18.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369798, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369798, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369798, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369798, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-68fc85c7bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 20:03:20.635: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 11 20:03:20.816: INFO: Updating deployment test-recreate-deployment
Jan 11 20:03:20.816: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
Jan 11 20:03:20.996: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-6752 /apis/apps/v1/namespaces/deployment-6752/deployments/test-recreate-deployment 9dfdcbef-1c87-4573-be40-181126b0b255 66884 2 2020-01-11 20:03:18 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000bdff18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-11 20:03:20 +0000 UTC,LastTransitionTime:2020-01-11 20:03:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-11 20:03:20 +0000 UTC,LastTransitionTime:2020-01-11 20:03:18 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jan 11 20:03:21.086: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-6752 /apis/apps/v1/namespaces/deployment-6752/replicasets/test-recreate-deployment-5f94c574ff 6f2188c9-fd77-43a1-b5ac-fe1378d46d59 66883 1 2020-01-11 20:03:20 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 9dfdcbef-1c87-4573-be40-181126b0b255 0xc000198b87 0xc000198b88}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000198be8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 11 20:03:21.086: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 11 20:03:21.086: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-68fc85c7bb  deployment-6752 /apis/apps/v1/namespaces/deployment-6752/replicasets/test-recreate-deployment-68fc85c7bb fda067e3-eb64-420e-ba12-0c464e2ff8e1 66875 2 2020-01-11 20:03:18 +0000 UTC   map[name:sample-pod-3 pod-template-hash:68fc85c7bb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 9dfdcbef-1c87-4573-be40-181126b0b255 0xc000198c57 0xc000198c58}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 68fc85c7bb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:68fc85c7bb] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000198cd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 11 20:03:21.177: INFO: Pod "test-recreate-deployment-5f94c574ff-rqgbg" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-rqgbg test-recreate-deployment-5f94c574ff- deployment-6752 /api/v1/namespaces/deployment-6752/pods/test-recreate-deployment-5f94c574ff-rqgbg 948ebcc5-ebec-4e80-b0a4-74afba666302 66880 0 2020-01-11 20:03:20 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 6f2188c9-fd77-43a1-b5ac-fe1378d46d59 0xc000199237 0xc000199238}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7grvn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7grvn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7grvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:03:20 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:,StartTime:2020-01-11 20:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:03:21.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6752" for this suite.
Jan 11 20:03:27.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:30.865: INFO: namespace deployment-6752 deletion completed in 9.59677312s


• [SLOW TEST:13.320 seconds]
[sig-apps] Deployment
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
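
The behavior under test corresponds to a Deployment with strategy type Recreate. A sketch of such a deployment and a rollout trigger; the name is a placeholder, while the images mirror the ones seen in this run:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 1
  strategy:
    type: Recreate               # old pods are terminated before any new pod is created
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/redis:5.0.5-alpine
EOF
# Trigger a new rollout; with Recreate, the redis pod is deleted before the httpd pod starts.
kubectl set image deployment/recreate-demo app=docker.io/library/httpd:2.4.38-alpine
kubectl rollout status deployment/recreate-demo
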
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:03:30.870: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-7470
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: tmpfs]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-9077d280-f350-4777-a8f5-fefe28e782f3"
Jan 11 20:03:33.965: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7470 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9077d280-f350-4777-a8f5-fefe28e782f3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9077d280-f350-4777-a8f5-fefe28e782f3" "/tmp/local-volume-test-9077d280-f350-4777-a8f5-fefe28e782f3"'
Jan 11 20:03:35.293: INFO: stderr: ""
Jan 11 20:03:35.293: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 20:03:35.293: INFO: Creating a PV followed by a PVC
Jan 11 20:03:35.474: INFO: Waiting for PV local-pvd5qk7 to bind to PVC pvc-h8bfk
Jan 11 20:03:35.474: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-h8bfk] to have phase Bound
Jan 11 20:03:35.563: INFO: PersistentVolumeClaim pvc-h8bfk found and phase=Bound (89.695969ms)
Jan 11 20:03:35.563: INFO: Waiting up to 3m0s for PersistentVolume local-pvd5qk7 to have phase Bound
Jan 11 20:03:35.653: INFO: PersistentVolume local-pvd5qk7 found and phase=Bound (89.963139ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Jan 11 20:03:35.833: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: tmpfs]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 20:03:35.834: INFO: Deleting PersistentVolumeClaim "pvc-h8bfk"
Jan 11 20:03:35.925: INFO: Deleting PersistentVolume "local-pvd5qk7"
STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-9077d280-f350-4777-a8f5-fefe28e782f3"
Jan 11 20:03:36.016: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7470 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9077d280-f350-4777-a8f5-fefe28e782f3"'
Jan 11 20:03:37.306: INFO: stderr: ""
Jan 11 20:03:37.306: INFO: stdout: ""
STEP: Removing the test directory
Jan 11 20:03:37.306: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7470 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9077d280-f350-4777-a8f5-fefe28e782f3'
Jan 11 20:03:38.642: INFO: stderr: ""
Jan 11 20:03:38.642: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:03:38.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-7470" for this suite.
Jan 11 20:03:45.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:48.420: INFO: namespace persistent-local-volumes-test-7470 deletion completed in 9.595418452s


S [SKIPPING] [17.550 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: tmpfs]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set different fsGroup for second pod if first pod is deleted [It]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286

      Disabled temporarily, reopen after #73168 is fixed

      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
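For orientation, the [BeforeEach] above provisions this volume entirely by hand: it creates a 10Mi tmpfs mount on the node and then a local PersistentVolume pinned to that node, plus a PVC that statically binds to it. A sketch of the equivalent manual steps follows; the mount path, the PV/PVC names and the local-storage class string are illustrative placeholders (only the node name is taken from the log above), so treat it as an outline rather than the test's exact objects.

# on the node (the test runs this through kubectl exec + nsenter on a hostexec pod):
mkdir -p /tmp/local-volume-demo
mount -t tmpfs -o size=10m tmpfs /tmp/local-volume-demo

# from the client: a local PV pinned to the node plus a PVC that binds to it
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo                 # hypothetical name
spec:
  capacity:
    storage: 10Mi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage     # any class string shared by PV and PVC works for static binding
  local:
    path: /tmp/local-volume-demo
  nodeAffinity:                       # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["ip-10-250-27-25.ec2.internal"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-demo                # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Mi
EOF
kubectl get pvc local-pvc-demo        # STATUS should report Bound, as in the log above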
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:03:13.289: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4016
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 20:03:15.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369794, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369794, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369794, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369794, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 20:03:17.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369794, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369794, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369794, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369794, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 20:03:20.355: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:03:34.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4016" for this suite.
Jan 11 20:03:42.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:46.296: INFO: namespace webhook-4016 deletion completed in 11.568974687s
STEP: Destroying namespace "webhook-4016-markers" for this suite.
Jan 11 20:03:52.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:03:55.868: INFO: namespace webhook-4016-markers deletion completed in 9.572573509s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:42.939 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
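The webhook spec above registers the same deliberately slow (about 5s) webhook several times, varying only timeoutSeconds and failurePolicy: a 1s timeout with the default Fail policy rejects the request, Ignore suppresses the error, and omitting timeoutSeconds falls back to the v1 default of 10s. A minimal sketch of one such registration is below; the configuration and webhook names, the rule on configmaps and the handler path are assumptions (only the e2e-test-webhook service and the webhook-4016 namespace come from the log), and a real registration also needs a caBundle so the apiserver trusts the webhook server.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-demo             # hypothetical name
webhooks:
- name: slow.webhook.example.com      # hypothetical name
  clientConfig:
    service:
      name: e2e-test-webhook          # service name as logged above
      namespace: webhook-4016
      path: /always-allow-delay-5s    # illustrative path; the handler is assumed to sleep ~5s
    # caBundle: <base64-encoded CA>   # required in practice; omitted in this sketch
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  failurePolicy: Ignore               # with Fail, a 1s timeout against a 5s webhook fails the request
  timeoutSeconds: 1                   # omit to get the v1 default of 10s
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
EOF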
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 19:57:48.326: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5990
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-5990
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5990
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-5990
Jan 11 19:57:49.321: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 11 19:57:59.411: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 11 19:57:59.501: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 19:58:00.801: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 19:58:00.801: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 19:58:00.802: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 11 19:58:00.892: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 11 19:58:10.982: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 19:58:10.982: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 19:58:11.338: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999581s
Jan 11 19:58:12.428: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.910740704s
Jan 11 19:58:13.518: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.821023258s
Jan 11 19:58:14.608: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.731052363s
Jan 11 19:58:15.699: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.640768455s
Jan 11 19:58:16.789: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.550437606s
Jan 11 19:58:17.879: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.4602614s
Jan 11 19:58:18.970: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.370157357s
Jan 11 19:58:20.060: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.27959783s
Jan 11 19:58:21.150: INFO: Verifying statefulset ss doesn't scale past 1 for another 189.476028ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5990
Jan 11 19:58:22.240: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:58:23.541: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 11 19:58:23.541: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 19:58:23.541: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 11 19:58:23.631: INFO: Found 1 stateful pod, waiting for 3
Jan 11 19:58:33.722: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 19:58:33.722: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 19:58:33.722: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 11 19:58:33.900: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 19:58:35.186: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 19:58:35.186: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 19:58:35.186: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 11 19:58:35.186: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 19:58:36.493: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 19:58:36.494: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 19:58:36.494: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 11 19:58:36.494: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 19:58:37.819: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 19:58:37.819: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 19:58:37.819: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 11 19:58:37.819: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 19:58:37.998: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 19:58:37.998: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 19:58:37.998: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 19:58:38.267: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999198s
Jan 11 19:58:39.357: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.909917304s
Jan 11 19:58:40.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.820118654s
Jan 11 19:58:41.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.729774503s
Jan 11 19:58:42.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.639279899s
Jan 11 19:58:43.721: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.549002202s
Jan 11 19:58:44.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.455375248s
Jan 11 19:58:45.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.364848485s
Jan 11 19:58:46.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.274439254s
Jan 11 19:58:48.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 184.291282ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-5990
Jan 11 19:58:49.173: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:58:50.458: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 11 19:58:50.458: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 19:58:50.458: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 11 19:58:50.458: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:58:51.753: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 11 19:58:51.754: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 19:58:51.754: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 11 19:58:51.754: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:58:53.003: INFO: rc: 1
Jan 11 19:58:53.003: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    error: Internal error occurred: error executing command in container: container not running (ddc88aa9b3c77e0e58e902cab1016f6ea6ff2c6a318659d394ee7117be3c8974)
 []  0xc002c93f80 exit status 1   true [0xc0029f8bc0 0xc0029f8bd8 0xc0029f8bf0] [0xc0029f8bc0 0xc0029f8bd8 0xc0029f8bf0] [0xc0029f8bd0 0xc0029f8be8] [0x10efe30 0x10efe30] 0xc00306e960 }:
Command stdout:

stderr:
error: Internal error occurred: error executing command in container: container not running (ddc88aa9b3c77e0e58e902cab1016f6ea6ff2c6a318659d394ee7117be3c8974)

error:
exit status 1
Jan 11 19:59:03.003: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:59:03.555: INFO: rc: 1
Jan 11 19:59:03.555: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00145dc50 exit status 1   true [0xc004528bf0 0xc004528c08 0xc004528c20] [0xc004528bf0 0xc004528c08 0xc004528c20] [0xc004528c00 0xc004528c18] [0x10efe30 0x10efe30] 0xc004579b00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 19:59:13.555: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:59:14.065: INFO: rc: 1
Jan 11 19:59:14.065: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc004a730e0 exit status 1   true [0xc0010602b8 0xc001060550 0xc001060650] [0xc0010602b8 0xc001060550 0xc001060650] [0xc001060408 0xc001060640] [0x10efe30 0x10efe30] 0xc00054a780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 19:59:24.065: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:59:24.577: INFO: rc: 1
Jan 11 19:59:24.577: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0024a65a0 exit status 1   true [0xc000011ea8 0xc00054cb30 0xc0000e8b00] [0xc000011ea8 0xc00054cb30 0xc0000e8b00] [0xc00054ca48 0xc0000e88c0] [0x10efe30 0x10efe30] 0xc004a022a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 19:59:34.577: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:59:35.090: INFO: rc: 1
Jan 11 19:59:35.090: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023ac5a0 exit status 1   true [0xc000608058 0xc000608318 0xc0006084b0] [0xc000608058 0xc000608318 0xc0006084b0] [0xc000608240 0xc000608488] [0x10efe30 0x10efe30] 0xc00363c060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 19:59:45.090: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:59:45.603: INFO: rc: 1
Jan 11 19:59:45.604: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023acc60 exit status 1   true [0xc000608500 0xc0006087e8 0xc000608870] [0xc000608500 0xc0006087e8 0xc000608870] [0xc000608688 0xc000608830] [0x10efe30 0x10efe30] 0xc00363c360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 19:59:55.604: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 19:59:56.113: INFO: rc: 1
Jan 11 19:59:56.113: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0024a6bd0 exit status 1   true [0xc0000e9828 0xc0000e9ef8 0xc00471e010] [0xc0000e9828 0xc0000e9ef8 0xc00471e010] [0xc0000e9e80 0xc00471e008] [0x10efe30 0x10efe30] 0xc004a025a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:00:06.113: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:00:06.630: INFO: rc: 1
Jan 11 20:00:06.630: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023ad2c0 exit status 1   true [0xc0006088e8 0xc000608be8 0xc000608d78] [0xc0006088e8 0xc000608be8 0xc000608d78] [0xc000608a40 0xc000608d08] [0x10efe30 0x10efe30] 0xc00363c6c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:00:16.630: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:00:17.145: INFO: rc: 1
Jan 11 20:00:17.145: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023ad8c0 exit status 1   true [0xc000608dc8 0xc000608f30 0xc000609070] [0xc000608dc8 0xc000608f30 0xc000609070] [0xc000608eb0 0xc000608fc8] [0x10efe30 0x10efe30] 0xc00363cae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:00:27.145: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:00:32.656: INFO: rc: 1
Jan 11 20:00:32.656: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc003d425d0 exit status 1   true [0xc0027d0008 0xc0027d0050 0xc0027d0070] [0xc0027d0008 0xc0027d0050 0xc0027d0070] [0xc0027d0038 0xc0027d0068] [0x10efe30 0x10efe30] 0xc003559260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:00:42.657: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:00:43.184: INFO: rc: 1
Jan 11 20:00:43.184: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc003d42c00 exit status 1   true [0xc0027d0078 0xc0027d0090 0xc0027d00a8] [0xc0027d0078 0xc0027d0090 0xc0027d00a8] [0xc0027d0088 0xc0027d00a0] [0x10efe30 0x10efe30] 0xc003559560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:00:53.184: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:00:53.694: INFO: rc: 1
Jan 11 20:00:53.694: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0024a71d0 exit status 1   true [0xc00471e018 0xc00471e030 0xc00471e048] [0xc00471e018 0xc00471e030 0xc00471e048] [0xc00471e028 0xc00471e040] [0x10efe30 0x10efe30] 0xc004a02960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:01:03.695: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:01:04.226: INFO: rc: 1
Jan 11 20:01:04.226: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc004a73740 exit status 1   true [0xc001060678 0xc001060950 0xc001060bd0] [0xc001060678 0xc001060950 0xc001060bd0] [0xc0010608a0 0xc001060b38] [0x10efe30 0x10efe30] 0xc001e700c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:01:14.227: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:01:14.751: INFO: rc: 1
Jan 11 20:01:14.751: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc003d425a0 exit status 1   true [0xc0000e88c0 0xc0000e9d30 0xc00054c030] [0xc0000e88c0 0xc0000e9d30 0xc00054c030] [0xc0000e9828 0xc0000e9ef8] [0x10efe30 0x10efe30] 0xc002ae02a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:01:24.752: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:01:25.322: INFO: rc: 1
Jan 11 20:01:25.322: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023ac5d0 exit status 1   true [0xc000011ea8 0xc0027d0038 0xc0027d0068] [0xc000011ea8 0xc0027d0038 0xc0027d0068] [0xc0027d0018 0xc0027d0058] [0x10efe30 0x10efe30] 0xc0021e7aa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:01:35.322: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:01:35.832: INFO: rc: 1
Jan 11 20:01:35.832: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023accf0 exit status 1   true [0xc0027d0070 0xc0027d0088 0xc0027d00a0] [0xc0027d0070 0xc0027d0088 0xc0027d00a0] [0xc0027d0080 0xc0027d0098] [0x10efe30 0x10efe30] 0xc0035592c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:01:45.832: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:01:46.372: INFO: rc: 1
Jan 11 20:01:46.372: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc004a73110 exit status 1   true [0xc000608058 0xc000608318 0xc0006084b0] [0xc000608058 0xc000608318 0xc0006084b0] [0xc000608240 0xc000608488] [0x10efe30 0x10efe30] 0xc00363c240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:01:56.372: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:01:56.904: INFO: rc: 1
Jan 11 20:01:56.904: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc004a736e0 exit status 1   true [0xc000608500 0xc0006087e8 0xc000608870] [0xc000608500 0xc0006087e8 0xc000608870] [0xc000608688 0xc000608830] [0x10efe30 0x10efe30] 0xc00363c5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:02:06.906: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:02:07.444: INFO: rc: 1
Jan 11 20:02:07.445: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023ad320 exit status 1   true [0xc0027d00a8 0xc0027d00c0 0xc0027d00d8] [0xc0027d00a8 0xc0027d00c0 0xc0027d00d8] [0xc0027d00b8 0xc0027d00d0] [0x10efe30 0x10efe30] 0xc0035595c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:02:17.445: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:02:17.971: INFO: rc: 1
Jan 11 20:02:17.971: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023ad920 exit status 1   true [0xc0027d00e0 0xc0027d00f8 0xc0027d0110] [0xc0027d00e0 0xc0027d00f8 0xc0027d0110] [0xc0027d00f0 0xc0027d0108] [0x10efe30 0x10efe30] 0xc003559b00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:02:27.971: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:02:28.495: INFO: rc: 1
Jan 11 20:02:28.495: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc004a73da0 exit status 1   true [0xc0006088e8 0xc000608be8 0xc000608d78] [0xc0006088e8 0xc000608be8 0xc000608d78] [0xc000608a40 0xc000608d08] [0x10efe30 0x10efe30] 0xc00363c900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:02:38.496: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:02:39.006: INFO: rc: 1
Jan 11 20:02:39.006: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0035763c0 exit status 1   true [0xc000608dc8 0xc000608f30 0xc000609070] [0xc000608dc8 0xc000608f30 0xc000609070] [0xc000608eb0 0xc000608fc8] [0x10efe30 0x10efe30] 0xc00363cd80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:02:49.006: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:02:49.514: INFO: rc: 1
Jan 11 20:02:49.514: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0024a66c0 exit status 1   true [0xc00471e000 0xc00471e018 0xc00471e030] [0xc00471e000 0xc00471e018 0xc00471e030] [0xc00471e010 0xc00471e028] [0x10efe30 0x10efe30] 0xc004a022a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:02:59.515: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:03:00.025: INFO: rc: 1
Jan 11 20:03:00.025: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc003576ba0 exit status 1   true [0xc0006090a8 0xc0006091c0 0xc000609410] [0xc0006090a8 0xc0006091c0 0xc000609410] [0xc000609150 0xc000609348] [0x10efe30 0x10efe30] 0xc00363d440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:03:10.026: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:03:10.538: INFO: rc: 1
Jan 11 20:03:10.538: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc003577410 exit status 1   true [0xc000609480 0xc0006095b8 0xc000609678] [0xc000609480 0xc0006095b8 0xc000609678] [0xc0006095a0 0xc000609648] [0x10efe30 0x10efe30] 0xc001e70420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:03:20.538: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:03:21.060: INFO: rc: 1
Jan 11 20:03:21.060: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc004a730b0 exit status 1   true [0xc0000e8338 0xc0000e9828 0xc0000e9ef8] [0xc0000e8338 0xc0000e9828 0xc0000e9ef8] [0xc0000e8b00 0xc0000e9e80] [0x10efe30 0x10efe30] 0xc00363c240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:03:31.060: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:03:31.574: INFO: rc: 1
Jan 11 20:03:31.574: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc004a73710 exit status 1   true [0xc000608058 0xc000608318 0xc0006084b0] [0xc000608058 0xc000608318 0xc0006084b0] [0xc000608240 0xc000608488] [0x10efe30 0x10efe30] 0xc00363c5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:03:41.574: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:03:42.112: INFO: rc: 1
Jan 11 20:03:42.112: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc004a73d70 exit status 1   true [0xc000608500 0xc0006087e8 0xc000608870] [0xc000608500 0xc0006087e8 0xc000608870] [0xc000608688 0xc000608830] [0x10efe30 0x10efe30] 0xc00363c900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 11 20:03:52.112: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5990 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:03:52.620: INFO: rc: 1
Jan 11 20:03:52.620: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Jan 11 20:03:52.620: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Jan 11 20:03:52.891: INFO: Deleting all statefulset in ns statefulset-5990
Jan 11 20:03:52.981: INFO: Scaling statefulset ss to 0
Jan 11 20:03:53.249: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 20:03:53.338: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:03:53.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5990" for this suite.
Jan 11 20:04:01.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:05.268: INFO: namespace statefulset-5990 deletion completed in 11.570704508s


• [SLOW TEST:376.942 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
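What the long retry tail above reflects is the mechanism of the spec itself: readiness is broken by moving index.html out of the httpd docroot, and with the default OrderedReady pod management a StatefulSet neither creates the next ordinal nor removes one while any pod is unhealthy (scale-down also proceeds strictly in reverse order, which is why ss-2 is already gone while the test keeps retrying the restore command against it). A condensed sketch of the same manipulation done by hand, reusing the namespace, labels and pod names from this run:

# break readiness on ss-0, then ask for 3 replicas: ss-1/ss-2 must not be created yet
kubectl exec -n statefulset-5990 ss-0 -- sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/'
kubectl scale statefulset ss -n statefulset-5990 --replicas=3
kubectl get pods -n statefulset-5990 -l baz=blah,foo=bar      # only ss-0, NotReady

# restore readiness: pods then come up in order ss-0 -> ss-1 -> ss-2
kubectl exec -n statefulset-5990 ss-0 -- sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/'

# the same gate applies on the way down; pods are removed in reverse order ss-2 -> ss-1 -> ss-0
kubectl scale statefulset ss -n statefulset-5990 --replicas=0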
SSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:03:06.632: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename disruption
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-8920
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52
[It] evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149
STEP: Waiting for the pdb to be processed
STEP: locating a running pod
STEP: Waiting for all pods to be running
Jan 11 20:03:11.813: INFO: running pods: 4 < 10
[AfterEach] [sig-apps] DisruptionController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:03:13.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-8920" for this suite.
Jan 11 20:04:06.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:09.679: INFO: namespace disruption-8920 deletion completed in 55.588783417s


• [SLOW TEST:63.047 seconds]
[sig-apps] DisruptionController
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149
------------------------------
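The eviction spec above creates a PodDisruptionBudget whose maxUnavailable is given as a percentage, waits for its pods to be running, and then issues one eviction that the budget must permit. A rough sketch of the two pieces, with a hypothetical name, label and 10% budget (the real test computes its own numbers); on a 1.16 cluster the PDB API is still policy/v1beta1, and the eviction itself goes through the pods/eviction subresource, for which kubectl drain is the usual client:

kubectl apply -f - <<'EOF'
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: demo-pdb                      # hypothetical name
spec:
  maxUnavailable: "10%"               # percentage form, as in the spec name above
  selector:
    matchLabels:
      app: demo                       # hypothetical label
EOF

# a drain honours the budget: it posts Eviction objects and backs off when the PDB would be violated
kubectl drain <node-name> --ignore-daemonsets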
SSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:03:56.246: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-412
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should not run with an explicit root user ID [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:133
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:03:59.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-412" for this suite.
Jan 11 20:04:11.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:14.714: INFO: namespace security-context-test-412 deletion completed in 15.559087611s


• [SLOW TEST:18.468 seconds]
[k8s.io] Security Context
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  When creating a container with runAsNonRoot
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:133
------------------------------
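The runAsNonRoot spec above checks kubelet behaviour rather than API validation: a container that sets runAsNonRoot together with an explicit runAsUser of 0 is accepted by the apiserver but must never be started. A minimal sketch of such a pod, with a hypothetical name and a busybox image standing in for the test image; the expected outcome is that the container stays unstarted (the kubelet reports a config error such as CreateContainerConfigError):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: explicit-root-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29   # stand-in image
    command: ["sh", "-c", "id && sleep 3600"]
    securityContext:
      runAsNonRoot: true              # kubelet must verify the container will not run as UID 0
      runAsUser: 0                    # explicit root UID, so the container is refused
EOF
kubectl get pod explicit-root-demo    # expected to stay un-Running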
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:05.277: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5357
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should create a quota with scopes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1975
STEP: calling kubectl quota
Jan 11 20:04:05.949: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create quota scopes --hard=pods=1000000 --scopes=BestEffort,NotTerminating --namespace=kubectl-5357'
Jan 11 20:04:06.389: INFO: stderr: ""
Jan 11 20:04:06.389: INFO: stdout: "resourcequota/scopes created\n"
STEP: verifying that the quota was created
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:06.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5357" for this suite.
Jan 11 20:04:12.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:16.131: INFO: namespace kubectl-5357 deletion completed in 9.562202963s


• [SLOW TEST:10.854 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl create quota
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1945
    should create a quota with scopes
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1975
------------------------------
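
The quota exercised above is created entirely from the command line: kubectl create quota with --hard and --scopes produces a ResourceQuota whose scopes restrict it to BestEffort, non-terminating pods. A minimal sketch, assuming a scratch namespace named quota-demo (hypothetical) and a kubeconfig already pointing at a test cluster:

    # Scratch namespace for the experiment (hypothetical name).
    kubectl create namespace quota-demo

    # Same shape as the suite's invocation: an effectively unbounded pod count,
    # scoped to BestEffort + NotTerminating pods only.
    kubectl create quota scopes \
      --hard=pods=1000000 \
      --scopes=BestEffort,NotTerminating \
      --namespace=quota-demo

    # Verify what the server recorded (spec.hard, spec.scopes, status.used).
    kubectl get resourcequota scopes --namespace=quota-demo -o yaml

------------------------------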
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:09.733: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9991
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:90
STEP: Creating a pod to test downward API volume plugin
Jan 11 20:04:10.548: INFO: Waiting up to 5m0s for pod "metadata-volume-1943b488-5c38-4b49-b54d-3ed3a51376c4" in namespace "downward-api-9991" to be "success or failure"
Jan 11 20:04:10.638: INFO: Pod "metadata-volume-1943b488-5c38-4b49-b54d-3ed3a51376c4": Phase="Pending", Reason="", readiness=false. Elapsed: 89.701598ms
Jan 11 20:04:12.728: INFO: Pod "metadata-volume-1943b488-5c38-4b49-b54d-3ed3a51376c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.17987158s
STEP: Saw pod success
Jan 11 20:04:12.728: INFO: Pod "metadata-volume-1943b488-5c38-4b49-b54d-3ed3a51376c4" satisfied condition "success or failure"
Jan 11 20:04:12.817: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod metadata-volume-1943b488-5c38-4b49-b54d-3ed3a51376c4 container client-container: 
STEP: delete the pod
Jan 11 20:04:13.014: INFO: Waiting for pod metadata-volume-1943b488-5c38-4b49-b54d-3ed3a51376c4 to disappear
Jan 11 20:04:13.103: INFO: Pod metadata-volume-1943b488-5c38-4b49-b54d-3ed3a51376c4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:13.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9991" for this suite.
Jan 11 20:04:21.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:24.783: INFO: namespace downward-api-9991 deletion completed in 11.588993823s


• [SLOW TEST:15.051 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:90
------------------------------
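
The downward API case above mounts the pod's own metadata as files while the pod runs as a non-root user with an fsGroup, so the projected file must still be readable by that user. A sketch of an equivalent pod, with hypothetical names and busybox standing in for the suite's test image:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-podname-demo      # hypothetical name
    spec:
      securityContext:
        runAsUser: 1000                # non-root
        fsGroup: 2000
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
    EOF
    # Once the pod has run, its log should contain its own name.
    kubectl logs downward-podname-demo

------------------------------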
SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:03:48.430: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2057
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:267
Jan 11 20:03:51.437: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-2057 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jan 11 20:03:52.709: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jan 11 20:03:52.709: INFO: stdout: "iptables"
Jan 11 20:03:52.709: INFO: ProxyMode: iptables
Jan 11 20:03:52.800: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 11 20:03:52.890: INFO: Pod kube-proxy-mode-detector still exists
Jan 11 20:03:54.890: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 11 20:03:54.980: INFO: Pod kube-proxy-mode-detector still exists
Jan 11 20:03:56.890: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 11 20:03:56.980: INFO: Pod kube-proxy-mode-detector still exists
Jan 11 20:03:58.890: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 11 20:03:58.980: INFO: Pod kube-proxy-mode-detector still exists
Jan 11 20:04:00.890: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 11 20:04:00.980: INFO: Pod kube-proxy-mode-detector still exists
Jan 11 20:04:02.890: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 11 20:04:02.980: INFO: Pod kube-proxy-mode-detector still exists
Jan 11 20:04:04.890: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 11 20:04:04.980: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-2057
Jan 11 20:04:05.074: INFO: sourceip-test cluster ip: 100.106.18.136
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
STEP: waiting up to 3m0s for service sourceip-test in namespace services-2057 to expose endpoints map[echo-sourceip:[8080]]
Jan 11 20:04:07.615: INFO: successfully validated that service sourceip-test in namespace services-2057 exposes endpoints map[echo-sourceip:[8080]] (179.718014ms elapsed)
STEP: Creating pause pod deployment
Jan 11 20:04:07.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369847, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369847, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369847, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714369847, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-fdcc94888\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 20:04:10.066: INFO: Waiting up to 2m0s to get response from 100.106.18.136:8080
Jan 11 20:04:10.066: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-2057 pause-pod-fdcc94888-4wdl7 -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.106.18.136:8080/clientip'
Jan 11 20:04:11.410: INFO: stderr: "+ curl -q -s --connect-timeout 30 100.106.18.136:8080/clientip\n"
Jan 11 20:04:11.410: INFO: stdout: "100.64.1.84:39568"
STEP: Verifying the preserved source ip
Jan 11 20:04:11.410: INFO: Waiting up to 2m0s to get response from 100.106.18.136:8080
Jan 11 20:04:11.411: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-2057 pause-pod-fdcc94888-p78wj -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.106.18.136:8080/clientip'
Jan 11 20:04:12.695: INFO: stderr: "+ curl -q -s --connect-timeout 30 100.106.18.136:8080/clientip\n"
Jan 11 20:04:12.695: INFO: stdout: "100.64.0.119:56106"
STEP: Verifying the preserved source ip
Jan 11 20:04:12.695: INFO: Deleting deployment
Jan 11 20:04:12.790: INFO: Cleaning up the echo server pod
Jan 11 20:04:12.880: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:12.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2057" for this suite.
Jan 11 20:04:25.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:28.656: INFO: namespace services-2057 deletion completed in 15.589888155s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95


• [SLOW TEST:40.227 seconds]
[sig-network] Services
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:267
------------------------------
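
The sequence above first asks kube-proxy which mode it runs in (the suite only proceeds for iptables/ipvs), then curls the echo service's /clientip endpoint from pause pods on each node and checks that the address echoed back is the client pod's IP rather than a SNAT'd node IP. The same two probes, sketched with placeholders (<namespace>, <detector-pod>, <client-pod> and <cluster-ip> are hypothetical and must be substituted):

    # kube-proxy serves /proxyMode on its metrics port, 10249, on each node.
    kubectl exec -n <namespace> <detector-pod> -- \
      curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode

    # From a client pod, call the service through its cluster IP; the response
    # body contains the observed client address, e.g. "100.64.1.84:39568".
    kubectl exec -n <namespace> <client-pod> -- \
      curl -q -s --connect-timeout 30 <cluster-ip>:8080/clientip

------------------------------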
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:16.132: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4734
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-84f359b1-de7a-4ecc-9645-0ce39f4910c2
STEP: Creating a pod to test consume configMaps
Jan 11 20:04:16.951: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a226cea3-15b0-4178-8423-ef2e5a951556" in namespace "projected-4734" to be "success or failure"
Jan 11 20:04:17.040: INFO: Pod "pod-projected-configmaps-a226cea3-15b0-4178-8423-ef2e5a951556": Phase="Pending", Reason="", readiness=false. Elapsed: 89.19271ms
Jan 11 20:04:19.130: INFO: Pod "pod-projected-configmaps-a226cea3-15b0-4178-8423-ef2e5a951556": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178870713s
STEP: Saw pod success
Jan 11 20:04:19.130: INFO: Pod "pod-projected-configmaps-a226cea3-15b0-4178-8423-ef2e5a951556" satisfied condition "success or failure"
Jan 11 20:04:19.219: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-configmaps-a226cea3-15b0-4178-8423-ef2e5a951556 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 11 20:04:19.408: INFO: Waiting for pod pod-projected-configmaps-a226cea3-15b0-4178-8423-ef2e5a951556 to disappear
Jan 11 20:04:19.497: INFO: Pod pod-projected-configmaps-a226cea3-15b0-4178-8423-ef2e5a951556 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:19.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4734" for this suite.
Jan 11 20:04:25.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:29.153: INFO: namespace projected-4734 deletion completed in 9.564804916s


• [SLOW TEST:13.021 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
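
The projected-volume consumption above boils down to a ConfigMap plus a pod that mounts it through a projected volume and reads a key back as a file. A sketch with hypothetical names (demo-config, projected-cm-demo), not the suite's generated manifest:

    kubectl create configmap demo-config --from-literal=data-1=value-1

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo          # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: demo-config
    EOF
    # After the pod completes, the log should read "value-1".
    kubectl logs projected-cm-demo

------------------------------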
SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:29.165: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-619
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 11 20:04:29.980: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:30.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-619" for this suite.
Jan 11 20:04:36.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:39.898: INFO: namespace replication-controller-619 deletion completed in 9.559305719s


• [SLOW TEST:10.733 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
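
"Released" above means the ReplicationController drops ownership of a pod whose labels no longer match its selector: the controller removes its ownerReference from that pod and creates a replacement to keep the replica count. A sketch of triggering this by hand, assuming an existing RC named pod-release selecting name=pod-release (hypothetical):

    # Pick one of the RC's pods and relabel it out of the selector.
    POD=$(kubectl get pods -l name=pod-release -o name | head -n1)
    kubectl label "$POD" name=released --overwrite

    # The pod keeps running but is no longer controller-owned ...
    kubectl get "$POD" -o jsonpath='{.metadata.ownerReferences}{"\n"}'
    # ... and the RC has started a new pod to restore its replica count.
    kubectl get pods -l name=pod-release

------------------------------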
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:14.748: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename nettest
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-4827
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:150
STEP: Performing setup for networking test in namespace nettest-4827
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 11 20:04:15.497: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
STEP: Getting node addresses
Jan 11 20:04:34.935: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 11 20:04:35.115: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:35.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4827" for this suite.
Jan 11 20:04:47.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:50.811: INFO: namespace nettest-4827 deletion completed in 15.604338724s


S [SKIPPING] [36.063 seconds]
[sig-network] Networking
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  Granular Checks: Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103
    should function for endpoint-Service: udp [It]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:150

    Requires at least 2 nodes (not -1)

    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:24.792: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3009
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:42.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3009" for this suite.
Jan 11 20:04:48.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:52.162: INFO: namespace resourcequota-3009 deletion completed in 9.5927488s


• [SLOW TEST:27.369 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
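
The "life of a configMap" steps above (quota created, usage captured on ConfigMap creation, usage released on deletion) can be reproduced by watching status.used on an object-count quota. A sketch with hypothetical names (namespace quota-cm-demo, quota cm-quota, ConfigMap test-cm):

    kubectl create namespace quota-cm-demo
    kubectl create quota cm-quota --hard=configmaps=2 -n quota-cm-demo

    # Creating a ConfigMap bumps the quota's used count ...
    kubectl create configmap test-cm --from-literal=k=v -n quota-cm-demo
    kubectl get resourcequota cm-quota -n quota-cm-demo \
      -o jsonpath='{.status.used.configmaps}{"\n"}'

    # ... and deleting it releases the usage once the quota controller re-syncs.
    kubectl delete configmap test-cm -n quota-cm-demo
    kubectl get resourcequota cm-quota -n quota-cm-demo \
      -o jsonpath='{.status.used.configmaps}{"\n"}'

------------------------------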
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:28.675: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename volume
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-5429
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146
Jan 11 20:04:29.315: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir
Jan 11 20:04:29.315: INFO: Creating resource for inline volume
STEP: starting emptydir-injector
STEP: Writing text file contents in the container.
Jan 11 20:04:31.587: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec emptydir-injector --namespace=volume-5429 -- /bin/sh -c echo 'Hello from emptydir from namespace volume-5429' > /opt/0/index.html'
Jan 11 20:04:33.004: INFO: stderr: ""
Jan 11 20:04:33.004: INFO: stdout: ""
STEP: Checking that text file contents are perfect.
Jan 11 20:04:33.005: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec emptydir-injector --namespace=volume-5429 -- cat /opt/0/index.html'
Jan 11 20:04:34.286: INFO: stderr: ""
Jan 11 20:04:34.286: INFO: stdout: "Hello from emptydir from namespace volume-5429\n"
Jan 11 20:04:34.286: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-5429 emptydir-injector -- /bin/sh -c test -d /opt/0'
Jan 11 20:04:35.792: INFO: stderr: ""
Jan 11 20:04:35.792: INFO: stdout: ""
Jan 11 20:04:35.792: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-5429 emptydir-injector -- /bin/sh -c test -b /opt/0'
Jan 11 20:04:37.080: INFO: rc: 1
STEP: Deleting pod emptydir-injector in namespace volume-5429
Jan 11 20:04:37.172: INFO: Waiting for pod emptydir-injector to disappear
Jan 11 20:04:37.262: INFO: Pod emptydir-injector still exists
Jan 11 20:04:39.262: INFO: Waiting for pod emptydir-injector to disappear
Jan 11 20:04:39.352: INFO: Pod emptydir-injector still exists
Jan 11 20:04:41.262: INFO: Waiting for pod emptydir-injector to disappear
Jan 11 20:04:41.352: INFO: Pod emptydir-injector still exists
Jan 11 20:04:43.262: INFO: Waiting for pod emptydir-injector to disappear
Jan 11 20:04:43.352: INFO: Pod emptydir-injector still exists
Jan 11 20:04:45.262: INFO: Waiting for pod emptydir-injector to disappear
Jan 11 20:04:45.352: INFO: Pod emptydir-injector no longer exists
STEP: Skipping persistence check for non-persistent volume
STEP: cleaning the environment after emptydir
Jan 11 20:04:45.352: INFO: Deleting pod "emptydir-client" in namespace "volume-5429"
Jan 11 20:04:45.442: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:45.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5429" for this suite.
Jan 11 20:04:51.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:04:55.127: INFO: namespace volume-5429 deletion completed in 9.594452877s


• [SLOW TEST:26.453 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should store data
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146
------------------------------
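
The emptydir "should store data" pattern above is: start a pod with an inline emptyDir volume, write a file through one exec, read it back through another, and confirm the mount is a plain directory rather than a block device. A sketch with hypothetical names, mirroring the suite's execs:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo              # hypothetical name
    spec:
      containers:
      - name: injector
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: scratch
          mountPath: /opt/0
      volumes:
      - name: scratch
        emptyDir: {}
    EOF
    kubectl wait --for=condition=Ready pod/emptydir-demo --timeout=60s

    # Write, read back, and check the mount type.
    kubectl exec emptydir-demo -- /bin/sh -c "echo 'Hello from emptydir' > /opt/0/index.html"
    kubectl exec emptydir-demo -- cat /opt/0/index.html
    kubectl exec emptydir-demo -- /bin/sh -c 'test -d /opt/0 && echo directory'

------------------------------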
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:50.823: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename tables
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-1651
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return generic metadata details across all namespaces for nodes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:128
Jan 11 20:04:51.639: INFO: Table: &v1.Table{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"/api/v1/nodes", ResourceVersion:"67564", Continue:"", RemainingItemCount:(*int64)(nil)}, ColumnDefinitions:[]v1.TableColumnDefinition{v1.TableColumnDefinition{Name:"Name", Type:"string", Format:"name", Description:"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names", Priority:0}, v1.TableColumnDefinition{Name:"Status", Type:"string", Format:"", Description:"The status of the node", Priority:0}, v1.TableColumnDefinition{Name:"Roles", Type:"string", Format:"", Description:"The roles of the node", Priority:0}, v1.TableColumnDefinition{Name:"Age", Type:"string", Format:"", Description:"CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", Priority:0}, v1.TableColumnDefinition{Name:"Version", Type:"string", Format:"", Description:"Kubelet Version reported by the node.", Priority:0}, v1.TableColumnDefinition{Name:"Internal-IP", Type:"string", Format:"", Description:"List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See http://pr.k8s.io/79391 for an example.", Priority:1}, v1.TableColumnDefinition{Name:"External-IP", Type:"string", Format:"", Description:"List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See http://pr.k8s.io/79391 for an example.", Priority:1}, v1.TableColumnDefinition{Name:"OS-Image", Type:"string", Format:"", Description:"OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)).", Priority:1}, v1.TableColumnDefinition{Name:"Kernel-Version", Type:"string", Format:"", Description:"Kernel Version reported by the node from 'uname -r' (e.g. 3.16.0-0.bpo.4-amd64).", Priority:1}, v1.TableColumnDefinition{Name:"Container-Runtime", Type:"string", Format:"", Description:"ContainerRuntime Version reported by the node through runtime remote API (e.g. 
docker://1.5.0).", Priority:1}}, Rows:[]v1.TableRow{v1.TableRow{Cells:[]interface {}{"ip-10-250-27-25.ec2.internal", "Ready", "", "4h8m", "v1.16.4", "10.250.27.25", "", "Container Linux by CoreOS 2303.3.0 (Rhyolite)", "4.19.86-coreos", "docker://18.6.3"}, Conditions:[]v1.TableRowCondition(nil), Object:runtime.RawExtension{Raw:[]uint8{0x7b, 0x22, 0x6b, 0x69, 0x6e, 0x64, 0x22, 0x3a, 0x22, 0x50, 0x61, 0x72, 0x74, 0x69, 0x61, 0x6c, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x22, 0x2c, 0x22, 0x61, 0x70, 0x69, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x22, 0x3a, 0x22, 0x6d, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x38, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x76, 0x31, 0x62, 0x65, 0x74, 0x61, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x6e, 0x61, 0x6d, 0x65, 0x22, 0x3a, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x32, 0x37, 0x2d, 0x32, 0x35, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x22, 0x2c, 0x22, 0x73, 0x65, 0x6c, 0x66, 0x4c, 0x69, 0x6e, 0x6b, 0x22, 0x3a, 0x22, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6e, 0x6f, 0x64, 0x65, 0x73, 0x2f, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x32, 0x37, 0x2d, 0x32, 0x35, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x22, 0x2c, 0x22, 0x75, 0x69, 0x64, 0x22, 0x3a, 0x22, 0x61, 0x66, 0x37, 0x66, 0x36, 0x34, 0x66, 0x33, 0x2d, 0x61, 0x35, 0x64, 0x65, 0x2d, 0x34, 0x64, 0x66, 0x33, 0x2d, 0x39, 0x65, 0x30, 0x37, 0x2d, 0x66, 0x36, 0x39, 0x65, 0x38, 0x33, 0x35, 0x61, 0x62, 0x35, 0x38, 0x30, 0x22, 0x2c, 0x22, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x22, 0x3a, 0x22, 0x36, 0x37, 0x34, 0x35, 0x33, 0x22, 0x2c, 0x22, 0x63, 0x72, 0x65, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x22, 0x3a, 0x22, 0x32, 0x30, 0x32, 0x30, 0x2d, 0x30, 0x31, 0x2d, 0x31, 0x31, 0x54, 0x31, 0x35, 0x3a, 0x35, 0x36, 0x3a, 0x30, 0x33, 0x5a, 0x22, 0x2c, 0x22, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x72, 0x63, 0x68, 0x22, 0x3a, 0x22, 0x61, 0x6d, 0x64, 0x36, 0x34, 0x22, 0x2c, 0x22, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x69, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x22, 0x3a, 0x22, 0x6d, 0x35, 0x2e, 0x6c, 0x61, 0x72, 0x67, 0x65, 0x22, 0x2c, 0x22, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x6f, 0x73, 0x22, 0x3a, 0x22, 0x6c, 0x69, 0x6e, 0x75, 0x78, 0x22, 0x2c, 0x22, 0x66, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x2d, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x2e, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x72, 0x65, 0x67, 0x69, 0x6f, 0x6e, 0x22, 0x3a, 0x22, 0x75, 0x73, 0x2d, 0x65, 0x61, 0x73, 0x74, 0x2d, 0x31, 0x22, 0x2c, 0x22, 0x66, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x2d, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x2e, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x7a, 0x6f, 0x6e, 0x65, 0x22, 0x3a, 0x22, 0x75, 0x73, 0x2d, 0x65, 0x61, 0x73, 0x74, 0x2d, 0x31, 0x63, 0x22, 0x2c, 0x22, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x72, 0x63, 
0x68, 0x22, 0x3a, 0x22, 0x61, 0x6d, 0x64, 0x36, 0x34, 0x22, 0x2c, 0x22, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x68, 0x6f, 0x73, 0x74, 0x6e, 0x61, 0x6d, 0x65, 0x22, 0x3a, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x32, 0x37, 0x2d, 0x32, 0x35, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x22, 0x2c, 0x22, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x6f, 0x73, 0x22, 0x3a, 0x22, 0x6c, 0x69, 0x6e, 0x75, 0x78, 0x22, 0x2c, 0x22, 0x6e, 0x6f, 0x64, 0x65, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x72, 0x6f, 0x6c, 0x65, 0x22, 0x3a, 0x22, 0x6e, 0x6f, 0x64, 0x65, 0x22, 0x2c, 0x22, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x2e, 0x67, 0x61, 0x72, 0x64, 0x65, 0x6e, 0x2e, 0x73, 0x61, 0x70, 0x63, 0x6c, 0x6f, 0x75, 0x64, 0x2e, 0x69, 0x6f, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x22, 0x3a, 0x22, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x2d, 0x31, 0x22, 0x2c, 0x22, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x2e, 0x67, 0x61, 0x72, 0x64, 0x65, 0x6e, 0x65, 0x72, 0x2e, 0x63, 0x6c, 0x6f, 0x75, 0x64, 0x2f, 0x70, 0x6f, 0x6f, 0x6c, 0x22, 0x3a, 0x22, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x2d, 0x31, 0x22, 0x7d, 0x2c, 0x22, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x63, 0x73, 0x69, 0x2e, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x6e, 0x6f, 0x64, 0x65, 0x69, 0x64, 0x22, 0x3a, 0x22, 0x7b, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x65, 0x70, 0x68, 0x65, 0x6d, 0x65, 0x72, 0x61, 0x6c, 0x2d, 0x31, 0x36, 0x34, 0x31, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x32, 0x37, 0x2d, 0x32, 0x35, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x2d, 0x36, 0x32, 0x34, 0x30, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x32, 0x37, 0x2d, 0x32, 0x35, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x2d, 0x65, 0x78, 0x70, 0x61, 0x6e, 0x64, 0x2d, 0x37, 0x39, 0x39, 0x31, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x32, 0x37, 0x2d, 0x32, 0x35, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x31, 0x30, 0x36, 0x32, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x31, 0x30, 0x36, 0x32, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x32, 0x32, 0x33, 0x39, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 
0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x32, 0x32, 0x33, 0x39, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x33, 0x36, 0x32, 0x30, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x33, 0x36, 0x32, 0x30, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x34, 0x32, 0x34, 0x39, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x34, 0x32, 0x34, 0x39, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x36, 0x33, 0x38, 0x31, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x36, 0x33, 0x38, 0x31, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x37, 0x34, 0x34, 0x36, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x37, 0x34, 0x34, 0x36, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x37, 0x39, 0x35, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x37, 0x39, 0x35, 0x5c, 0x22, 0x7d, 0x22, 0x2c, 0x22, 0x6e, 0x6f, 0x64, 0x65, 0x2e, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x74, 0x74, 0x6c, 0x22, 0x3a, 0x22, 0x30, 0x22, 0x2c, 0x22, 0x70, 0x72, 0x6f, 0x6a, 0x65, 0x63, 0x74, 0x63, 0x61, 0x6c, 0x69, 0x63, 0x6f, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x49, 0x50, 0x76, 0x34, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x22, 0x3a, 0x22, 0x31, 0x30, 0x2e, 0x32, 0x35, 0x30, 0x2e, 0x32, 0x37, 0x2e, 0x32, 0x35, 0x2f, 0x31, 0x39, 0x22, 0x2c, 0x22, 0x70, 0x72, 0x6f, 0x6a, 0x65, 0x63, 0x74, 0x63, 0x61, 0x6c, 0x69, 0x63, 0x6f, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x49, 0x50, 0x76, 0x34, 0x49, 0x50, 0x49, 0x50, 0x54, 0x75, 0x6e, 0x6e, 0x65, 0x6c, 0x41, 0x64, 0x64, 0x72, 0x22, 0x3a, 0x22, 0x31, 0x30, 0x30, 0x2e, 0x36, 0x34, 0x2e, 0x31, 0x2e, 0x31, 0x22, 0x2c, 0x22, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2d, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x64, 0x2d, 0x61, 0x74, 0x74, 0x61, 0x63, 0x68, 0x2d, 0x64, 0x65, 0x74, 0x61, 0x63, 0x68, 0x22, 0x3a, 0x22, 0x74, 0x72, 0x75, 0x65, 0x22, 0x7d, 0x7d, 0x7d}, Object:runtime.Object(nil)}}, v1.TableRow{Cells:[]interface {}{"ip-10-250-7-77.ec2.internal", "Ready", "", "4h8m", "v1.16.4", "10.250.7.77", "", 
"Container Linux by CoreOS 2303.3.0 (Rhyolite)", "4.19.86-coreos", "docker://18.6.3"}, Conditions:[]v1.TableRowCondition(nil), Object:runtime.RawExtension{Raw:[]uint8{0x7b, 0x22, 0x6b, 0x69, 0x6e, 0x64, 0x22, 0x3a, 0x22, 0x50, 0x61, 0x72, 0x74, 0x69, 0x61, 0x6c, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x22, 0x2c, 0x22, 0x61, 0x70, 0x69, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x22, 0x3a, 0x22, 0x6d, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x38, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x76, 0x31, 0x62, 0x65, 0x74, 0x61, 0x31, 0x22, 0x2c, 0x22, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x22, 0x3a, 0x7b, 0x22, 0x6e, 0x61, 0x6d, 0x65, 0x22, 0x3a, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x22, 0x2c, 0x22, 0x73, 0x65, 0x6c, 0x66, 0x4c, 0x69, 0x6e, 0x6b, 0x22, 0x3a, 0x22, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6e, 0x6f, 0x64, 0x65, 0x73, 0x2f, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x22, 0x2c, 0x22, 0x75, 0x69, 0x64, 0x22, 0x3a, 0x22, 0x33, 0x37, 0x37, 0x33, 0x63, 0x30, 0x32, 0x63, 0x2d, 0x31, 0x66, 0x62, 0x62, 0x2d, 0x34, 0x63, 0x62, 0x65, 0x2d, 0x61, 0x35, 0x32, 0x37, 0x2d, 0x38, 0x39, 0x33, 0x33, 0x64, 0x65, 0x30, 0x61, 0x38, 0x39, 0x37, 0x38, 0x22, 0x2c, 0x22, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x22, 0x3a, 0x22, 0x36, 0x37, 0x34, 0x37, 0x37, 0x22, 0x2c, 0x22, 0x63, 0x72, 0x65, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x22, 0x3a, 0x22, 0x32, 0x30, 0x32, 0x30, 0x2d, 0x30, 0x31, 0x2d, 0x31, 0x31, 0x54, 0x31, 0x35, 0x3a, 0x35, 0x35, 0x3a, 0x35, 0x38, 0x5a, 0x22, 0x2c, 0x22, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x72, 0x63, 0x68, 0x22, 0x3a, 0x22, 0x61, 0x6d, 0x64, 0x36, 0x34, 0x22, 0x2c, 0x22, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x69, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x22, 0x3a, 0x22, 0x6d, 0x35, 0x2e, 0x6c, 0x61, 0x72, 0x67, 0x65, 0x22, 0x2c, 0x22, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x6f, 0x73, 0x22, 0x3a, 0x22, 0x6c, 0x69, 0x6e, 0x75, 0x78, 0x22, 0x2c, 0x22, 0x66, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x2d, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x2e, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x72, 0x65, 0x67, 0x69, 0x6f, 0x6e, 0x22, 0x3a, 0x22, 0x75, 0x73, 0x2d, 0x65, 0x61, 0x73, 0x74, 0x2d, 0x31, 0x22, 0x2c, 0x22, 0x66, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x2d, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x2e, 0x62, 0x65, 0x74, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x7a, 0x6f, 0x6e, 0x65, 0x22, 0x3a, 0x22, 0x75, 0x73, 0x2d, 0x65, 0x61, 0x73, 0x74, 0x2d, 0x31, 0x63, 0x22, 0x2c, 0x22, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x72, 0x63, 0x68, 0x22, 0x3a, 0x22, 0x61, 0x6d, 0x64, 0x36, 0x34, 0x22, 0x2c, 0x22, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x68, 0x6f, 0x73, 0x74, 
0x6e, 0x61, 0x6d, 0x65, 0x22, 0x3a, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x22, 0x2c, 0x22, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x6f, 0x73, 0x22, 0x3a, 0x22, 0x6c, 0x69, 0x6e, 0x75, 0x78, 0x22, 0x2c, 0x22, 0x6e, 0x6f, 0x64, 0x65, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x72, 0x6f, 0x6c, 0x65, 0x22, 0x3a, 0x22, 0x6e, 0x6f, 0x64, 0x65, 0x22, 0x2c, 0x22, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x2e, 0x67, 0x61, 0x72, 0x64, 0x65, 0x6e, 0x2e, 0x73, 0x61, 0x70, 0x63, 0x6c, 0x6f, 0x75, 0x64, 0x2e, 0x69, 0x6f, 0x2f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x22, 0x3a, 0x22, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x2d, 0x31, 0x22, 0x2c, 0x22, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x2e, 0x67, 0x61, 0x72, 0x64, 0x65, 0x6e, 0x65, 0x72, 0x2e, 0x63, 0x6c, 0x6f, 0x75, 0x64, 0x2f, 0x70, 0x6f, 0x6f, 0x6c, 0x22, 0x3a, 0x22, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x2d, 0x31, 0x22, 0x7d, 0x2c, 0x22, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0x3a, 0x7b, 0x22, 0x63, 0x73, 0x69, 0x2e, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x6e, 0x6f, 0x64, 0x65, 0x69, 0x64, 0x22, 0x3a, 0x22, 0x7b, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x65, 0x70, 0x68, 0x65, 0x6d, 0x65, 0x72, 0x61, 0x6c, 0x2d, 0x39, 0x37, 0x30, 0x38, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x2d, 0x32, 0x32, 0x36, 0x33, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x2d, 0x33, 0x33, 0x33, 0x32, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x2d, 0x34, 0x36, 0x32, 0x35, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x2d, 0x35, 0x38, 0x37, 0x37, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 
0x2d, 0x36, 0x33, 0x38, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x2d, 0x38, 0x38, 0x38, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x73, 0x69, 0x6f, 0x6e, 0x69, 0x6e, 0x67, 0x2d, 0x39, 0x36, 0x36, 0x37, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x2d, 0x32, 0x34, 0x34, 0x31, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x2d, 0x65, 0x78, 0x70, 0x61, 0x6e, 0x64, 0x2d, 0x31, 0x39, 0x32, 0x39, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x2d, 0x65, 0x78, 0x70, 0x61, 0x6e, 0x64, 0x2d, 0x38, 0x39, 0x38, 0x33, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x69, 0x6f, 0x2d, 0x33, 0x31, 0x36, 0x34, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x68, 0x6f, 0x73, 0x74, 0x70, 0x61, 0x74, 0x68, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x6d, 0x6f, 0x64, 0x65, 0x2d, 0x32, 0x37, 0x39, 0x32, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x69, 0x70, 0x2d, 0x31, 0x30, 0x2d, 0x32, 0x35, 0x30, 0x2d, 0x37, 0x2d, 0x37, 0x37, 0x2e, 0x65, 0x63, 0x32, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x34, 0x30, 0x30, 0x34, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x34, 0x30, 0x30, 0x34, 0x5c, 0x22, 0x2c, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 
0x65, 0x73, 0x2d, 0x38, 0x36, 0x36, 0x33, 0x5c, 0x22, 0x3a, 0x5c, 0x22, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x63, 0x73, 0x69, 0x2d, 0x6d, 0x6f, 0x63, 0x6b, 0x2d, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2d, 0x38, 0x36, 0x36, 0x33, 0x5c, 0x22, 0x7d, 0x22, 0x2c, 0x22, 0x6e, 0x6f, 0x64, 0x65, 0x2e, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x74, 0x74, 0x6c, 0x22, 0x3a, 0x22, 0x30, 0x22, 0x2c, 0x22, 0x70, 0x72, 0x6f, 0x6a, 0x65, 0x63, 0x74, 0x63, 0x61, 0x6c, 0x69, 0x63, 0x6f, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x49, 0x50, 0x76, 0x34, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x22, 0x3a, 0x22, 0x31, 0x30, 0x2e, 0x32, 0x35, 0x30, 0x2e, 0x37, 0x2e, 0x37, 0x37, 0x2f, 0x31, 0x39, 0x22, 0x2c, 0x22, 0x70, 0x72, 0x6f, 0x6a, 0x65, 0x63, 0x74, 0x63, 0x61, 0x6c, 0x69, 0x63, 0x6f, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x49, 0x50, 0x76, 0x34, 0x49, 0x50, 0x49, 0x50, 0x54, 0x75, 0x6e, 0x6e, 0x65, 0x6c, 0x41, 0x64, 0x64, 0x72, 0x22, 0x3a, 0x22, 0x31, 0x30, 0x30, 0x2e, 0x36, 0x34, 0x2e, 0x30, 0x2e, 0x31, 0x22, 0x2c, 0x22, 0x76, 0x6f, 0x6c, 0x75, 0x6d, 0x65, 0x73, 0x2e, 0x6b, 0x75, 0x62, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x65, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x2d, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x64, 0x2d, 0x61, 0x74, 0x74, 0x61, 0x63, 0x68, 0x2d, 0x64, 0x65, 0x74, 0x61, 0x63, 0x68, 0x22, 0x3a, 0x22, 0x74, 0x72, 0x75, 0x65, 0x22, 0x7d, 0x7d, 0x7d}, Object:runtime.Object(nil)}}}}
Jan 11 20:04:51.642: INFO: Table:
NAME                           STATUS   ROLES    AGE    VERSION
ip-10-250-27-25.ec2.internal   Ready       4h8m   v1.16.4
ip-10-250-7-77.ec2.internal    Ready       4h8m   v1.16.4

[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:51.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1651" for this suite.
Jan 11 20:04:58.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:01.304: INFO: namespace tables-1651 deletion completed in 9.572077423s


• [SLOW TEST:10.481 seconds]
[sig-api-machinery] Servers with support for Table transformation
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return generic metadata details across all namespaces for nodes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:128
------------------------------
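
The v1.Table dump above is the server-side table representation of the node list; the framework obtains it through content negotiation, and any client can request the same form by sending the corresponding Accept header (meta.k8s.io/v1beta1 on this 1.16 cluster). A sketch via kubectl proxy plus curl, with a hypothetical local port:

    kubectl proxy --port=8001 &
    PROXY_PID=$!
    sleep 1
    # Ask for the Table form of the node list instead of the full objects.
    curl -s -H 'Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io' \
      http://127.0.0.1:8001/api/v1/nodes | head -c 800; echo
    kill "$PROXY_PID"

------------------------------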
S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:52.163: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-5403
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 11 20:04:53.435: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5403 /api/v1/namespaces/watch-5403/configmaps/e2e-watch-test-resource-version a4072ce6-03c6-4236-9e81-321fa73c311f 67577 0 2020-01-11 20:04:52 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 11 20:04:53.435: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5403 /api/v1/namespaces/watch-5403/configmaps/e2e-watch-test-resource-version a4072ce6-03c6-4236-9e81-321fa73c311f 67578 0 2020-01-11 20:04:52 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:53.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5403" for this suite.
Jan 11 20:04:59.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:03.187: INFO: namespace watch-5403 deletion completed in 9.661439409s


• [SLOW TEST:11.024 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:85
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:02:53.668: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename volume-expand
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-expand-1929
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resize volume when PVC is edited while pod is using it
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:218
STEP: deploying csi-hostpath driver
Jan 11 20:02:54.494: INFO: creating *v1.ServiceAccount: volume-expand-1929/csi-attacher
Jan 11 20:02:54.584: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-expand-1929
Jan 11 20:02:54.584: INFO: Define cluster role external-attacher-runner-volume-expand-1929
Jan 11 20:02:54.674: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-1929
Jan 11 20:02:54.764: INFO: creating *v1.Role: volume-expand-1929/external-attacher-cfg-volume-expand-1929
Jan 11 20:02:54.854: INFO: creating *v1.RoleBinding: volume-expand-1929/csi-attacher-role-cfg
Jan 11 20:02:54.943: INFO: creating *v1.ServiceAccount: volume-expand-1929/csi-provisioner
Jan 11 20:02:55.034: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-expand-1929
Jan 11 20:02:55.034: INFO: Define cluster role external-provisioner-runner-volume-expand-1929
Jan 11 20:02:55.124: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-1929
Jan 11 20:02:55.214: INFO: creating *v1.Role: volume-expand-1929/external-provisioner-cfg-volume-expand-1929
Jan 11 20:02:55.304: INFO: creating *v1.RoleBinding: volume-expand-1929/csi-provisioner-role-cfg
Jan 11 20:02:55.393: INFO: creating *v1.ServiceAccount: volume-expand-1929/csi-snapshotter
Jan 11 20:02:55.484: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volume-expand-1929
Jan 11 20:02:55.485: INFO: Define cluster role external-snapshotter-runner-volume-expand-1929
Jan 11 20:02:55.574: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-1929
Jan 11 20:02:55.664: INFO: creating *v1.Role: volume-expand-1929/external-snapshotter-leaderelection-volume-expand-1929
Jan 11 20:02:55.754: INFO: creating *v1.RoleBinding: volume-expand-1929/external-snapshotter-leaderelection
Jan 11 20:02:55.844: INFO: creating *v1.ServiceAccount: volume-expand-1929/csi-resizer
Jan 11 20:02:55.934: INFO: creating *v1.ClusterRole: external-resizer-runner-volume-expand-1929
Jan 11 20:02:55.934: INFO: Define cluster role external-resizer-runner-volume-expand-1929
Jan 11 20:02:56.024: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-1929
Jan 11 20:02:56.114: INFO: creating *v1.Role: volume-expand-1929/external-resizer-cfg-volume-expand-1929
Jan 11 20:02:56.204: INFO: creating *v1.RoleBinding: volume-expand-1929/csi-resizer-role-cfg
Jan 11 20:02:56.293: INFO: creating *v1.Service: volume-expand-1929/csi-hostpath-attacher
Jan 11 20:02:56.387: INFO: creating *v1.StatefulSet: volume-expand-1929/csi-hostpath-attacher
Jan 11 20:02:56.477: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volume-expand-1929
Jan 11 20:02:56.567: INFO: creating *v1.Service: volume-expand-1929/csi-hostpathplugin
Jan 11 20:02:56.661: INFO: creating *v1.StatefulSet: volume-expand-1929/csi-hostpathplugin
Jan 11 20:02:56.752: INFO: creating *v1.Service: volume-expand-1929/csi-hostpath-provisioner
Jan 11 20:02:56.845: INFO: creating *v1.StatefulSet: volume-expand-1929/csi-hostpath-provisioner
Jan 11 20:02:56.935: INFO: creating *v1.Service: volume-expand-1929/csi-hostpath-resizer
Jan 11 20:02:57.028: INFO: creating *v1.StatefulSet: volume-expand-1929/csi-hostpath-resizer
Jan 11 20:02:57.119: INFO: creating *v1.Service: volume-expand-1929/csi-snapshotter
Jan 11 20:02:57.213: INFO: creating *v1.StatefulSet: volume-expand-1929/csi-snapshotter
Jan 11 20:02:57.303: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-1929
Jan 11 20:02:57.393: INFO: Test running for native CSI Driver, not checking metrics
Jan 11 20:02:57.393: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass volume-expand-1929-csi-hostpath-volume-expand-1929-scwm4n4
STEP: creating a claim
Jan 11 20:02:57.482: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 11 20:02:57.573: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathmzrlr] to have phase Bound
Jan 11 20:02:57.662: INFO: PersistentVolumeClaim csi-hostpathmzrlr found but phase is Pending instead of Bound.
Jan 11 20:02:59.751: INFO: PersistentVolumeClaim csi-hostpathmzrlr found but phase is Pending instead of Bound.
Jan 11 20:03:01.841: INFO: PersistentVolumeClaim csi-hostpathmzrlr found but phase is Pending instead of Bound.
Jan 11 20:03:03.931: INFO: PersistentVolumeClaim csi-hostpathmzrlr found but phase is Pending instead of Bound.
Jan 11 20:03:06.021: INFO: PersistentVolumeClaim csi-hostpathmzrlr found and phase=Bound (8.447796679s)
STEP: Creating a pod with dynamically provisioned volume
STEP: Expanding current pvc
Jan 11 20:03:10.559: INFO: currentPvcSize {{5368709120 0} {} 5Gi BinarySI}, newSize {{6442450944 0} {}  BinarySI}
STEP: Waiting for cloudprovider resize to finish
STEP: Waiting for file system resize to finish
Jan 11 20:04:41.008: INFO: Deleting pod "security-context-1631323d-a32e-4bfa-903d-b1e91a87e982" in namespace "volume-expand-1929"
Jan 11 20:04:41.099: INFO: Wait up to 5m0s for pod "security-context-1631323d-a32e-4bfa-903d-b1e91a87e982" to be fully deleted
STEP: Deleting pod
Jan 11 20:04:45.279: INFO: Deleting pod "security-context-1631323d-a32e-4bfa-903d-b1e91a87e982" in namespace "volume-expand-1929"
STEP: Deleting pvc
Jan 11 20:04:45.369: INFO: Deleting PersistentVolumeClaim "csi-hostpathmzrlr"
Jan 11 20:04:45.459: INFO: Waiting up to 5m0s for PersistentVolume pvc-812082be-35cc-447a-9cf3-a5ceb0012b99 to get deleted
Jan 11 20:04:45.549: INFO: PersistentVolume pvc-812082be-35cc-447a-9cf3-a5ceb0012b99 was removed
STEP: Deleting sc
STEP: uninstalling csi-hostpath driver
Jan 11 20:04:45.640: INFO: deleting *v1.ServiceAccount: volume-expand-1929/csi-attacher
Jan 11 20:04:45.731: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-1929
Jan 11 20:04:45.823: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-1929
Jan 11 20:04:45.914: INFO: deleting *v1.Role: volume-expand-1929/external-attacher-cfg-volume-expand-1929
Jan 11 20:04:46.006: INFO: deleting *v1.RoleBinding: volume-expand-1929/csi-attacher-role-cfg
Jan 11 20:04:46.097: INFO: deleting *v1.ServiceAccount: volume-expand-1929/csi-provisioner
Jan 11 20:04:46.188: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-expand-1929
Jan 11 20:04:46.279: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-1929
Jan 11 20:04:46.371: INFO: deleting *v1.Role: volume-expand-1929/external-provisioner-cfg-volume-expand-1929
Jan 11 20:04:46.462: INFO: deleting *v1.RoleBinding: volume-expand-1929/csi-provisioner-role-cfg
Jan 11 20:04:46.553: INFO: deleting *v1.ServiceAccount: volume-expand-1929/csi-snapshotter
Jan 11 20:04:46.645: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volume-expand-1929
Jan 11 20:04:46.736: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-1929
Jan 11 20:04:46.827: INFO: deleting *v1.Role: volume-expand-1929/external-snapshotter-leaderelection-volume-expand-1929
Jan 11 20:04:46.918: INFO: deleting *v1.RoleBinding: volume-expand-1929/external-snapshotter-leaderelection
Jan 11 20:04:47.009: INFO: deleting *v1.ServiceAccount: volume-expand-1929/csi-resizer
Jan 11 20:04:47.102: INFO: deleting *v1.ClusterRole: external-resizer-runner-volume-expand-1929
Jan 11 20:04:47.193: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-1929
Jan 11 20:04:47.284: INFO: deleting *v1.Role: volume-expand-1929/external-resizer-cfg-volume-expand-1929
Jan 11 20:04:47.375: INFO: deleting *v1.RoleBinding: volume-expand-1929/csi-resizer-role-cfg
Jan 11 20:04:47.468: INFO: deleting *v1.Service: volume-expand-1929/csi-hostpath-attacher
Jan 11 20:04:47.564: INFO: deleting *v1.StatefulSet: volume-expand-1929/csi-hostpath-attacher
Jan 11 20:04:47.655: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volume-expand-1929
Jan 11 20:04:47.746: INFO: deleting *v1.Service: volume-expand-1929/csi-hostpathplugin
Jan 11 20:04:47.841: INFO: deleting *v1.StatefulSet: volume-expand-1929/csi-hostpathplugin
Jan 11 20:04:47.933: INFO: deleting *v1.Service: volume-expand-1929/csi-hostpath-provisioner
Jan 11 20:04:48.031: INFO: deleting *v1.StatefulSet: volume-expand-1929/csi-hostpath-provisioner
Jan 11 20:04:48.123: INFO: deleting *v1.Service: volume-expand-1929/csi-hostpath-resizer
Jan 11 20:04:48.218: INFO: deleting *v1.StatefulSet: volume-expand-1929/csi-hostpath-resizer
Jan 11 20:04:48.312: INFO: deleting *v1.Service: volume-expand-1929/csi-snapshotter
Jan 11 20:04:48.409: INFO: deleting *v1.StatefulSet: volume-expand-1929/csi-snapshotter
Jan 11 20:04:48.500: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-1929
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:48.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-expand-1929" for this suite.
Jan 11 20:05:00.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:04.357: INFO: namespace volume-expand-1929 deletion completed in 15.673953396s


• [SLOW TEST:130.690 seconds]
[sig-storage] CSI Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should resize volume when PVC is edited while pod is using it
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:218
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:55.129: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename runtimeclass
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-1605
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:56
Jan 11 20:04:55.953: INFO: Waiting up to 5m0s for pod "test-runtimeclass-runtimeclass-1605-preconfigured-handler-vskc9" in namespace "runtimeclass-1605" to be "success or failure"
Jan 11 20:04:56.044: INFO: Pod "test-runtimeclass-runtimeclass-1605-preconfigured-handler-vskc9": Phase="Pending", Reason="", readiness=false. Elapsed: 91.503744ms
Jan 11 20:04:58.134: INFO: Pod "test-runtimeclass-runtimeclass-1605-preconfigured-handler-vskc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181672061s
STEP: Saw pod success
Jan 11 20:04:58.135: INFO: Pod "test-runtimeclass-runtimeclass-1605-preconfigured-handler-vskc9" satisfied condition "success or failure"
[AfterEach] [sig-node] RuntimeClass
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:04:58.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-1605" for this suite.
Jan 11 20:05:04.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:07.829: INFO: namespace runtimeclass-1605 deletion completed in 9.60253433s


• [SLOW TEST:12.700 seconds]
[sig-node] RuntimeClass
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:40
  should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:56
------------------------------
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:03.192: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1370
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run CronJob
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631
[It] should create a CronJob
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1640
Jan 11 20:05:04.137: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run e2e-test-echo-cronjob-beta --restart=OnFailure --generator=cronjob/v1beta1 --schedule=*/5 * * * ? --image=docker.io/library/busybox:1.29 --namespace=kubectl-1370'
Jan 11 20:05:04.598: INFO: stderr: "kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 11 20:05:04.598: INFO: stdout: "cronjob.batch/e2e-test-echo-cronjob-beta created\n"
STEP: verifying the CronJob e2e-test-echo-cronjob-beta was created
[AfterEach] Kubectl run CronJob
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1636
Jan 11 20:05:04.689: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete cronjobs e2e-test-echo-cronjob-beta --namespace=kubectl-1370'
Jan 11 20:05:05.246: INFO: stderr: ""
Jan 11 20:05:05.246: INFO: stdout: "cronjob.batch \"e2e-test-echo-cronjob-beta\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:05:05.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1370" for this suite.
Jan 11 20:05:11.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:14.930: INFO: namespace kubectl-1370 deletion completed in 9.592589287s


• [SLOW TEST:11.737 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run CronJob
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1627
    should create a CronJob
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1640
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:07.834: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-9931
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
Jan 11 20:05:08.953: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path
Jan 11 20:05:09.137: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9931" in namespace "provisioning-9931" to be "success or failure"
Jan 11 20:05:09.227: INFO: Pod "hostpath-symlink-prep-provisioning-9931": Phase="Pending", Reason="", readiness=false. Elapsed: 89.863005ms
Jan 11 20:05:11.317: INFO: Pod "hostpath-symlink-prep-provisioning-9931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.18001483s
STEP: Saw pod success
Jan 11 20:05:11.317: INFO: Pod "hostpath-symlink-prep-provisioning-9931" satisfied condition "success or failure"
Jan 11 20:05:11.317: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9931" in namespace "provisioning-9931"
Jan 11 20:05:11.409: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9931" to be fully deleted
Jan 11 20:05:11.500: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-hostpathsymlink-mdwn
STEP: Creating a pod to test subpath
Jan 11 20:05:11.591: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpathsymlink-mdwn" in namespace "provisioning-9931" to be "success or failure"
Jan 11 20:05:11.681: INFO: Pod "pod-subpath-test-hostpathsymlink-mdwn": Phase="Pending", Reason="", readiness=false. Elapsed: 89.944344ms
Jan 11 20:05:13.772: INFO: Pod "pod-subpath-test-hostpathsymlink-mdwn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181142204s
STEP: Saw pod success
Jan 11 20:05:13.772: INFO: Pod "pod-subpath-test-hostpathsymlink-mdwn" satisfied condition "success or failure"
Jan 11 20:05:13.862: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpathsymlink-mdwn container test-container-subpath-hostpathsymlink-mdwn: 
STEP: delete the pod
Jan 11 20:05:14.200: INFO: Waiting for pod pod-subpath-test-hostpathsymlink-mdwn to disappear
Jan 11 20:05:14.290: INFO: Pod pod-subpath-test-hostpathsymlink-mdwn no longer exists
STEP: Deleting pod pod-subpath-test-hostpathsymlink-mdwn
Jan 11 20:05:14.290: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-mdwn" in namespace "provisioning-9931"
STEP: Deleting pod
Jan 11 20:05:14.379: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-mdwn" in namespace "provisioning-9931"
Jan 11 20:05:14.560: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9931" in namespace "provisioning-9931" to be "success or failure"
Jan 11 20:05:14.651: INFO: Pod "hostpath-symlink-prep-provisioning-9931": Phase="Pending", Reason="", readiness=false. Elapsed: 89.964384ms
Jan 11 20:05:16.741: INFO: Pod "hostpath-symlink-prep-provisioning-9931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180187492s
STEP: Saw pod success
Jan 11 20:05:16.741: INFO: Pod "hostpath-symlink-prep-provisioning-9931" satisfied condition "success or failure"
Jan 11 20:05:16.741: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9931" in namespace "provisioning-9931"
Jan 11 20:05:16.838: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9931" to be fully deleted
Jan 11 20:05:16.928: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:05:16.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9931" for this suite.
Jan 11 20:05:23.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:26.614: INFO: namespace provisioning-9931 deletion completed in 9.593736644s


• [SLOW TEST:18.781 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support existing single file [LinuxOnly]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:15.302: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8649
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:74
STEP: Creating configMap with name configmap-test-volume-62f6d562-b9ae-4756-b814-ad6782b8adbd
STEP: Creating a pod to test consume configMaps
Jan 11 20:05:16.169: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2da55a1-e3eb-4461-b930-6d4b66644fe7" in namespace "configmap-8649" to be "success or failure"
Jan 11 20:05:16.260: INFO: Pod "pod-configmaps-a2da55a1-e3eb-4461-b930-6d4b66644fe7": Phase="Pending", Reason="", readiness=false. Elapsed: 90.480027ms
Jan 11 20:05:18.350: INFO: Pod "pod-configmaps-a2da55a1-e3eb-4461-b930-6d4b66644fe7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180920045s
STEP: Saw pod success
Jan 11 20:05:18.350: INFO: Pod "pod-configmaps-a2da55a1-e3eb-4461-b930-6d4b66644fe7" satisfied condition "success or failure"
Jan 11 20:05:18.440: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-a2da55a1-e3eb-4461-b930-6d4b66644fe7 container configmap-volume-test: 
STEP: delete the pod
Jan 11 20:05:18.636: INFO: Waiting for pod pod-configmaps-a2da55a1-e3eb-4461-b930-6d4b66644fe7 to disappear
Jan 11 20:05:18.726: INFO: Pod pod-configmaps-a2da55a1-e3eb-4461-b930-6d4b66644fe7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:05:18.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8649" for this suite.
Jan 11 20:05:25.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:28.418: INFO: namespace configmap-8649 deletion completed in 9.600198314s


• [SLOW TEST:13.116 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:74
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:26.623: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-5488
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:05:35.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5488" for this suite.
Jan 11 20:05:41.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:45.133: INFO: namespace job-5488 deletion completed in 9.597813773s


• [SLOW TEST:18.511 seconds]
[sig-apps] Job
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:04:39.900: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6943
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should handle load balancer cleanup finalizer for service [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2049
STEP: Create load balancer service
STEP: Wait for load balancer to serve traffic
Jan 11 20:04:40.738: INFO: Waiting up to 20m0s for service "lb-finalizer" to have a LoadBalancer
STEP: Check if finalizer presents on service with type=LoadBalancer
STEP: Wait for service to hasFinalizer=true
STEP: Check if finalizer is removed on service after changed to type=ClusterIP
STEP: Wait for service to hasFinalizer=false
Jan 11 20:04:43.549: INFO: Service services-6943/lb-finalizer hasFinalizer=true, want false
STEP: Check if finalizer is added back to service after changed to type=LoadBalancer
STEP: Wait for service to hasFinalizer=true
STEP: Check that service can be deleted with finalizer
STEP: Delete service with finalizer
STEP: Wait for service to disappear
Jan 11 20:05:14.372: INFO: Service services-6943/lb-finalizer still exists with finalizers: [service.kubernetes.io/load-balancer-cleanup]
Jan 11 20:05:44.461: INFO: Service services-6943/lb-finalizer is gone.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:05:44.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6943" for this suite.
Jan 11 20:05:50.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:54.117: INFO: namespace services-6943 deletion completed in 9.564782193s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95


• [SLOW TEST:74.218 seconds]
[sig-network] Services
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should handle load balancer cleanup finalizer for service [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2049
------------------------------
SSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:01.309: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename events
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-1845
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 11 20:05:04.460: INFO: &Pod{ObjectMeta:{send-events-f3a408e6-085b-4749-93e8-5f2898ddf853  events-1845 /api/v1/namespaces/events-1845/pods/send-events-f3a408e6-085b-4749-93e8-5f2898ddf853 bd6529b0-b2d9-41ca-a9f3-8dc23bccd648 67645 0 2020-01-11 20:05:02 +0000 UTC   map[name:foo time:965067002] map[cni.projectcalico.org/podIP:100.64.1.93/32 kubernetes.io/psp:e2e-test-privileged-psp] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hsc8j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hsc8j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.6,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hsc8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:05:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:05:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:05:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.93,StartTime:2020-01-11 20:05:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 20:05:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.6,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727,ContainerID:docker://be6fa289011d9e97c14b673139ee9106df40e794dc3d0c32c7a4e93ef24f2c49,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 11 20:05:06.550: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 11 20:05:08.640: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:05:08.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1845" for this suite.
Jan 11 20:05:53.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:05:56.401: INFO: namespace events-1845 deletion completed in 47.579150151s


• [SLOW TEST:55.093 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:45.154: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-2960
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 11 20:05:46.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2960 /api/v1/namespaces/watch-2960/configmaps/e2e-watch-test-label-changed dcbd0209-7dc7-48f4-b74d-470ead5001b9 67937 0 2020-01-11 20:05:45 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 11 20:05:46.336: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2960 /api/v1/namespaces/watch-2960/configmaps/e2e-watch-test-label-changed dcbd0209-7dc7-48f4-b74d-470ead5001b9 67938 0 2020-01-11 20:05:45 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 11 20:05:46.337: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2960 /api/v1/namespaces/watch-2960/configmaps/e2e-watch-test-label-changed dcbd0209-7dc7-48f4-b74d-470ead5001b9 67940 0 2020-01-11 20:05:45 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 11 20:05:56.969: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2960 /api/v1/namespaces/watch-2960/configmaps/e2e-watch-test-label-changed dcbd0209-7dc7-48f4-b74d-470ead5001b9 67985 0 2020-01-11 20:05:45 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 11 20:05:56.969: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2960 /api/v1/namespaces/watch-2960/configmaps/e2e-watch-test-label-changed dcbd0209-7dc7-48f4-b74d-470ead5001b9 67987 0 2020-01-11 20:05:45 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 11 20:05:56.969: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2960 /api/v1/namespaces/watch-2960/configmaps/e2e-watch-test-label-changed dcbd0209-7dc7-48f4-b74d-470ead5001b9 67988 0 2020-01-11 20:05:45 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:05:56.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2960" for this suite.
Jan 11 20:06:03.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:06:06.650: INFO: namespace watch-2960 deletion completed in 9.589882844s


• [SLOW TEST:21.496 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:56.413: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-9722
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test env composition
Jan 11 20:05:57.144: INFO: Waiting up to 5m0s for pod "var-expansion-f5cfd170-0f96-4560-9887-124b524443b7" in namespace "var-expansion-9722" to be "success or failure"
Jan 11 20:05:57.234: INFO: Pod "var-expansion-f5cfd170-0f96-4560-9887-124b524443b7": Phase="Pending", Reason="", readiness=false. Elapsed: 89.653874ms
Jan 11 20:05:59.323: INFO: Pod "var-expansion-f5cfd170-0f96-4560-9887-124b524443b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179270333s
STEP: Saw pod success
Jan 11 20:05:59.323: INFO: Pod "var-expansion-f5cfd170-0f96-4560-9887-124b524443b7" satisfied condition "success or failure"
Jan 11 20:05:59.413: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod var-expansion-f5cfd170-0f96-4560-9887-124b524443b7 container dapi-container: 
STEP: delete the pod
Jan 11 20:05:59.602: INFO: Waiting for pod var-expansion-f5cfd170-0f96-4560-9887-124b524443b7 to disappear
Jan 11 20:05:59.691: INFO: Pod var-expansion-f5cfd170-0f96-4560-9887-124b524443b7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:05:59.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9722" for this suite.
Jan 11 20:06:06.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:06:09.349: INFO: namespace var-expansion-9722 deletion completed in 9.567690923s


• [SLOW TEST:12.937 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:54.133: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-8578
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:110
[BeforeEach] NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:127
STEP: creating nfs-server pod
STEP: locating the "nfs-server" server pod
Jan 11 20:05:57.132: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs nfs-server nfs-server --namespace=pv-8578'
Jan 11 20:05:57.688: INFO: stderr: ""
Jan 11 20:05:57.688: INFO: stdout: "Serving /exports\nrpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused\nStarting rpcbind\nNFS started\n"
Jan 11 20:05:57.688: INFO: nfs server pod IP address: 100.64.1.106
[It] should create a non-pre-bound PV and PVC: test write access 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:167
Jan 11 20:05:57.688: INFO: Creating a PV followed by a PVC
STEP: Validating the PV-PVC binding
Jan 11 20:05:57.867: INFO: Waiting for PV nfs-x96cv to bind to PVC pvc-7k4ps
Jan 11 20:05:57.867: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-7k4ps] to have phase Bound
Jan 11 20:05:57.956: INFO: PersistentVolumeClaim pvc-7k4ps found and phase=Bound (89.212243ms)
Jan 11 20:05:57.956: INFO: Waiting up to 3m0s for PersistentVolume nfs-x96cv to have phase Bound
Jan 11 20:05:58.046: INFO: PersistentVolume nfs-x96cv found and phase=Bound (89.576057ms)
STEP: Checking pod has write access to PersistentVolume
Jan 11 20:05:58.224: INFO: Creating nfs test pod
STEP: Pod should terminate with exitcode 0 (success)
Jan 11 20:05:58.315: INFO: Waiting up to 5m0s for pod "pvc-tester-qdxsz" in namespace "pv-8578" to be "success or failure"
Jan 11 20:05:58.404: INFO: Pod "pvc-tester-qdxsz": Phase="Pending", Reason="", readiness=false. Elapsed: 88.949988ms
Jan 11 20:06:00.493: INFO: Pod "pvc-tester-qdxsz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178638275s
STEP: Saw pod success
Jan 11 20:06:00.494: INFO: Pod "pvc-tester-qdxsz" satisfied condition "success or failure"
Jan 11 20:06:00.494: INFO: Pod pvc-tester-qdxsz succeeded 
Jan 11 20:06:00.494: INFO: Deleting pod "pvc-tester-qdxsz" in namespace "pv-8578"
Jan 11 20:06:00.586: INFO: Wait up to 5m0s for pod "pvc-tester-qdxsz" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jan 11 20:06:00.676: INFO: Deleting PVC pvc-7k4ps to trigger reclamation of PV 
Jan 11 20:06:00.676: INFO: Deleting PersistentVolumeClaim "pvc-7k4ps"
Jan 11 20:06:00.766: INFO: Waiting for reclaim process to complete.
Jan 11 20:06:00.766: INFO: Waiting up to 3m0s for PersistentVolume nfs-x96cv to have phase Released
Jan 11 20:06:00.855: INFO: PersistentVolume nfs-x96cv found and phase=Released (89.30816ms)
Jan 11 20:06:00.944: INFO: PV nfs-x96cv now in "Released" phase
[AfterEach] with Single PV - PVC pairs
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
Jan 11 20:06:00.944: INFO: AfterEach: Cleaning up test resources.
Jan 11 20:06:00.944: INFO: Deleting PersistentVolumeClaim "pvc-7k4ps"
Jan 11 20:06:01.033: INFO: Deleting PersistentVolume "nfs-x96cv"
[AfterEach] NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:147
Jan 11 20:06:01.127: INFO: Deleting pod "nfs-server" in namespace "pv-8578"
Jan 11 20:06:01.217: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted
[AfterEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:06:09.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8578" for this suite.
Jan 11 20:06:17.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:06:21.064: INFO: namespace pv-8578 deletion completed in 11.57436439s


• [SLOW TEST:26.931 seconds]
[sig-storage] PersistentVolumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:120
    with Single PV - PVC pairs
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:153
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:167
------------------------------
SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:28.429: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename disruption
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-7899
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52
[It] evictions: enough pods, absolute => should allow an eviction
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149
STEP: Waiting for the pdb to be processed
STEP: locating a running pod
STEP: Waiting for all pods to be running
[AfterEach] [sig-apps] DisruptionController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:05:31.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7899" for this suite.
Jan 11 20:06:20.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:06:23.595: INFO: namespace disruption-7899 deletion completed in 51.612876207s


• [SLOW TEST:55.167 seconds]
[sig-apps] DisruptionController
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149
------------------------------
SSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:17.208: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename cronjob
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-2724
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:55
[It] should not schedule jobs when suspended [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:83
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:06:18.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-2724" for this suite.
Jan 11 20:06:24.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:06:28.079: INFO: namespace cronjob-2724 deletion completed in 9.59527831s


• [SLOW TEST:310.871 seconds]
[sig-apps] CronJob
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:83
------------------------------
SSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:05:04.438: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1576
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name cm-test-opt-del-4143aa8a-a887-4926-a52c-600eff49215a
STEP: Creating configMap with name cm-test-opt-upd-eddf7b86-45ef-423c-bec7-c79c1c722ddd
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-4143aa8a-a887-4926-a52c-600eff49215a
STEP: Updating configmap cm-test-opt-upd-eddf7b86-45ef-423c-bec7-c79c1c722ddd
STEP: Creating configMap with name cm-test-opt-create-316f7428-c8e8-4fc4-af7c-8e0ce44bd9a7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:06:24.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1576" for this suite.
Jan 11 20:06:38.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:06:42.088: INFO: namespace configmap-1576 deletion completed in 17.580042203s


• [SLOW TEST:97.650 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
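The ConfigMap volume spec above marks its sources optional, so the pod starts even while a referenced ConfigMap is absent, and the projected files follow later creates, updates and deletes. A minimal sketch of one such optional mount (not the test's pod; names are illustrative), with a usage example showing a late create propagating into the volume:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: optional-cm
      mountPath: /etc/cm
  volumes:
  - name: optional-cm
    configMap:
      name: cm-test-opt-create      # may not exist yet; optional keeps the pod running
      optional: true
EOF

# Creating the ConfigMap afterwards is eventually reflected under the mount path
# once the kubelet resyncs the volume.
kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1
kubectl exec cm-volume-demo -- cat /etc/cm/data-1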
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:01:26.363: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5250
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation when the configMap object does not exist [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:558
STEP: Creating the pod
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:06:27.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5250" for this suite.
Jan 11 20:06:57.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:01.118: INFO: namespace configmap-5250 deletion completed in 33.577091745s


• [SLOW TEST:334.755 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  Should fail non-optional pod creation when the configMap object does not exist [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:558
------------------------------
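This spec covers the opposite case: with optional omitted (it defaults to false) and the named ConfigMap missing, the kubelet cannot set up the volume and the pod never starts, which is why the test only creates the pod and then waits out the failure. A contrasting sketch (illustrative names):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-missing-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: does-not-exist          # non-optional and absent: mount fails, pod stays Pending
EOF

# The pod remains in ContainerCreating with FailedMount events.
kubectl describe pod cm-missing-demo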
S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:06:06.657: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-4982
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-4982
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating stateful set ss in namespace statefulset-4982
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4982
Jan 11 20:06:07.570: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 11 20:06:17.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 11 20:06:17.751: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4982 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 20:06:19.092: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 20:06:19.092: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 20:06:19.092: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 11 20:06:19.182: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 20:06:19.182: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 20:06:19.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999616s
Jan 11 20:06:20.633: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.909052364s
Jan 11 20:06:21.724: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.818337895s
Jan 11 20:06:22.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.727476686s
Jan 11 20:06:23.906: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.636729414s
Jan 11 20:06:24.996: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.545978021s
Jan 11 20:06:26.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.455251024s
Jan 11 20:06:27.181: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.360624991s
Jan 11 20:06:28.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.270215732s
Jan 11 20:06:29.363: INFO: Verifying statefulset ss doesn't scale past 3 for another 179.076049ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4982
Jan 11 20:06:30.454: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4982 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:06:31.931: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 11 20:06:31.931: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 20:06:31.931: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 11 20:06:31.931: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4982 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:06:33.519: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 11 20:06:33.519: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 20:06:33.519: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 11 20:06:33.519: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4982 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 20:06:34.831: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 11 20:06:34.831: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 20:06:34.831: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 11 20:06:34.921: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 20:06:34.921: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 20:06:34.921: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 11 20:06:35.011: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4982 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 20:06:36.387: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 20:06:36.387: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 20:06:36.387: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 11 20:06:36.387: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4982 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 20:06:37.671: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 20:06:37.671: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 20:06:37.671: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 11 20:06:37.671: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4982 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 20:06:39.017: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 20:06:39.017: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 20:06:39.017: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 11 20:06:39.017: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 20:06:39.198: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 20:06:39.198: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 20:06:39.198: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 20:06:39.468: INFO: POD   NODE                          PHASE    GRACE  CONDITIONS
Jan 11 20:06:39.468: INFO: ss-0  ip-10-250-27-25.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:07 +0000 UTC  }]
Jan 11 20:06:39.468: INFO: ss-1  ip-10-250-7-77.ec2.internal   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:39.468: INFO: ss-2  ip-10-250-7-77.ec2.internal   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:39.468: INFO: 
Jan 11 20:06:39.468: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 20:06:40.558: INFO: POD   NODE                          PHASE    GRACE  CONDITIONS
Jan 11 20:06:40.558: INFO: ss-0  ip-10-250-27-25.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:07 +0000 UTC  }]
Jan 11 20:06:40.558: INFO: ss-1  ip-10-250-7-77.ec2.internal   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:40.558: INFO: ss-2  ip-10-250-7-77.ec2.internal   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:40.558: INFO: 
Jan 11 20:06:40.558: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 20:06:41.649: INFO: POD   NODE                          PHASE    GRACE  CONDITIONS
Jan 11 20:06:41.649: INFO: ss-0  ip-10-250-27-25.ec2.internal  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:07 +0000 UTC  }]
Jan 11 20:06:41.649: INFO: ss-1  ip-10-250-7-77.ec2.internal   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:41.649: INFO: ss-2  ip-10-250-7-77.ec2.internal   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:41.649: INFO: 
Jan 11 20:06:41.649: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 20:06:42.740: INFO: POD   NODE                         PHASE    GRACE  CONDITIONS
Jan 11 20:06:42.740: INFO: ss-1  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:42.740: INFO: ss-2  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:42.740: INFO: 
Jan 11 20:06:42.740: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 11 20:06:43.830: INFO: POD   NODE                         PHASE    GRACE  CONDITIONS
Jan 11 20:06:43.830: INFO: ss-1  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:43.830: INFO: ss-2  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:43.830: INFO: 
Jan 11 20:06:43.830: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 11 20:06:44.921: INFO: POD   NODE                         PHASE    GRACE  CONDITIONS
Jan 11 20:06:44.921: INFO: ss-1  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:44.921: INFO: ss-2  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:44.921: INFO: 
Jan 11 20:06:44.921: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 11 20:06:46.012: INFO: POD   NODE                         PHASE    GRACE  CONDITIONS
Jan 11 20:06:46.012: INFO: ss-1  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:46.012: INFO: ss-2  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:46.012: INFO: 
Jan 11 20:06:46.012: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 11 20:06:47.103: INFO: POD   NODE                         PHASE    GRACE  CONDITIONS
Jan 11 20:06:47.103: INFO: ss-1  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:47.103: INFO: ss-2  ip-10-250-7-77.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:06:19 +0000 UTC  }]
Jan 11 20:06:47.103: INFO: 
Jan 11 20:06:47.103: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 11 20:06:48.193: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.274315714s
Jan 11 20:06:49.283: INFO: Verifying statefulset ss doesn't scale past 0 for another 184.013932ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4982
Jan 11 20:06:50.374: INFO: Scaling statefulset ss to 0
Jan 11 20:06:50.644: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Jan 11 20:06:50.734: INFO: Deleting all statefulset in ns statefulset-4982
Jan 11 20:06:50.825: INFO: Scaling statefulset ss to 0
Jan 11 20:06:51.094: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 20:06:51.184: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:06:51.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4982" for this suite.
Jan 11 20:06:57.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:01.141: INFO: namespace statefulset-4982 deletion completed in 9.59487279s


• [SLOW TEST:54.483 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
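The burst-scaling behaviour exercised above comes from podManagementPolicy: Parallel, under which the StatefulSet controller creates and deletes pods without the default ordered, one-at-a-time readiness gating; that is why scaling continued even while the test deliberately broke the httpd readiness probe by moving index.html out of the web root. A minimal sketch of such a StatefulSet (illustrative names; it assumes a matching headless Service named test already exists, as in the spec's setup):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: Parallel     # burst scaling: no ordered readiness gating
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine
        readinessProbe:
          httpGet:
            path: /index.html       # moving this file makes the pod unready
            port: 80
EOF

kubectl scale statefulset ss --replicas=3   # scale-up proceeds in parallel
kubectl scale statefulset ss --replicas=0   # scale-down is not blocked by unready pods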
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:06:09.355: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5738
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
STEP: deploying csi-hostpath driver
Jan 11 20:06:10.192: INFO: creating *v1.ServiceAccount: provisioning-5738/csi-attacher
Jan 11 20:06:10.282: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-5738
Jan 11 20:06:10.282: INFO: Define cluster role external-attacher-runner-provisioning-5738
Jan 11 20:06:10.371: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-5738
Jan 11 20:06:10.461: INFO: creating *v1.Role: provisioning-5738/external-attacher-cfg-provisioning-5738
Jan 11 20:06:10.550: INFO: creating *v1.RoleBinding: provisioning-5738/csi-attacher-role-cfg
Jan 11 20:06:10.640: INFO: creating *v1.ServiceAccount: provisioning-5738/csi-provisioner
Jan 11 20:06:10.730: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-5738
Jan 11 20:06:10.730: INFO: Define cluster role external-provisioner-runner-provisioning-5738
Jan 11 20:06:10.819: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-5738
Jan 11 20:06:10.909: INFO: creating *v1.Role: provisioning-5738/external-provisioner-cfg-provisioning-5738
Jan 11 20:06:10.999: INFO: creating *v1.RoleBinding: provisioning-5738/csi-provisioner-role-cfg
Jan 11 20:06:11.088: INFO: creating *v1.ServiceAccount: provisioning-5738/csi-snapshotter
Jan 11 20:06:11.178: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-5738
Jan 11 20:06:11.178: INFO: Define cluster role external-snapshotter-runner-provisioning-5738
Jan 11 20:06:11.269: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-5738
Jan 11 20:06:11.360: INFO: creating *v1.Role: provisioning-5738/external-snapshotter-leaderelection-provisioning-5738
Jan 11 20:06:11.449: INFO: creating *v1.RoleBinding: provisioning-5738/external-snapshotter-leaderelection
Jan 11 20:06:11.538: INFO: creating *v1.ServiceAccount: provisioning-5738/csi-resizer
Jan 11 20:06:11.628: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-5738
Jan 11 20:06:11.628: INFO: Define cluster role external-resizer-runner-provisioning-5738
Jan 11 20:06:11.720: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-5738
Jan 11 20:06:11.809: INFO: creating *v1.Role: provisioning-5738/external-resizer-cfg-provisioning-5738
Jan 11 20:06:11.899: INFO: creating *v1.RoleBinding: provisioning-5738/csi-resizer-role-cfg
Jan 11 20:06:11.989: INFO: creating *v1.Service: provisioning-5738/csi-hostpath-attacher
Jan 11 20:06:12.085: INFO: creating *v1.StatefulSet: provisioning-5738/csi-hostpath-attacher
Jan 11 20:06:12.175: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-5738
Jan 11 20:06:12.265: INFO: creating *v1.Service: provisioning-5738/csi-hostpathplugin
Jan 11 20:06:12.358: INFO: creating *v1.StatefulSet: provisioning-5738/csi-hostpathplugin
Jan 11 20:06:12.448: INFO: creating *v1.Service: provisioning-5738/csi-hostpath-provisioner
Jan 11 20:06:12.541: INFO: creating *v1.StatefulSet: provisioning-5738/csi-hostpath-provisioner
Jan 11 20:06:12.631: INFO: creating *v1.Service: provisioning-5738/csi-hostpath-resizer
Jan 11 20:06:12.724: INFO: creating *v1.StatefulSet: provisioning-5738/csi-hostpath-resizer
Jan 11 20:06:12.814: INFO: creating *v1.Service: provisioning-5738/csi-snapshotter
Jan 11 20:06:12.908: INFO: creating *v1.StatefulSet: provisioning-5738/csi-snapshotter
Jan 11 20:06:12.998: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-5738
Jan 11 20:06:13.088: INFO: Test running for native CSI Driver, not checking metrics
Jan 11 20:06:13.088: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass provisioning-5738-csi-hostpath-provisioning-5738-scfcpjg
STEP: creating a claim
Jan 11 20:06:13.178: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 11 20:06:13.269: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath9sk5g] to have phase Bound
Jan 11 20:06:13.358: INFO: PersistentVolumeClaim csi-hostpath9sk5g found but phase is Pending instead of Bound.
Jan 11 20:06:15.447: INFO: PersistentVolumeClaim csi-hostpath9sk5g found and phase=Bound (2.178513928s)
STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-5gs9
STEP: Creating a pod to test subpath
Jan 11 20:06:15.716: INFO: Waiting up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9" in namespace "provisioning-5738" to be "success or failure"
Jan 11 20:06:15.806: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 89.29921ms
Jan 11 20:06:17.898: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181282531s
Jan 11 20:06:19.988: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271102211s
Jan 11 20:06:22.078: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361647102s
Jan 11 20:06:24.168: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.45153216s
Jan 11 20:06:26.257: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.541087188s
Jan 11 20:06:28.347: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.630823782s
Jan 11 20:06:30.437: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.720313909s
Jan 11 20:06:32.526: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.809580229s
Jan 11 20:06:34.615: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.898744638s
Jan 11 20:06:36.705: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.988139817s
STEP: Saw pod success
Jan 11 20:06:36.705: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9" satisfied condition "success or failure"
Jan 11 20:06:36.794: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-csi-hostpath-dynamicpv-5gs9 container test-container-subpath-csi-hostpath-dynamicpv-5gs9: 
STEP: delete the pod
Jan 11 20:06:36.987: INFO: Waiting for pod pod-subpath-test-csi-hostpath-dynamicpv-5gs9 to disappear
Jan 11 20:06:37.077: INFO: Pod pod-subpath-test-csi-hostpath-dynamicpv-5gs9 no longer exists
STEP: Deleting pod pod-subpath-test-csi-hostpath-dynamicpv-5gs9
Jan 11 20:06:37.077: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9" in namespace "provisioning-5738"
STEP: Deleting pod
Jan 11 20:06:37.166: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-5gs9" in namespace "provisioning-5738"
STEP: Deleting pvc
Jan 11 20:06:37.255: INFO: Deleting PersistentVolumeClaim "csi-hostpath9sk5g"
Jan 11 20:06:37.346: INFO: Waiting up to 5m0s for PersistentVolume pvc-89354698-52ad-44a8-ad78-5b50464c4305 to get deleted
Jan 11 20:06:37.435: INFO: PersistentVolume pvc-89354698-52ad-44a8-ad78-5b50464c4305 found and phase=Bound (89.073032ms)
Jan 11 20:06:42.524: INFO: PersistentVolume pvc-89354698-52ad-44a8-ad78-5b50464c4305 was removed
STEP: Deleting sc
STEP: uninstalling csi-hostpath driver
Jan 11 20:06:42.614: INFO: deleting *v1.ServiceAccount: provisioning-5738/csi-attacher
Jan 11 20:06:42.705: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-5738
Jan 11 20:06:42.796: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-5738
Jan 11 20:06:42.886: INFO: deleting *v1.Role: provisioning-5738/external-attacher-cfg-provisioning-5738
Jan 11 20:06:42.977: INFO: deleting *v1.RoleBinding: provisioning-5738/csi-attacher-role-cfg
Jan 11 20:06:43.068: INFO: deleting *v1.ServiceAccount: provisioning-5738/csi-provisioner
Jan 11 20:06:43.158: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-5738
Jan 11 20:06:43.249: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-5738
Jan 11 20:06:43.340: INFO: deleting *v1.Role: provisioning-5738/external-provisioner-cfg-provisioning-5738
Jan 11 20:06:43.430: INFO: deleting *v1.RoleBinding: provisioning-5738/csi-provisioner-role-cfg
Jan 11 20:06:43.521: INFO: deleting *v1.ServiceAccount: provisioning-5738/csi-snapshotter
Jan 11 20:06:43.611: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-5738
Jan 11 20:06:43.703: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-5738
Jan 11 20:06:43.794: INFO: deleting *v1.Role: provisioning-5738/external-snapshotter-leaderelection-provisioning-5738
Jan 11 20:06:43.887: INFO: deleting *v1.RoleBinding: provisioning-5738/external-snapshotter-leaderelection
Jan 11 20:06:43.977: INFO: deleting *v1.ServiceAccount: provisioning-5738/csi-resizer
Jan 11 20:06:44.068: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-5738
Jan 11 20:06:44.159: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-5738
Jan 11 20:06:44.250: INFO: deleting *v1.Role: provisioning-5738/external-resizer-cfg-provisioning-5738
Jan 11 20:06:44.341: INFO: deleting *v1.RoleBinding: provisioning-5738/csi-resizer-role-cfg
Jan 11 20:06:44.431: INFO: deleting *v1.Service: provisioning-5738/csi-hostpath-attacher
Jan 11 20:06:44.526: INFO: deleting *v1.StatefulSet: provisioning-5738/csi-hostpath-attacher
Jan 11 20:06:44.617: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-5738
Jan 11 20:06:44.709: INFO: deleting *v1.Service: provisioning-5738/csi-hostpathplugin
Jan 11 20:06:44.805: INFO: deleting *v1.StatefulSet: provisioning-5738/csi-hostpathplugin
Jan 11 20:06:44.896: INFO: deleting *v1.Service: provisioning-5738/csi-hostpath-provisioner
Jan 11 20:06:44.992: INFO: deleting *v1.StatefulSet: provisioning-5738/csi-hostpath-provisioner
Jan 11 20:06:45.083: INFO: deleting *v1.Service: provisioning-5738/csi-hostpath-resizer
Jan 11 20:06:45.179: INFO: deleting *v1.StatefulSet: provisioning-5738/csi-hostpath-resizer
Jan 11 20:06:45.270: INFO: deleting *v1.Service: provisioning-5738/csi-snapshotter
Jan 11 20:06:45.365: INFO: deleting *v1.StatefulSet: provisioning-5738/csi-snapshotter
Jan 11 20:06:45.456: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-5738
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:06:45.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled
STEP: Destroying namespace "provisioning-5738" for this suite.
WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled
Jan 11 20:06:57.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:01.218: INFO: namespace provisioning-5738 deletion completed in 15.580381836s


• [SLOW TEST:51.862 seconds]
[sig-storage] CSI Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
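The pod built by this spec mounts a dynamically provisioned CSI volume through a volumeMount that combines subPath with readOnly: true, which is the property being asserted. A minimal sketch of that shape (not the test's generated objects; the StorageClass name below is a placeholder for the per-test class created above, and all other names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-hostpath-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc  # placeholder for the test's generated StorageClass
  resources:
    requests:
      storage: 1Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-readonly-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/test
      subPath: demo-subdir           # only this directory of the volume is visible
      readOnly: true                 # writes through this mount are rejected
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: csi-hostpath-claim
EOF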
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:06:21.070: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-6847
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:110
[BeforeEach] NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:127
STEP: creating nfs-server pod
STEP: locating the "nfs-server" server pod
Jan 11 20:06:34.068: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs nfs-server nfs-server --namespace=pv-6847'
Jan 11 20:06:34.669: INFO: stderr: ""
Jan 11 20:06:34.669: INFO: stdout: "Serving /exports\nrpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused\nStarting rpcbind\nNFS started\n"
Jan 11 20:06:34.669: INFO: nfs server pod IP address: 100.64.0.125
[It] create a PVC and non-pre-bound PV: test write access
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:176
STEP: Creating a PVC followed by a PV
STEP: Validating the PV-PVC binding
Jan 11 20:06:34.849: INFO: Waiting for PV nfs-hfhrq to bind to PVC pvc-q4s47
Jan 11 20:06:34.849: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-q4s47] to have phase Bound
Jan 11 20:06:34.938: INFO: PersistentVolumeClaim pvc-q4s47 found but phase is Pending instead of Bound.
Jan 11 20:06:37.028: INFO: PersistentVolumeClaim pvc-q4s47 found but phase is Pending instead of Bound.
Jan 11 20:06:39.118: INFO: PersistentVolumeClaim pvc-q4s47 found and phase=Bound (4.268598368s)
Jan 11 20:06:39.118: INFO: Waiting up to 3m0s for PersistentVolume nfs-hfhrq to have phase Bound
Jan 11 20:06:39.207: INFO: PersistentVolume nfs-hfhrq found and phase=Bound (88.841125ms)
STEP: Checking pod has write access to PersistentVolume
Jan 11 20:06:39.384: INFO: Creating nfs test pod
STEP: Pod should terminate with exitcode 0 (success)
Jan 11 20:06:39.474: INFO: Waiting up to 5m0s for pod "pvc-tester-gvh6p" in namespace "pv-6847" to be "success or failure"
Jan 11 20:06:39.563: INFO: Pod "pvc-tester-gvh6p": Phase="Pending", Reason="", readiness=false. Elapsed: 88.905846ms
Jan 11 20:06:41.652: INFO: Pod "pvc-tester-gvh6p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.177971471s
STEP: Saw pod success
Jan 11 20:06:41.652: INFO: Pod "pvc-tester-gvh6p" satisfied condition "success or failure"
Jan 11 20:06:41.652: INFO: Pod pvc-tester-gvh6p succeeded 
Jan 11 20:06:41.652: INFO: Deleting pod "pvc-tester-gvh6p" in namespace "pv-6847"
Jan 11 20:06:41.744: INFO: Wait up to 5m0s for pod "pvc-tester-gvh6p" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jan 11 20:06:41.833: INFO: Deleting PVC pvc-q4s47 to trigger reclamation of PV 
Jan 11 20:06:41.833: INFO: Deleting PersistentVolumeClaim "pvc-q4s47"
Jan 11 20:06:41.923: INFO: Waiting for reclaim process to complete.
Jan 11 20:06:41.923: INFO: Waiting up to 3m0s for PersistentVolume nfs-hfhrq to have phase Released
Jan 11 20:06:42.012: INFO: PersistentVolume nfs-hfhrq found but phase is Bound instead of Released.
Jan 11 20:06:44.101: INFO: PersistentVolume nfs-hfhrq found and phase=Released (2.178437864s)
Jan 11 20:06:44.190: INFO: PV nfs-hfhrq now in "Released" phase
[AfterEach] with Single PV - PVC pairs
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
Jan 11 20:06:44.190: INFO: AfterEach: Cleaning up test resources.
Jan 11 20:06:44.190: INFO: Deleting PersistentVolumeClaim "pvc-q4s47"
Jan 11 20:06:44.281: INFO: Deleting PersistentVolume "nfs-hfhrq"
[AfterEach] NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:147
Jan 11 20:06:44.371: INFO: Deleting pod "nfs-server" in namespace "pv-6847"
Jan 11 20:06:44.461: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted
[AfterEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:06:58.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-6847" for this suite.
Jan 11 20:07:04.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:08.298: INFO: namespace pv-6847 deletion completed in 9.567469253s


• [SLOW TEST:47.228 seconds]
[sig-storage] PersistentVolumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:120
    with Single PV - PVC pairs
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:153
      create a PVC and non-pre-bound PV: test write access
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:176
------------------------------
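The "non-pre-bound" part of this spec means the PV carries no claimRef, so the volume binder matches it to the PVC purely by capacity, access mode and (empty) storage class. A minimal sketch of such a pair (illustrative names; the NFS server address stands in for the nfs-server pod IP reported above):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 100.64.0.125            # placeholder: the nfs-server pod IP
    path: /exports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: ""              # keep the claim out of dynamic provisioning
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# The PV moves Available -> Bound once matched, and to Released after the PVC is deleted.
kubectl get pv nfs-pv -w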
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:06:42.102: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1532
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:01.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1532" for this suite.
Jan 11 20:07:07.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:10.847: INFO: namespace resourcequota-1532 deletion completed in 9.58063879s


• [SLOW TEST:28.745 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
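The quota object driven here counts secrets: status.used.secrets rises when a Secret is created and drops again when it is deleted, which is exactly the sequence the STEP lines show. A minimal sketch (the limit and names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-secrets
spec:
  hard:
    secrets: "10"                   # cap on the number of Secrets in the namespace
EOF

kubectl create secret generic quota-demo --from-literal=k=v
kubectl get resourcequota quota-secrets -o jsonpath='{.status.used.secrets}'; echo
kubectl delete secret quota-demo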
S
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:01.221: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename clientset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in clientset-804
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Generated clientset
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:217
[It] should create v1beta1 cronJobs, delete cronJobs, watch cronJobs
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:221
STEP: constructing the cronJob
STEP: setting up watch
STEP: creating the cronJob
STEP: verifying the cronJob is in kubernetes
STEP: verifying cronJob creation was observed
STEP: deleting the cronJob
[AfterEach] [sig-api-machinery] Generated clientset
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:02.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-804" for this suite.
Jan 11 20:07:09.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:12.367: INFO: namespace clientset-804 deletion completed in 9.603997551s


• [SLOW TEST:11.146 seconds]
[sig-api-machinery] Generated clientset
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create v1beta1 cronJobs, delete cronJobs, watch cronJobs
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:221
------------------------------
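This spec drives the generated Go clientset directly (construct, create, watch and delete a batch/v1beta1 CronJob). A rough command-line analogue of the same flow, with illustrative names:

# In one terminal, watch CronJob events on the v1beta1 API:
kubectl get cronjobs.v1beta1.batch --watch
# In another terminal, create the object and then delete it; the watch reports
# the corresponding ADDED and DELETED events.
kubectl create cronjob clientset-demo --image=busybox --schedule="*/1 * * * *" -- date
kubectl delete cronjob clientset-demo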
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:06:23.602: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-1947
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:425
STEP: deploying csi-hostpath driver
Jan 11 20:06:24.651: INFO: creating *v1.ServiceAccount: provisioning-1947/csi-attacher
Jan 11 20:06:24.742: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-1947
Jan 11 20:06:24.742: INFO: Define cluster role external-attacher-runner-provisioning-1947
Jan 11 20:06:24.832: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-1947
Jan 11 20:06:24.922: INFO: creating *v1.Role: provisioning-1947/external-attacher-cfg-provisioning-1947
Jan 11 20:06:25.012: INFO: creating *v1.RoleBinding: provisioning-1947/csi-attacher-role-cfg
Jan 11 20:06:25.103: INFO: creating *v1.ServiceAccount: provisioning-1947/csi-provisioner
Jan 11 20:06:25.193: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-1947
Jan 11 20:06:25.193: INFO: Define cluster role external-provisioner-runner-provisioning-1947
Jan 11 20:06:25.283: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-1947
Jan 11 20:06:25.373: INFO: creating *v1.Role: provisioning-1947/external-provisioner-cfg-provisioning-1947
Jan 11 20:06:25.463: INFO: creating *v1.RoleBinding: provisioning-1947/csi-provisioner-role-cfg
Jan 11 20:06:25.552: INFO: creating *v1.ServiceAccount: provisioning-1947/csi-snapshotter
Jan 11 20:06:25.642: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-1947
Jan 11 20:06:25.642: INFO: Define cluster role external-snapshotter-runner-provisioning-1947
Jan 11 20:06:25.733: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-1947
Jan 11 20:06:25.825: INFO: creating *v1.Role: provisioning-1947/external-snapshotter-leaderelection-provisioning-1947
Jan 11 20:06:25.915: INFO: creating *v1.RoleBinding: provisioning-1947/external-snapshotter-leaderelection
Jan 11 20:06:26.005: INFO: creating *v1.ServiceAccount: provisioning-1947/csi-resizer
Jan 11 20:06:26.095: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-1947
Jan 11 20:06:26.095: INFO: Define cluster role external-resizer-runner-provisioning-1947
Jan 11 20:06:26.186: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-1947
Jan 11 20:06:26.275: INFO: creating *v1.Role: provisioning-1947/external-resizer-cfg-provisioning-1947
Jan 11 20:06:26.366: INFO: creating *v1.RoleBinding: provisioning-1947/csi-resizer-role-cfg
Jan 11 20:06:26.456: INFO: creating *v1.Service: provisioning-1947/csi-hostpath-attacher
Jan 11 20:06:26.550: INFO: creating *v1.StatefulSet: provisioning-1947/csi-hostpath-attacher
Jan 11 20:06:26.640: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-1947
Jan 11 20:06:26.731: INFO: creating *v1.Service: provisioning-1947/csi-hostpathplugin
Jan 11 20:06:26.826: INFO: creating *v1.StatefulSet: provisioning-1947/csi-hostpathplugin
Jan 11 20:06:26.916: INFO: creating *v1.Service: provisioning-1947/csi-hostpath-provisioner
Jan 11 20:06:27.012: INFO: creating *v1.StatefulSet: provisioning-1947/csi-hostpath-provisioner
Jan 11 20:06:27.103: INFO: creating *v1.Service: provisioning-1947/csi-hostpath-resizer
Jan 11 20:06:27.196: INFO: creating *v1.StatefulSet: provisioning-1947/csi-hostpath-resizer
Jan 11 20:06:27.286: INFO: creating *v1.Service: provisioning-1947/csi-snapshotter
Jan 11 20:06:27.380: INFO: creating *v1.StatefulSet: provisioning-1947/csi-snapshotter
Jan 11 20:06:27.470: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-1947
Jan 11 20:06:27.560: INFO: Test running for native CSI Driver, not checking metrics
Jan 11 20:06:27.560: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass provisioning-1947-csi-hostpath-provisioning-1947-scwmtsm
STEP: creating a claim
Jan 11 20:06:27.650: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 11 20:06:27.742: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath9v9zp] to have phase Bound
Jan 11 20:06:27.831: INFO: PersistentVolumeClaim csi-hostpath9v9zp found but phase is Pending instead of Bound.
Jan 11 20:06:29.921: INFO: PersistentVolumeClaim csi-hostpath9v9zp found but phase is Pending instead of Bound.
Jan 11 20:06:32.013: INFO: PersistentVolumeClaim csi-hostpath9v9zp found and phase=Bound (4.270899918s)
STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-vhcm
Jan 11 20:06:42.465: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-1947 pod-subpath-test-csi-hostpath-dynamicpv-vhcm --container test-container-volume-csi-hostpath-dynamicpv-vhcm -- /bin/sh -c rm -r /test-volume/provisioning-1947'
Jan 11 20:06:48.793: INFO: stderr: ""
Jan 11 20:06:48.794: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-csi-hostpath-dynamicpv-vhcm
Jan 11 20:06:48.794: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-vhcm" in namespace "provisioning-1947"
Jan 11 20:06:48.885: INFO: Wait up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-vhcm" to be fully deleted
STEP: Deleting pod
Jan 11 20:06:59.064: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-vhcm" in namespace "provisioning-1947"
STEP: Deleting pvc
Jan 11 20:06:59.154: INFO: Deleting PersistentVolumeClaim "csi-hostpath9v9zp"
Jan 11 20:06:59.245: INFO: Waiting up to 5m0s for PersistentVolume pvc-f3d1321b-9800-4cab-b3e5-f5291f6b12dd to get deleted
Jan 11 20:06:59.334: INFO: PersistentVolume pvc-f3d1321b-9800-4cab-b3e5-f5291f6b12dd was removed
STEP: Deleting sc
STEP: uninstalling csi-hostpath driver
Jan 11 20:06:59.426: INFO: deleting *v1.ServiceAccount: provisioning-1947/csi-attacher
Jan 11 20:06:59.517: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-1947
Jan 11 20:06:59.610: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-1947
Jan 11 20:06:59.702: INFO: deleting *v1.Role: provisioning-1947/external-attacher-cfg-provisioning-1947
Jan 11 20:06:59.793: INFO: deleting *v1.RoleBinding: provisioning-1947/csi-attacher-role-cfg
Jan 11 20:06:59.883: INFO: deleting *v1.ServiceAccount: provisioning-1947/csi-provisioner
Jan 11 20:06:59.974: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-1947
Jan 11 20:07:00.066: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-1947
Jan 11 20:07:00.157: INFO: deleting *v1.Role: provisioning-1947/external-provisioner-cfg-provisioning-1947
Jan 11 20:07:00.248: INFO: deleting *v1.RoleBinding: provisioning-1947/csi-provisioner-role-cfg
Jan 11 20:07:00.339: INFO: deleting *v1.ServiceAccount: provisioning-1947/csi-snapshotter
Jan 11 20:07:00.430: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-1947
Jan 11 20:07:00.521: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-1947
Jan 11 20:07:00.612: INFO: deleting *v1.Role: provisioning-1947/external-snapshotter-leaderelection-provisioning-1947
Jan 11 20:07:00.707: INFO: deleting *v1.RoleBinding: provisioning-1947/external-snapshotter-leaderelection
Jan 11 20:07:00.799: INFO: deleting *v1.ServiceAccount: provisioning-1947/csi-resizer
Jan 11 20:07:00.890: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-1947
Jan 11 20:07:00.981: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-1947
Jan 11 20:07:01.073: INFO: deleting *v1.Role: provisioning-1947/external-resizer-cfg-provisioning-1947
Jan 11 20:07:01.163: INFO: deleting *v1.RoleBinding: provisioning-1947/csi-resizer-role-cfg
Jan 11 20:07:01.254: INFO: deleting *v1.Service: provisioning-1947/csi-hostpath-attacher
Jan 11 20:07:01.351: INFO: deleting *v1.StatefulSet: provisioning-1947/csi-hostpath-attacher
Jan 11 20:07:01.443: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-1947
Jan 11 20:07:01.535: INFO: deleting *v1.Service: provisioning-1947/csi-hostpathplugin
Jan 11 20:07:01.630: INFO: deleting *v1.StatefulSet: provisioning-1947/csi-hostpathplugin
Jan 11 20:07:01.721: INFO: deleting *v1.Service: provisioning-1947/csi-hostpath-provisioner
Jan 11 20:07:01.818: INFO: deleting *v1.StatefulSet: provisioning-1947/csi-hostpath-provisioner
Jan 11 20:07:01.909: INFO: deleting *v1.Service: provisioning-1947/csi-hostpath-resizer
Jan 11 20:07:02.006: INFO: deleting *v1.StatefulSet: provisioning-1947/csi-hostpath-resizer
Jan 11 20:07:02.097: INFO: deleting *v1.Service: provisioning-1947/csi-snapshotter
Jan 11 20:07:02.192: INFO: deleting *v1.StatefulSet: provisioning-1947/csi-snapshotter
Jan 11 20:07:02.284: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-1947
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:02.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled
STEP: Destroying namespace "provisioning-1947" for this suite.
Jan 11 20:07:14.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:18.056: INFO: namespace provisioning-1947 deletion completed in 15.588848629s


• [SLOW TEST:54.455 seconds]
[sig-storage] CSI Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should be able to unmount after the subpath directory is deleted
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:425
------------------------------
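
The subPath spec above hinges on the step logged at 20:06:42.465: the framework execs into the running test container, removes the directory backing the subPath (rm -r /test-volume/provisioning-1947), and then verifies that the pod can still be deleted, i.e. that the kubelet can still unmount the volume. Below is a minimal stand-alone sketch of that sequence, driven through kubectl the same way the framework does; the kubeconfig path, namespace, pod and container names are the ones from this particular run and would differ anywhere else.

package main

import (
    "fmt"
    "os/exec"
)

// Sketch only: remove the subPath directory while it is still mounted, then
// delete the pod. Names are copied from the spec above.
func main() {
    kubeconfig := "/tmp/tm/kubeconfig/shoot.config"
    ns := "provisioning-1947"
    pod := "pod-subpath-test-csi-hostpath-dynamicpv-vhcm"
    container := "test-container-volume-csi-hostpath-dynamicpv-vhcm"

    // Step 1: rm -r the directory that backs the subPath inside the container.
    rm := exec.Command("kubectl", "--kubeconfig", kubeconfig, "exec",
        "--namespace", ns, pod, "--container", container, "--",
        "/bin/sh", "-c", "rm -r /test-volume/"+ns)
    if out, err := rm.CombinedOutput(); err != nil {
        fmt.Printf("exec failed: %v\n%s", err, out)
        return
    }

    // Step 2: deleting the pod must still succeed even though the subPath
    // target is gone; a stuck unmount would leave the pod Terminating.
    del := exec.Command("kubectl", "--kubeconfig", kubeconfig,
        "--namespace", ns, "delete", "pod", pod)
    if out, err := del.CombinedOutput(); err != nil {
        fmt.Printf("delete failed: %v\n%s", err, out)
        return
    }
    fmt.Println("subPath directory removed and pod deleted cleanly")
}
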
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:01.121: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2397
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: dir]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Jan 11 20:07:04.298: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2397 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-807cf1e4-11cc-4301-9526-0c3328e79c20'
Jan 11 20:07:05.686: INFO: stderr: ""
Jan 11 20:07:05.686: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 20:07:05.687: INFO: Creating a PV followed by a PVC
Jan 11 20:07:05.866: INFO: Waiting for PV local-pvgpwxb to bind to PVC pvc-fhtsx
Jan 11 20:07:05.866: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-fhtsx] to have phase Bound
Jan 11 20:07:05.955: INFO: PersistentVolumeClaim pvc-fhtsx found and phase=Bound (89.148785ms)
Jan 11 20:07:05.955: INFO: Waiting up to 3m0s for PersistentVolume local-pvgpwxb to have phase Bound
Jan 11 20:07:06.045: INFO: PersistentVolume local-pvgpwxb found and phase=Bound (89.179853ms)
[It] should be able to write from pod1 and read from pod2
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
Jan 11 20:07:08.672: INFO: pod "security-context-18151ffa-54dd-459e-9b3e-82b0fbff3056" created on Node "ip-10-250-27-25.ec2.internal"
STEP: Writing in pod1
Jan 11 20:07:08.672: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2397 security-context-18151ffa-54dd-459e-9b3e-82b0fbff3056 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
Jan 11 20:07:10.005: INFO: stderr: ""
Jan 11 20:07:10.005: INFO: stdout: ""
Jan 11 20:07:10.005: INFO: podRWCmdExec out: "" err: 
Jan 11 20:07:10.006: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2397 security-context-18151ffa-54dd-459e-9b3e-82b0fbff3056 -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 20:07:11.369: INFO: stderr: ""
Jan 11 20:07:11.369: INFO: stdout: "test-file-content\n"
Jan 11 20:07:11.369: INFO: podRWCmdExec out: "test-file-content\n" err: 
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Jan 11 20:07:13.816: INFO: pod "security-context-7f837afc-346a-4131-89ca-3346fcdcb1d6" created on Node "ip-10-250-27-25.ec2.internal"
Jan 11 20:07:13.816: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2397 security-context-7f837afc-346a-4131-89ca-3346fcdcb1d6 -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 20:07:15.155: INFO: stderr: ""
Jan 11 20:07:15.155: INFO: stdout: "test-file-content\n"
Jan 11 20:07:15.155: INFO: podRWCmdExec out: "test-file-content\n" err: 
STEP: Writing in pod2
Jan 11 20:07:15.155: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2397 security-context-7f837afc-346a-4131-89ca-3346fcdcb1d6 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-807cf1e4-11cc-4301-9526-0c3328e79c20 > /mnt/volume1/test-file'
Jan 11 20:07:16.476: INFO: stderr: ""
Jan 11 20:07:16.476: INFO: stdout: ""
Jan 11 20:07:16.476: INFO: podRWCmdExec out: "" err: 
STEP: Reading in pod1
Jan 11 20:07:16.476: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2397 security-context-18151ffa-54dd-459e-9b3e-82b0fbff3056 -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 20:07:17.747: INFO: stderr: ""
Jan 11 20:07:17.748: INFO: stdout: "/tmp/local-volume-test-807cf1e4-11cc-4301-9526-0c3328e79c20\n"
Jan 11 20:07:17.748: INFO: podRWCmdExec out: "/tmp/local-volume-test-807cf1e4-11cc-4301-9526-0c3328e79c20\n" err: 
STEP: Deleting pod1
STEP: Deleting pod security-context-18151ffa-54dd-459e-9b3e-82b0fbff3056 in namespace persistent-local-volumes-test-2397
STEP: Deleting pod2
STEP: Deleting pod security-context-7f837afc-346a-4131-89ca-3346fcdcb1d6 in namespace persistent-local-volumes-test-2397
[AfterEach] [Volume type: dir]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 20:07:17.928: INFO: Deleting PersistentVolumeClaim "pvc-fhtsx"
Jan 11 20:07:18.019: INFO: Deleting PersistentVolume "local-pvgpwxb"
STEP: Removing the test directory
Jan 11 20:07:18.109: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2397 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-807cf1e4-11cc-4301-9526-0c3328e79c20'
Jan 11 20:07:19.421: INFO: stderr: ""
Jan 11 20:07:19.421: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:19.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2397" for this suite.
Jan 11 20:07:25.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:29.172: INFO: namespace persistent-local-volumes-test-2397 deletion completed in 9.570056736s


• [SLOW TEST:28.052 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
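
The local-volume spec above reduces to a simple round trip: pod1 writes a file into the mounted local PV, pod2 on the same node reads it back, and then the direction is reversed. A sketch of that round trip using kubectl exec, mirroring what the framework's podRWCmdExec output lines show; the pod names here are placeholders, since the real spec generates random security-context-* names.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// podSh runs a shell command inside a pod via kubectl exec and returns stdout,
// which is the same mechanism the podRWCmdExec lines in the spec above report.
func podSh(ns, pod, cmd string) (string, error) {
    out, err := exec.Command("kubectl", "--namespace", ns, "exec", pod, "--",
        "/bin/sh", "-c", cmd).Output()
    return string(out), err
}

func main() {
    // Hypothetical pod names; the namespace is the one used by this run.
    ns, pod1, pod2 := "persistent-local-volumes-test-2397", "pod1", "pod2"

    // Write from pod1 into the shared local volume.
    if _, err := podSh(ns, pod1, "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file"); err != nil {
        panic(err)
    }
    // Read the same file from pod2; both pods mount the same local PV on one node.
    got, err := podSh(ns, pod2, "cat /mnt/volume1/test-file")
    if err != nil {
        panic(err)
    }
    fmt.Println("pod2 read:", strings.TrimSpace(got)) // expected: test-file-content
}
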
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:01.160: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-5812
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:110
[BeforeEach] NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:127
STEP: creating nfs-server pod
STEP: locating the "nfs-server" server pod
Jan 11 20:07:04.417: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs nfs-server nfs-server --namespace=pv-5812'
Jan 11 20:07:05.079: INFO: stderr: ""
Jan 11 20:07:05.079: INFO: stdout: "Serving /exports\nrpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused\nStarting rpcbind\nNFS started\n"
Jan 11 20:07:05.079: INFO: nfs server pod IP address: 100.64.1.121
[It] create a PVC and a pre-bound PV: test write access
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:185
STEP: Creating a PVC followed by a pre-bound PV
STEP: Validating the PV-PVC binding
Jan 11 20:07:05.260: INFO: Waiting for PV nfs-v8dtj to bind to PVC pvc-qhhc2
Jan 11 20:07:05.260: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qhhc2] to have phase Bound
Jan 11 20:07:05.350: INFO: PersistentVolumeClaim pvc-qhhc2 found but phase is Pending instead of Bound.
Jan 11 20:07:07.440: INFO: PersistentVolumeClaim pvc-qhhc2 found and phase=Bound (2.180367251s)
Jan 11 20:07:07.440: INFO: Waiting up to 3m0s for PersistentVolume nfs-v8dtj to have phase Bound
Jan 11 20:07:07.530: INFO: PersistentVolume nfs-v8dtj found and phase=Bound (89.955151ms)
STEP: Checking pod has write access to PersistentVolume
Jan 11 20:07:07.710: INFO: Creating nfs test pod
STEP: Pod should terminate with exitcode 0 (success)
Jan 11 20:07:07.801: INFO: Waiting up to 5m0s for pod "pvc-tester-j2wdd" in namespace "pv-5812" to be "success or failure"
Jan 11 20:07:07.891: INFO: Pod "pvc-tester-j2wdd": Phase="Pending", Reason="", readiness=false. Elapsed: 90.108364ms
Jan 11 20:07:09.981: INFO: Pod "pvc-tester-j2wdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180349578s
STEP: Saw pod success
Jan 11 20:07:09.981: INFO: Pod "pvc-tester-j2wdd" satisfied condition "success or failure"
Jan 11 20:07:09.981: INFO: Pod pvc-tester-j2wdd succeeded 
Jan 11 20:07:09.981: INFO: Deleting pod "pvc-tester-j2wdd" in namespace "pv-5812"
Jan 11 20:07:10.074: INFO: Wait up to 5m0s for pod "pvc-tester-j2wdd" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jan 11 20:07:10.163: INFO: Deleting PVC pvc-qhhc2 to trigger reclamation of PV 
Jan 11 20:07:10.163: INFO: Deleting PersistentVolumeClaim "pvc-qhhc2"
Jan 11 20:07:10.254: INFO: Waiting for reclaim process to complete.
Jan 11 20:07:10.254: INFO: Waiting up to 3m0s for PersistentVolume nfs-v8dtj to have phase Released
Jan 11 20:07:10.344: INFO: PersistentVolume nfs-v8dtj found and phase=Released (89.707894ms)
Jan 11 20:07:10.434: INFO: PV nfs-v8dtj now in "Released" phase
[AfterEach] with Single PV - PVC pairs
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
Jan 11 20:07:10.434: INFO: AfterEach: Cleaning up test resources.
Jan 11 20:07:10.434: INFO: Deleting PersistentVolumeClaim "pvc-qhhc2"
Jan 11 20:07:10.524: INFO: Deleting PersistentVolume "nfs-v8dtj"
[AfterEach] NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:147
Jan 11 20:07:10.614: INFO: Deleting pod "nfs-server" in namespace "pv-5812"
Jan 11 20:07:10.705: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted
[AfterEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:24.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-5812" for this suite.
Jan 11 20:07:33.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:36.562: INFO: namespace pv-5812 deletion completed in 11.585802219s


• [SLOW TEST:35.402 seconds]
[sig-storage] PersistentVolumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:120
    with Single PV - PVC pairs
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:153
      create a PVC and a pre-bound PV: test write access
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:185
------------------------------
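
Every spec in this block spends a few seconds in "Waiting up to ... for PersistentVolumeClaims [...] to have phase Bound": the framework simply polls the claim's status until it reports Bound or the timeout expires. A rough equivalent of that wait, done with kubectl and jsonpath instead of the framework's client; the claim name and namespace are the ones from the NFS spec above.

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// waitForPVCBound polls a claim's .status.phase until it is Bound, the same
// loop the "found but phase is Pending instead of Bound" lines above come from.
func waitForPVCBound(ns, pvc string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        out, err := exec.Command("kubectl", "--namespace", ns, "get", "pvc", pvc,
            "-o", "jsonpath={.status.phase}").Output()
        if err == nil && strings.TrimSpace(string(out)) == "Bound" {
            return nil
        }
        time.Sleep(2 * time.Second) // the framework polls on a similar interval
    }
    return fmt.Errorf("pvc %s/%s not Bound within %v", ns, pvc, timeout)
}

func main() {
    if err := waitForPVCBound("pv-5812", "pvc-qhhc2", 3*time.Minute); err != nil {
        panic(err)
    }
    fmt.Println("claim is Bound")
}
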
SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:08.305: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pv
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-2695
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:110
[BeforeEach] NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:127
STEP: creating nfs-server pod
STEP: locating the "nfs-server" server pod
Jan 11 20:07:11.406: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs nfs-server nfs-server --namespace=pv-2695'
Jan 11 20:07:11.937: INFO: stderr: ""
Jan 11 20:07:11.937: INFO: stdout: "Serving /exports\nrpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused\nStarting rpcbind\nNFS started\n"
Jan 11 20:07:11.937: INFO: nfs server pod IP address: 100.64.0.131
[BeforeEach] when invoking the Recycle reclaim policy
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:264
Jan 11 20:07:11.938: INFO: Creating a PV followed by a PVC
Jan 11 20:07:12.116: INFO: Waiting for PV nfs-ddr4b to bind to PVC pvc-z6t8v
Jan 11 20:07:12.116: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-z6t8v] to have phase Bound
Jan 11 20:07:12.205: INFO: PersistentVolumeClaim pvc-z6t8v found and phase=Bound (89.271074ms)
Jan 11 20:07:12.205: INFO: Waiting up to 3m0s for PersistentVolume nfs-ddr4b to have phase Bound
Jan 11 20:07:12.295: INFO: PersistentVolume nfs-ddr4b found and phase=Bound (89.088433ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:281
STEP: Writing to the volume.
Jan 11 20:07:12.563: INFO: Waiting up to 5m0s for pod "pvc-tester-mjwk9" in namespace "pv-2695" to be "success or failure"
Jan 11 20:07:12.651: INFO: Pod "pvc-tester-mjwk9": Phase="Pending", Reason="", readiness=false. Elapsed: 88.821222ms
Jan 11 20:07:14.742: INFO: Pod "pvc-tester-mjwk9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178924247s
STEP: Saw pod success
Jan 11 20:07:14.742: INFO: Pod "pvc-tester-mjwk9" satisfied condition "success or failure"
STEP: Deleting the claim
Jan 11 20:07:14.742: INFO: Deleting pod "pvc-tester-mjwk9" in namespace "pv-2695"
Jan 11 20:07:14.834: INFO: Wait up to 5m0s for pod "pvc-tester-mjwk9" to be fully deleted
Jan 11 20:07:14.923: INFO: Deleting PVC pvc-z6t8v to trigger reclamation of PV 
Jan 11 20:07:14.923: INFO: Deleting PersistentVolumeClaim "pvc-z6t8v"
Jan 11 20:07:15.013: INFO: Waiting for reclaim process to complete.
Jan 11 20:07:15.013: INFO: Waiting up to 3m0s for PersistentVolume nfs-ddr4b to have phase Available
Jan 11 20:07:15.102: INFO: PersistentVolume nfs-ddr4b found but phase is Released instead of Available.
Jan 11 20:07:17.191: INFO: PersistentVolume nfs-ddr4b found but phase is Released instead of Available.
Jan 11 20:07:19.281: INFO: PersistentVolume nfs-ddr4b found and phase=Available (4.26789007s)
Jan 11 20:07:19.370: INFO: PV nfs-ddr4b now in "Available" phase
STEP: Re-mounting the volume.
Jan 11 20:07:19.460: INFO: Waiting up to 1m0s for PersistentVolumeClaims [pvc-n2q62] to have phase Bound
Jan 11 20:07:19.549: INFO: PersistentVolumeClaim pvc-n2q62 found and phase=Bound (89.01969ms)
STEP: Verifying the mount has been cleaned.
Jan 11 20:07:19.639: INFO: Waiting up to 5m0s for pod "pvc-tester-whrch" in namespace "pv-2695" to be "success or failure"
Jan 11 20:07:19.728: INFO: Pod "pvc-tester-whrch": Phase="Pending", Reason="", readiness=false. Elapsed: 89.087926ms
Jan 11 20:07:21.818: INFO: Pod "pvc-tester-whrch": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178516547s
STEP: Saw pod success
Jan 11 20:07:21.818: INFO: Pod "pvc-tester-whrch" satisfied condition "success or failure"
Jan 11 20:07:21.818: INFO: Deleting pod "pvc-tester-whrch" in namespace "pv-2695"
Jan 11 20:07:21.910: INFO: Wait up to 5m0s for pod "pvc-tester-whrch" to be fully deleted
Jan 11 20:07:21.999: INFO: Pod exited without failure; the volume has been recycled.
[AfterEach] when invoking the Recycle reclaim policy
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:271
Jan 11 20:07:21.999: INFO: AfterEach: Cleaning up test resources.
Jan 11 20:07:21.999: INFO: Deleting PersistentVolumeClaim "pvc-n2q62"
Jan 11 20:07:22.089: INFO: Deleting PersistentVolume "nfs-ddr4b"
[AfterEach] NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:147
Jan 11 20:07:22.180: INFO: Deleting pod "nfs-server" in namespace "pv-2695"
Jan 11 20:07:22.271: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted
[AfterEach] [sig-storage] PersistentVolumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:38.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-2695" for this suite.
Jan 11 20:07:44.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:48.101: INFO: namespace pv-2695 deletion completed in 9.561758906s


• [SLOW TEST:39.796 seconds]
[sig-storage] PersistentVolumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  NFS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:120
    when invoking the Recycle reclaim policy
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:263
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:281
------------------------------
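
The Recycle spec above checks that, once the claim is deleted, the PV passes through Released and comes back as Available after the recycler has scrubbed the NFS export, and that a fresh claim then sees an empty volume. A sketch of the delete-and-wait half of that check; the PVC and PV names are the ones from this run.

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// Sketch: delete the claim, then watch the PersistentVolume go
// Released -> Available once the recycler pod has finished.
func main() {
    ns, pvc, pv := "pv-2695", "pvc-z6t8v", "nfs-ddr4b"

    if out, err := exec.Command("kubectl", "--namespace", ns, "delete", "pvc", pvc).CombinedOutput(); err != nil {
        panic(fmt.Sprintf("delete pvc: %v\n%s", err, out))
    }

    deadline := time.Now().Add(3 * time.Minute)
    for time.Now().Before(deadline) {
        out, err := exec.Command("kubectl", "get", "pv", pv,
            "-o", "jsonpath={.status.phase}").Output()
        phase := strings.TrimSpace(string(out))
        if err == nil && phase == "Available" {
            fmt.Println("PV recycled and Available again")
            return
        }
        time.Sleep(2 * time.Second) // usually passes through the Released phase first
    }
    panic("PV did not return to Available in time")
}
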
S
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:18.060: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-8985
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:259
[It] should let an external dynamic provisioner create and delete persistent volumes [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:719
STEP: creating an external dynamic provisioner pod
STEP: locating the provisioner pod
STEP: creating a StorageClass
Jan 11 20:07:33.522: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: creating a claim with a external provisioning annotation
STEP: creating a StorageClass volume-provisioning-8985-external
STEP: creating a claim
Jan 11 20:07:33.792: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-74gvm] to have phase Bound
Jan 11 20:07:33.882: INFO: PersistentVolumeClaim pvc-74gvm found but phase is Pending instead of Bound.
Jan 11 20:07:35.973: INFO: PersistentVolumeClaim pvc-74gvm found but phase is Pending instead of Bound.
Jan 11 20:07:38.062: INFO: PersistentVolumeClaim pvc-74gvm found and phase=Bound (4.270632622s)
STEP: checking the claim
STEP: checking the PV
STEP: deleting claim "volume-provisioning-8985"/"pvc-74gvm"
STEP: deleting the claim's PV "pvc-ff4a7290-da6e-4c36-af98-21bc36968a9f"
Jan 11 20:07:38.332: INFO: Waiting up to 20m0s for PersistentVolume pvc-ff4a7290-da6e-4c36-af98-21bc36968a9f to get deleted
Jan 11 20:07:38.422: INFO: PersistentVolume pvc-ff4a7290-da6e-4c36-af98-21bc36968a9f was removed
Jan 11 20:07:38.422: INFO: deleting claim "volume-provisioning-8985"/"pvc-74gvm"
Jan 11 20:07:38.511: INFO: deleting storage class volume-provisioning-8985-external
STEP: Deleting pod external-provisioner-b9klg in namespace volume-provisioning-8985
[AfterEach] [sig-storage] Dynamic Provisioning
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:38.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-8985" for this suite.
Jan 11 20:07:47.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:50.370: INFO: namespace volume-provisioning-8985 deletion completed in 11.585344931s


• [SLOW TEST:32.311 seconds]
[sig-storage] Dynamic Provisioning
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  DynamicProvisioner External
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:718
    should let an external dynamic provisioner create and delete persistent volumes [Slow]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:719
------------------------------
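
The external-provisioner spec above boils down to: run a provisioner pod, create a StorageClass whose provisioner field names it, create a claim using that class, and watch the claim receive a dynamically provisioned PV. A sketch of the StorageClass-plus-claim half, applied via kubectl; the provisioner name example.com/nfs is hypothetical, whereas the real spec registers a per-test name and runs its own provisioner pod.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// StorageClass + PVC pair for an out-of-tree provisioner. Until a provisioner
// watching for "example.com/nfs" claims exists, the PVC simply stays Pending.
const manifests = `
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: external-demo
provisioner: example.com/nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-external-demo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: external-demo
  resources:
    requests:
      storage: 1Gi
`

func main() {
    apply := exec.Command("kubectl", "apply", "-f", "-")
    apply.Stdin = strings.NewReader(manifests)
    out, err := apply.CombinedOutput()
    if err != nil {
        panic(fmt.Sprintf("apply: %v\n%s", err, out))
    }
    fmt.Printf("%s", out)
    // The claim becomes Bound only after the external provisioner creates a PV
    // for it; deleting the claim afterwards lets the provisioner delete the PV,
    // which is what the spec above verifies.
}
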
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:29.176: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-1209
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: dir-link-bindmounted]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Jan 11 20:07:32.261: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1209 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-00760048-5400-42f6-b32b-d4a72b112790-backend && mount --bind /tmp/local-volume-test-00760048-5400-42f6-b32b-d4a72b112790-backend /tmp/local-volume-test-00760048-5400-42f6-b32b-d4a72b112790-backend && ln -s /tmp/local-volume-test-00760048-5400-42f6-b32b-d4a72b112790-backend /tmp/local-volume-test-00760048-5400-42f6-b32b-d4a72b112790'
Jan 11 20:07:33.568: INFO: stderr: ""
Jan 11 20:07:33.568: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 20:07:33.568: INFO: Creating a PV followed by a PVC
Jan 11 20:07:33.748: INFO: Waiting for PV local-pvhkw6f to bind to PVC pvc-8hqdz
Jan 11 20:07:33.748: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8hqdz] to have phase Bound
Jan 11 20:07:33.837: INFO: PersistentVolumeClaim pvc-8hqdz found and phase=Bound (89.3047ms)
Jan 11 20:07:33.837: INFO: Waiting up to 3m0s for PersistentVolume local-pvhkw6f to have phase Bound
Jan 11 20:07:33.926: INFO: PersistentVolume local-pvhkw6f found and phase=Bound (89.200757ms)
[It] should be able to write from pod1 and read from pod2
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
STEP: Creating pod1
STEP: Creating a pod
Jan 11 20:07:36.555: INFO: pod "security-context-171bbcfd-dd53-483b-b756-23d3e4c31632" created on Node "ip-10-250-27-25.ec2.internal"
STEP: Writing in pod1
Jan 11 20:07:36.555: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1209 security-context-171bbcfd-dd53-483b-b756-23d3e4c31632 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
Jan 11 20:07:37.882: INFO: stderr: ""
Jan 11 20:07:37.882: INFO: stdout: ""
Jan 11 20:07:37.882: INFO: podRWCmdExec out: "" err: 
Jan 11 20:07:37.882: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1209 security-context-171bbcfd-dd53-483b-b756-23d3e4c31632 -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 20:07:39.213: INFO: stderr: ""
Jan 11 20:07:39.213: INFO: stdout: "test-file-content\n"
Jan 11 20:07:39.213: INFO: podRWCmdExec out: "test-file-content\n" err: 
STEP: Deleting pod1
STEP: Deleting pod security-context-171bbcfd-dd53-483b-b756-23d3e4c31632 in namespace persistent-local-volumes-test-1209
STEP: Creating pod2
STEP: Creating a pod
Jan 11 20:07:41.751: INFO: pod "security-context-61a9485b-989d-48dc-bb52-09cece61a7e0" created on Node "ip-10-250-27-25.ec2.internal"
STEP: Reading in pod2
Jan 11 20:07:41.751: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1209 security-context-61a9485b-989d-48dc-bb52-09cece61a7e0 -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 20:07:43.049: INFO: stderr: ""
Jan 11 20:07:43.049: INFO: stdout: "test-file-content\n"
Jan 11 20:07:43.049: INFO: podRWCmdExec out: "test-file-content\n" err: 
STEP: Deleting pod2
STEP: Deleting pod security-context-61a9485b-989d-48dc-bb52-09cece61a7e0 in namespace persistent-local-volumes-test-1209
[AfterEach] [Volume type: dir-link-bindmounted]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 20:07:43.140: INFO: Deleting PersistentVolumeClaim "pvc-8hqdz"
Jan 11 20:07:43.230: INFO: Deleting PersistentVolume "local-pvhkw6f"
STEP: Removing the test directory
Jan 11 20:07:43.321: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1209 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-00760048-5400-42f6-b32b-d4a72b112790 && umount /tmp/local-volume-test-00760048-5400-42f6-b32b-d4a72b112790-backend && rm -r /tmp/local-volume-test-00760048-5400-42f6-b32b-d4a72b112790-backend'
Jan 11 20:07:44.570: INFO: stderr: ""
Jan 11 20:07:44.570: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:44.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1209" for this suite.
Jan 11 20:07:51.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:54.365: INFO: namespace persistent-local-volumes-test-1209 deletion completed in 9.613597129s


• [SLOW TEST:25.189 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link-bindmounted]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
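
For the [Volume type: dir-link-bindmounted] spec above, the host-side setup at 20:07:32.261 creates a backing directory, bind-mounts it onto itself, and exposes it through a symlink; the local PV then points at the symlink and is pinned to the node with nodeAffinity. A sketch of what such a PV object looks like when applied via kubectl; the path and hostname are taken from this run, while the PV name and storage class are made up.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// A local PersistentVolume whose local.path is the symlink created on the
// node; nodeAffinity is mandatory for local volumes so consuming pods are
// only scheduled onto that node.
const localPV = `
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-demo
  local:
    path: /tmp/local-volume-test-00760048-5400-42f6-b32b-d4a72b112790
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["ip-10-250-27-25.ec2.internal"]
`

func main() {
    apply := exec.Command("kubectl", "apply", "-f", "-")
    apply.Stdin = strings.NewReader(localPV)
    if out, err := apply.CombinedOutput(); err != nil {
        panic(fmt.Sprintf("apply: %v\n%s", err, out))
    }
    fmt.Println("local PV created; bind it with a matching PVC as in the spec above")
}
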
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:85
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:06:28.090: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename volume-expand
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-expand-8205
STEP: Waiting for a default service account to be provisioned in namespace
[It] Verify if offline PVC expansion works
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:154
STEP: deploying csi-hostpath driver
Jan 11 20:06:28.941: INFO: creating *v1.ServiceAccount: volume-expand-8205/csi-attacher
Jan 11 20:06:29.032: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-expand-8205
Jan 11 20:06:29.032: INFO: Define cluster role external-attacher-runner-volume-expand-8205
Jan 11 20:06:29.122: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-8205
Jan 11 20:06:29.212: INFO: creating *v1.Role: volume-expand-8205/external-attacher-cfg-volume-expand-8205
Jan 11 20:06:29.302: INFO: creating *v1.RoleBinding: volume-expand-8205/csi-attacher-role-cfg
Jan 11 20:06:29.392: INFO: creating *v1.ServiceAccount: volume-expand-8205/csi-provisioner
Jan 11 20:06:29.482: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-expand-8205
Jan 11 20:06:29.482: INFO: Define cluster role external-provisioner-runner-volume-expand-8205
Jan 11 20:06:29.572: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-8205
Jan 11 20:06:29.662: INFO: creating *v1.Role: volume-expand-8205/external-provisioner-cfg-volume-expand-8205
Jan 11 20:06:29.753: INFO: creating *v1.RoleBinding: volume-expand-8205/csi-provisioner-role-cfg
Jan 11 20:06:29.843: INFO: creating *v1.ServiceAccount: volume-expand-8205/csi-snapshotter
Jan 11 20:06:29.933: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volume-expand-8205
Jan 11 20:06:29.933: INFO: Define cluster role external-snapshotter-runner-volume-expand-8205
Jan 11 20:06:30.025: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-8205
Jan 11 20:06:30.115: INFO: creating *v1.Role: volume-expand-8205/external-snapshotter-leaderelection-volume-expand-8205
Jan 11 20:06:30.205: INFO: creating *v1.RoleBinding: volume-expand-8205/external-snapshotter-leaderelection
Jan 11 20:06:30.295: INFO: creating *v1.ServiceAccount: volume-expand-8205/csi-resizer
Jan 11 20:06:30.385: INFO: creating *v1.ClusterRole: external-resizer-runner-volume-expand-8205
Jan 11 20:06:30.385: INFO: Define cluster role external-resizer-runner-volume-expand-8205
Jan 11 20:06:30.475: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-8205
Jan 11 20:06:30.565: INFO: creating *v1.Role: volume-expand-8205/external-resizer-cfg-volume-expand-8205
Jan 11 20:06:30.655: INFO: creating *v1.RoleBinding: volume-expand-8205/csi-resizer-role-cfg
Jan 11 20:06:30.746: INFO: creating *v1.Service: volume-expand-8205/csi-hostpath-attacher
Jan 11 20:06:30.840: INFO: creating *v1.StatefulSet: volume-expand-8205/csi-hostpath-attacher
Jan 11 20:06:30.931: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volume-expand-8205
Jan 11 20:06:31.022: INFO: creating *v1.Service: volume-expand-8205/csi-hostpathplugin
Jan 11 20:06:31.116: INFO: creating *v1.StatefulSet: volume-expand-8205/csi-hostpathplugin
Jan 11 20:06:31.207: INFO: creating *v1.Service: volume-expand-8205/csi-hostpath-provisioner
Jan 11 20:06:31.301: INFO: creating *v1.StatefulSet: volume-expand-8205/csi-hostpath-provisioner
Jan 11 20:06:31.391: INFO: creating *v1.Service: volume-expand-8205/csi-hostpath-resizer
Jan 11 20:06:31.486: INFO: creating *v1.StatefulSet: volume-expand-8205/csi-hostpath-resizer
Jan 11 20:06:31.576: INFO: creating *v1.Service: volume-expand-8205/csi-snapshotter
Jan 11 20:06:31.670: INFO: creating *v1.StatefulSet: volume-expand-8205/csi-snapshotter
Jan 11 20:06:31.761: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-8205
Jan 11 20:06:31.850: INFO: Test running for native CSI Driver, not checking metrics
Jan 11 20:06:31.850: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass volume-expand-8205-csi-hostpath-volume-expand-8205-scqk5jx
STEP: creating a claim
Jan 11 20:06:31.940: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 11 20:06:32.103: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathc7msn] to have phase Bound
Jan 11 20:06:32.195: INFO: PersistentVolumeClaim csi-hostpathc7msn found but phase is Pending instead of Bound.
Jan 11 20:06:34.285: INFO: PersistentVolumeClaim csi-hostpathc7msn found but phase is Pending instead of Bound.
Jan 11 20:06:36.375: INFO: PersistentVolumeClaim csi-hostpathc7msn found but phase is Pending instead of Bound.
Jan 11 20:06:38.465: INFO: PersistentVolumeClaim csi-hostpathc7msn found but phase is Pending instead of Bound.
Jan 11 20:06:40.555: INFO: PersistentVolumeClaim csi-hostpathc7msn found but phase is Pending instead of Bound.
Jan 11 20:06:42.645: INFO: PersistentVolumeClaim csi-hostpathc7msn found but phase is Pending instead of Bound.
Jan 11 20:06:44.735: INFO: PersistentVolumeClaim csi-hostpathc7msn found but phase is Pending instead of Bound.
Jan 11 20:06:46.825: INFO: PersistentVolumeClaim csi-hostpathc7msn found but phase is Pending instead of Bound.
Jan 11 20:06:48.915: INFO: PersistentVolumeClaim csi-hostpathc7msn found and phase=Bound (16.811966349s)
STEP: Creating a pod with dynamically provisioned volume
STEP: Deleting the previously created pod
Jan 11 20:06:59.457: INFO: Deleting pod "security-context-03492313-bba8-4afb-b756-03ad4eee7195" in namespace "volume-expand-8205"
Jan 11 20:06:59.547: INFO: Wait up to 5m0s for pod "security-context-03492313-bba8-4afb-b756-03ad4eee7195" to be fully deleted
STEP: Expanding current pvc
Jan 11 20:07:15.727: INFO: currentPvcSize {{5368709120 0} {} 5Gi BinarySI}, newSize {{6442450944 0} {}  BinarySI}
STEP: Waiting for cloudprovider resize to finish
STEP: Checking for conditions on pvc
STEP: Creating a new pod with same volume
STEP: Waiting for file system resize to finish
Jan 11 20:07:34.540: INFO: Deleting pod "security-context-9a74e8cb-5c55-459a-b3c9-ed993436908a" in namespace "volume-expand-8205"
Jan 11 20:07:34.631: INFO: Wait up to 5m0s for pod "security-context-9a74e8cb-5c55-459a-b3c9-ed993436908a" to be fully deleted
Jan 11 20:07:44.811: INFO: Deleting pod "security-context-03492313-bba8-4afb-b756-03ad4eee7195" in namespace "volume-expand-8205"
STEP: Deleting pod
Jan 11 20:07:44.901: INFO: Deleting pod "security-context-03492313-bba8-4afb-b756-03ad4eee7195" in namespace "volume-expand-8205"
STEP: Deleting pod2
Jan 11 20:07:44.991: INFO: Deleting pod "security-context-9a74e8cb-5c55-459a-b3c9-ed993436908a" in namespace "volume-expand-8205"
STEP: Deleting pvc
Jan 11 20:07:45.080: INFO: Deleting PersistentVolumeClaim "csi-hostpathc7msn"
Jan 11 20:07:45.172: INFO: Waiting up to 5m0s for PersistentVolume pvc-7f94b80f-2e45-425b-a397-1cf25811637d to get deleted
Jan 11 20:07:45.261: INFO: PersistentVolume pvc-7f94b80f-2e45-425b-a397-1cf25811637d was removed
STEP: Deleting sc
STEP: uninstalling csi-hostpath driver
Jan 11 20:07:45.353: INFO: deleting *v1.ServiceAccount: volume-expand-8205/csi-attacher
Jan 11 20:07:45.444: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-8205
Jan 11 20:07:45.536: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-8205
Jan 11 20:07:45.627: INFO: deleting *v1.Role: volume-expand-8205/external-attacher-cfg-volume-expand-8205
Jan 11 20:07:45.719: INFO: deleting *v1.RoleBinding: volume-expand-8205/csi-attacher-role-cfg
Jan 11 20:07:45.810: INFO: deleting *v1.ServiceAccount: volume-expand-8205/csi-provisioner
Jan 11 20:07:45.902: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-expand-8205
Jan 11 20:07:45.993: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-8205
Jan 11 20:07:46.094: INFO: deleting *v1.Role: volume-expand-8205/external-provisioner-cfg-volume-expand-8205
Jan 11 20:07:46.186: INFO: deleting *v1.RoleBinding: volume-expand-8205/csi-provisioner-role-cfg
Jan 11 20:07:46.278: INFO: deleting *v1.ServiceAccount: volume-expand-8205/csi-snapshotter
Jan 11 20:07:46.370: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volume-expand-8205
Jan 11 20:07:46.461: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-8205
Jan 11 20:07:46.552: INFO: deleting *v1.Role: volume-expand-8205/external-snapshotter-leaderelection-volume-expand-8205
Jan 11 20:07:46.644: INFO: deleting *v1.RoleBinding: volume-expand-8205/external-snapshotter-leaderelection
Jan 11 20:07:46.735: INFO: deleting *v1.ServiceAccount: volume-expand-8205/csi-resizer
Jan 11 20:07:46.826: INFO: deleting *v1.ClusterRole: external-resizer-runner-volume-expand-8205
Jan 11 20:07:46.918: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-8205
Jan 11 20:07:47.009: INFO: deleting *v1.Role: volume-expand-8205/external-resizer-cfg-volume-expand-8205
Jan 11 20:07:47.101: INFO: deleting *v1.RoleBinding: volume-expand-8205/csi-resizer-role-cfg
Jan 11 20:07:47.192: INFO: deleting *v1.Service: volume-expand-8205/csi-hostpath-attacher
Jan 11 20:07:47.289: INFO: deleting *v1.StatefulSet: volume-expand-8205/csi-hostpath-attacher
Jan 11 20:07:47.381: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volume-expand-8205
Jan 11 20:07:47.475: INFO: deleting *v1.Service: volume-expand-8205/csi-hostpathplugin
Jan 11 20:07:47.571: INFO: deleting *v1.StatefulSet: volume-expand-8205/csi-hostpathplugin
Jan 11 20:07:47.663: INFO: deleting *v1.Service: volume-expand-8205/csi-hostpath-provisioner
Jan 11 20:07:47.760: INFO: deleting *v1.StatefulSet: volume-expand-8205/csi-hostpath-provisioner
Jan 11 20:07:47.852: INFO: deleting *v1.Service: volume-expand-8205/csi-hostpath-resizer
Jan 11 20:07:47.949: INFO: deleting *v1.StatefulSet: volume-expand-8205/csi-hostpath-resizer
Jan 11 20:07:48.040: INFO: deleting *v1.Service: volume-expand-8205/csi-snapshotter
Jan 11 20:07:48.135: INFO: deleting *v1.StatefulSet: volume-expand-8205/csi-snapshotter
Jan 11 20:07:48.226: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-8205
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:48.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-expand-8205" for this suite.
Jan 11 20:07:54.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:58.052: INFO: namespace volume-expand-8205 deletion completed in 9.643163137s


• [SLOW TEST:89.962 seconds]
[sig-storage] CSI Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      Verify if offline PVC expansion works
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:154
------------------------------
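
The "Expanding current pvc" step above (5Gi to 6Gi) is nothing more than raising spec.resources.requests.storage on the claim; because the StorageClass in this testpattern allows volume expansion, the external resizer grows the backing volume, and for this offline case the filesystem resize only completes once a pod mounts the volume again, which is why the spec creates a second pod before checking. A sketch of that patch plus a wait on the reported capacity; the names and sizes are the ones from this run.

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func main() {
    ns, pvc := "volume-expand-8205", "csi-hostpathc7msn"

    // Raise the requested size on the claim; this is the only user-visible step.
    patch := `{"spec":{"resources":{"requests":{"storage":"6Gi"}}}}`
    if out, err := exec.Command("kubectl", "--namespace", ns,
        "patch", "pvc", pvc, "-p", patch).CombinedOutput(); err != nil {
        panic(fmt.Sprintf("patch: %v\n%s", err, out))
    }

    // Wait until status.capacity reflects the new size (rendered here as "6Gi").
    // For offline expansion this finishes only after a pod mounts the volume again.
    deadline := time.Now().Add(10 * time.Minute)
    for time.Now().Before(deadline) {
        out, _ := exec.Command("kubectl", "--namespace", ns, "get", "pvc", pvc,
            "-o", "jsonpath={.status.capacity.storage}").Output()
        if strings.TrimSpace(string(out)) == "6Gi" {
            fmt.Println("volume and filesystem resized")
            return
        }
        time.Sleep(5 * time.Second)
    }
    panic("resize did not finish in time")
}
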
SSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:48.104: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5829
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap that has name configmap-test-emptyKey-127a155b-e541-45a5-bba6-303571cb4949
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:48.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5829" for this suite.
Jan 11 20:07:55.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:07:58.513: INFO: namespace configmap-5829 deletion completed in 9.565978501s


• [SLOW TEST:10.409 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
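
The ConfigMap spec above is a pure negative test: the API server must refuse a ConfigMap whose data map contains an empty key. A sketch of the same check through kubectl; the object name is made up, and the interesting part is that the apply is expected to fail.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// A ConfigMap with an empty data key; API server validation rejects it.
const badConfigMap = `
apiVersion: v1
kind: ConfigMap
metadata:
  name: empty-key-demo
data:
  "": "value"
`

func main() {
    apply := exec.Command("kubectl", "apply", "-f", "-")
    apply.Stdin = strings.NewReader(badConfigMap)
    out, err := apply.CombinedOutput()
    if err == nil {
        panic("expected the ConfigMap with an empty key to be rejected")
    }
    fmt.Printf("rejected as expected: %s", out)
}
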
SS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:12.385: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-1268
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:15.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1268" for this suite.
Jan 11 20:07:57.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:01.167: INFO: namespace kubelet-test-1268 deletion completed in 45.57155582s


• [SLOW TEST:48.782 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
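
The Kubelet spec above checks that entries from pod.spec.hostAliases end up in the container's /etc/hosts. A sketch of a pod that makes the effect visible by printing /etc/hosts as its only command; the pod name, IP and hostnames are illustrative.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// Pod whose only job is to print its own /etc/hosts; the kubelet is expected
// to have appended the hostAliases entries below.
const hostAliasPod = `
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]
`

func main() {
    apply := exec.Command("kubectl", "apply", "-f", "-")
    apply.Stdin = strings.NewReader(hostAliasPod)
    if out, err := apply.CombinedOutput(); err != nil {
        panic(fmt.Sprintf("apply: %v\n%s", err, out))
    }
    // After the pod completes, its log should contain a line like
    // "127.0.0.1    foo.local    bar.local".
    fmt.Println("check with: kubectl logs hostaliases-demo")
}
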
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:50.373: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3222
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 11 20:07:51.145: INFO: Waiting up to 5m0s for pod "pod-a05a5eea-efe7-467c-9da5-d9f4ba3af414" in namespace "emptydir-3222" to be "success or failure"
Jan 11 20:07:51.234: INFO: Pod "pod-a05a5eea-efe7-467c-9da5-d9f4ba3af414": Phase="Pending", Reason="", readiness=false. Elapsed: 89.219814ms
Jan 11 20:07:53.332: INFO: Pod "pod-a05a5eea-efe7-467c-9da5-d9f4ba3af414": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.186673223s
STEP: Saw pod success
Jan 11 20:07:53.332: INFO: Pod "pod-a05a5eea-efe7-467c-9da5-d9f4ba3af414" satisfied condition "success or failure"
Jan 11 20:07:53.422: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-a05a5eea-efe7-467c-9da5-d9f4ba3af414 container test-container: 
STEP: delete the pod
Jan 11 20:07:53.616: INFO: Waiting for pod pod-a05a5eea-efe7-467c-9da5-d9f4ba3af414 to disappear
Jan 11 20:07:53.707: INFO: Pod pod-a05a5eea-efe7-467c-9da5-d9f4ba3af414 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:53.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3222" for this suite.
Jan 11 20:08:00.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:03.541: INFO: namespace emptydir-3222 deletion completed in 9.743267801s


• [SLOW TEST:13.168 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
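
The EmptyDir spec above mounts an emptyDir with medium: Memory (i.e. tmpfs) and asserts that the mount has the expected default mode. A sketch of a pod that surfaces the same information by printing the mount entry and the octal mode of the mount point; the names are illustrative, and the printed mode can be compared against what the conformance test expects.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// Pod with a tmpfs-backed emptyDir; the container prints the mount entry and
// the permissions of the mount point, then exits.
const tmpfsPod = `
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "mount | grep /test-volume; stat -c %a /test-volume"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory
`

func main() {
    apply := exec.Command("kubectl", "apply", "-f", "-")
    apply.Stdin = strings.NewReader(tmpfsPod)
    if out, err := apply.CombinedOutput(); err != nil {
        panic(fmt.Sprintf("apply: %v\n%s", err, out))
    }
    // Once the pod has completed, "kubectl logs emptydir-tmpfs-demo" shows the
    // tmpfs mount line and the mode of /test-volume.
    fmt.Println("check with: kubectl logs emptydir-tmpfs-demo")
}
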
S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:58.059: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4236
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-map-785c34a0-f42d-45ec-8e8f-9cc7a551b23f
STEP: Creating a pod to test consume configMaps
Jan 11 20:07:58.887: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3d00d66f-7262-441f-af1a-96a8e2745ec5" in namespace "projected-4236" to be "success or failure"
Jan 11 20:07:58.977: INFO: Pod "pod-projected-configmaps-3d00d66f-7262-441f-af1a-96a8e2745ec5": Phase="Pending", Reason="", readiness=false. Elapsed: 90.004926ms
Jan 11 20:08:01.067: INFO: Pod "pod-projected-configmaps-3d00d66f-7262-441f-af1a-96a8e2745ec5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180223425s
STEP: Saw pod success
Jan 11 20:08:01.067: INFO: Pod "pod-projected-configmaps-3d00d66f-7262-441f-af1a-96a8e2745ec5" satisfied condition "success or failure"
Jan 11 20:08:01.157: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-configmaps-3d00d66f-7262-441f-af1a-96a8e2745ec5 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 11 20:08:01.345: INFO: Waiting for pod pod-projected-configmaps-3d00d66f-7262-441f-af1a-96a8e2745ec5 to disappear
Jan 11 20:08:01.435: INFO: Pod pod-projected-configmaps-3d00d66f-7262-441f-af1a-96a8e2745ec5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:01.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4236" for this suite.
Jan 11 20:08:07.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:11.125: INFO: namespace projected-4236 deletion completed in 9.598243194s


• [SLOW TEST:13.066 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
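
A hedged sketch of the pattern verified here: a ConfigMap key exposed through a projected volume under a remapped path. Names are illustrative, not the ones generated by the suite:

kubectl create configmap projected-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-demo
          items:
          - key: data-1
            path: path/to/data-1    # the "mapping": the key appears under a custom relative path
EOF
kubectl logs projected-configmap-demo    # expected to print value-1
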
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:01.210: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1426
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Jan 11 20:08:01.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70c95ae6-c32c-4be2-abe3-024d846a259b" in namespace "downward-api-1426" to be "success or failure"
Jan 11 20:08:02.097: INFO: Pod "downwardapi-volume-70c95ae6-c32c-4be2-abe3-024d846a259b": Phase="Pending", Reason="", readiness=false. Elapsed: 139.296744ms
Jan 11 20:08:04.187: INFO: Pod "downwardapi-volume-70c95ae6-c32c-4be2-abe3-024d846a259b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.228422043s
STEP: Saw pod success
Jan 11 20:08:04.187: INFO: Pod "downwardapi-volume-70c95ae6-c32c-4be2-abe3-024d846a259b" satisfied condition "success or failure"
Jan 11 20:08:04.276: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-70c95ae6-c32c-4be2-abe3-024d846a259b container client-container: 
STEP: delete the pod
Jan 11 20:08:04.466: INFO: Waiting for pod downwardapi-volume-70c95ae6-c32c-4be2-abe3-024d846a259b to disappear
Jan 11 20:08:04.554: INFO: Pod downwardapi-volume-70c95ae6-c32c-4be2-abe3-024d846a259b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:04.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1426" for this suite.
Jan 11 20:08:10.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:14.213: INFO: namespace downward-api-1426 deletion completed in 9.568404585s


• [SLOW TEST:13.003 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
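
For context, the downward API volume item that makes a container's own CPU limit readable as a file looks roughly like this; names and the busybox image are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m            # the file reports the limit in units of this divisor (here: 500)
EOF
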
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:10.850: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-4432
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 11 20:07:16.213: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:16.213: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:17.108: INFO: Exec stderr: ""
Jan 11 20:07:17.108: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:17.108: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:17.973: INFO: Exec stderr: ""
Jan 11 20:07:17.973: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:17.973: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:18.827: INFO: Exec stderr: ""
Jan 11 20:07:18.827: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:18.827: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:19.726: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 11 20:07:19.726: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:19.726: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:20.547: INFO: Exec stderr: ""
Jan 11 20:07:20.547: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:20.547: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:21.418: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 11 20:07:21.418: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:21.418: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:22.293: INFO: Exec stderr: ""
Jan 11 20:07:22.293: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:22.293: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:23.130: INFO: Exec stderr: ""
Jan 11 20:07:23.130: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:23.130: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:23.997: INFO: Exec stderr: ""
Jan 11 20:07:23.998: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4432 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:07:23.998: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:07:24.871: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:07:24.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4432" for this suite.
Jan 11 20:08:11.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:14.555: INFO: namespace e2e-kubelet-etc-hosts-4432 deletion completed in 49.591249255s


• [SLOW TEST:63.705 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
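
The assertions above reduce to comparing /etc/hosts against the copy the test mounts at /etc/hosts-original. Outside the framework, the same checks can be reproduced with kubectl exec against the two pods created in this run (a kubelet-managed file carries the "# Kubernetes-managed hosts file" header; with hostNetwork=true the container sees the node's own file):

kubectl -n e2e-kubelet-etc-hosts-4432 exec test-pod -c busybox-1 -- cat /etc/hosts
kubectl -n e2e-kubelet-etc-hosts-4432 exec test-pod -c busybox-3 -- cat /etc/hosts              # busybox-3 mounts its own /etc/hosts, so it is not kubelet-managed
kubectl -n e2e-kubelet-etc-hosts-4432 exec test-host-network-pod -c busybox-1 -- cat /etc/hosts
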
SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:54.372: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5247
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261
Jan 11 20:07:55.052: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path
Jan 11 20:07:55.143: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-hostpath-qxfj
STEP: Checking for subpath error in container status
Jan 11 20:07:59.417: INFO: Deleting pod "pod-subpath-test-hostpath-qxfj" in namespace "provisioning-5247"
Jan 11 20:07:59.507: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-qxfj" to be fully deleted
STEP: Deleting pod
Jan 11 20:08:05.685: INFO: Deleting pod "pod-subpath-test-hostpath-qxfj" in namespace "provisioning-5247"
Jan 11 20:08:05.775: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:05.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5247" for this suite.
Jan 11 20:08:12.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:15.441: INFO: namespace provisioning-5247 deletion completed in 9.575276229s


• [SLOW TEST:21.069 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should fail if subpath with backstepping is outside the volume [Slow]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:11.139: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-7671
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test substitution in container's args
Jan 11 20:08:12.045: INFO: Waiting up to 5m0s for pod "var-expansion-b058333d-debc-485e-8bd2-7984bd8862c1" in namespace "var-expansion-7671" to be "success or failure"
Jan 11 20:08:12.135: INFO: Pod "var-expansion-b058333d-debc-485e-8bd2-7984bd8862c1": Phase="Pending", Reason="", readiness=false. Elapsed: 90.267162ms
Jan 11 20:08:14.225: INFO: Pod "var-expansion-b058333d-debc-485e-8bd2-7984bd8862c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180306484s
STEP: Saw pod success
Jan 11 20:08:14.225: INFO: Pod "var-expansion-b058333d-debc-485e-8bd2-7984bd8862c1" satisfied condition "success or failure"
Jan 11 20:08:14.315: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod var-expansion-b058333d-debc-485e-8bd2-7984bd8862c1 container dapi-container: 
STEP: delete the pod
Jan 11 20:08:14.506: INFO: Waiting for pod var-expansion-b058333d-debc-485e-8bd2-7984bd8862c1 to disappear
Jan 11 20:08:14.595: INFO: Pod var-expansion-b058333d-debc-485e-8bd2-7984bd8862c1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:14.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7671" for this suite.
Jan 11 20:08:20.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:24.279: INFO: namespace var-expansion-7671 deletion completed in 9.592189876s


• [SLOW TEST:13.140 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
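
A minimal sketch of the substitution being tested; the $(VAR) reference in args is expanded by Kubernetes from the container's env before the container starts, not by a shell. Names are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "substituted into args"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]     # $(MESSAGE) is replaced with the env value by the kubelet
EOF
kubectl logs var-expansion-demo   # prints: substituted into args
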
SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:58.518: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-6843
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: dir-link-bindmounted]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Jan 11 20:08:01.899: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6843 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ae1fb825-13bf-4766-8da7-4e269cac5d10-backend && mount --bind /tmp/local-volume-test-ae1fb825-13bf-4766-8da7-4e269cac5d10-backend /tmp/local-volume-test-ae1fb825-13bf-4766-8da7-4e269cac5d10-backend && ln -s /tmp/local-volume-test-ae1fb825-13bf-4766-8da7-4e269cac5d10-backend /tmp/local-volume-test-ae1fb825-13bf-4766-8da7-4e269cac5d10'
Jan 11 20:08:03.207: INFO: stderr: ""
Jan 11 20:08:03.207: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 20:08:03.207: INFO: Creating a PV followed by a PVC
Jan 11 20:08:03.386: INFO: Waiting for PV local-pv6lmqz to bind to PVC pvc-49kzc
Jan 11 20:08:03.386: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-49kzc] to have phase Bound
Jan 11 20:08:03.475: INFO: PersistentVolumeClaim pvc-49kzc found and phase=Bound (88.959447ms)
Jan 11 20:08:03.475: INFO: Waiting up to 3m0s for PersistentVolume local-pv6lmqz to have phase Bound
Jan 11 20:08:03.564: INFO: PersistentVolume local-pv6lmqz found and phase=Bound (89.169281ms)
[BeforeEach] One pod requesting one prebound PVC
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Jan 11 20:08:06.190: INFO: pod "security-context-51ec0cf6-feda-40f4-8080-5c2ad2e30494" created on Node "ip-10-250-27-25.ec2.internal"
STEP: Writing in pod1
Jan 11 20:08:06.190: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6843 security-context-51ec0cf6-feda-40f4-8080-5c2ad2e30494 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
Jan 11 20:08:07.740: INFO: stderr: ""
Jan 11 20:08:07.740: INFO: stdout: ""
Jan 11 20:08:07.740: INFO: podRWCmdExec out: "" err: 
[It] should be able to mount volume and write from pod1
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
Jan 11 20:08:07.740: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6843 security-context-51ec0cf6-feda-40f4-8080-5c2ad2e30494 -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 20:08:09.090: INFO: stderr: ""
Jan 11 20:08:09.090: INFO: stdout: "test-file-content\n"
Jan 11 20:08:09.090: INFO: podRWCmdExec out: "test-file-content\n" err: 
STEP: Writing in pod1
Jan 11 20:08:09.090: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6843 security-context-51ec0cf6-feda-40f4-8080-5c2ad2e30494 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ae1fb825-13bf-4766-8da7-4e269cac5d10 > /mnt/volume1/test-file'
Jan 11 20:08:10.456: INFO: stderr: ""
Jan 11 20:08:10.456: INFO: stdout: ""
Jan 11 20:08:10.456: INFO: podRWCmdExec out: "" err: 
[AfterEach] One pod requesting one prebound PVC
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod security-context-51ec0cf6-feda-40f4-8080-5c2ad2e30494 in namespace persistent-local-volumes-test-6843
[AfterEach] [Volume type: dir-link-bindmounted]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 20:08:10.547: INFO: Deleting PersistentVolumeClaim "pvc-49kzc"
Jan 11 20:08:10.637: INFO: Deleting PersistentVolume "local-pv6lmqz"
STEP: Removing the test directory
Jan 11 20:08:10.727: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6843 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-ae1fb825-13bf-4766-8da7-4e269cac5d10 && umount /tmp/local-volume-test-ae1fb825-13bf-4766-8da7-4e269cac5d10-backend && rm -r /tmp/local-volume-test-ae1fb825-13bf-4766-8da7-4e269cac5d10-backend'
Jan 11 20:08:12.066: INFO: stderr: ""
Jan 11 20:08:12.066: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:12.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6843" for this suite.
Jan 11 20:08:24.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:27.823: INFO: namespace persistent-local-volumes-test-6843 deletion completed in 15.575712615s


• [SLOW TEST:29.306 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link-bindmounted]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
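
The node-side layout for this volume type is exactly the command sequence logged above (a bind-mounted backend directory reached through a symlink). The cluster-side half is a local PersistentVolume pinned to that node plus a matching claim; a hedged sketch with illustrative names, the path standing in for the generated /tmp/local-volume-test-... symlink:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-demo
  local:
    path: /tmp/local-volume-demo            # the symlink created on the node
  nodeAffinity:                             # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["ip-10-250-27-25.ec2.internal"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-demo
  resources:
    requests:
      storage: 1Gi
EOF
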
SSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:15.466: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-5603
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5603.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5603.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5603.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5603.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5603.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5603.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 20:08:19.670: INFO: DNS probes using dns-5603/dns-test-f21b9f31-73ae-40a7-99e1-51beb15d0e68 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:19.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5603" for this suite.
Jan 11 20:08:28.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:31.551: INFO: namespace dns-5603 deletion completed in 11.603147763s


• [SLOW TEST:16.085 seconds]
[sig-network] DNS
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
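
What the probe scripts above check is the per-pod hostname record that appears when a pod sets hostname/subdomain and is backed by a headless service of the same name. A rough reproduction with an illustrative image (the suite uses its own DNS utility images) and the default namespace:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None                  # headless
  selector:
    dns-demo: "true"
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    dns-demo: "true"
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2    # must match the headless service name
  containers:
  - name: querier
    image: busybox:1.29
    command: ["sleep", "3600"]
EOF
kubectl exec dns-querier-2 -- nslookup dns-querier-2.dns-test-service-2.default.svc.cluster.local
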
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:14.215: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6752
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540
[It] should create a deployment from an image  [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 11 20:08:14.854: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-6752'
Jan 11 20:08:15.325: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 11 20:08:15.325: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
Jan 11 20:08:17.506: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6752'
Jan 11 20:08:18.052: INFO: stderr: ""
Jan 11 20:08:18.052: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:18.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6752" for this suite.
Jan 11 20:08:30.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:33.922: INFO: namespace kubectl-6752 deletion completed in 15.778627839s


• [SLOW TEST:19.707 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1536
    should create a deployment from an image  [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
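
The generator used above is the deprecated path called out in kubectl's own warning; the non-deprecated equivalent it suggests would be along these lines:

kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6752
kubectl get deployment,pods --namespace=kubectl-6752        # verify the deployment and its pod exist
kubectl delete deployment e2e-test-httpd-deployment --namespace=kubectl-6752
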
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:03.544: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-8445
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177
STEP: deploying csi-hostpath driver
Jan 11 20:08:04.374: INFO: creating *v1.ServiceAccount: provisioning-8445/csi-attacher
Jan 11 20:08:04.464: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-8445
Jan 11 20:08:04.464: INFO: Define cluster role external-attacher-runner-provisioning-8445
Jan 11 20:08:04.553: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-8445
Jan 11 20:08:04.643: INFO: creating *v1.Role: provisioning-8445/external-attacher-cfg-provisioning-8445
Jan 11 20:08:04.733: INFO: creating *v1.RoleBinding: provisioning-8445/csi-attacher-role-cfg
Jan 11 20:08:04.823: INFO: creating *v1.ServiceAccount: provisioning-8445/csi-provisioner
Jan 11 20:08:04.913: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-8445
Jan 11 20:08:04.913: INFO: Define cluster role external-provisioner-runner-provisioning-8445
Jan 11 20:08:05.003: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-8445
Jan 11 20:08:05.093: INFO: creating *v1.Role: provisioning-8445/external-provisioner-cfg-provisioning-8445
Jan 11 20:08:05.185: INFO: creating *v1.RoleBinding: provisioning-8445/csi-provisioner-role-cfg
Jan 11 20:08:05.275: INFO: creating *v1.ServiceAccount: provisioning-8445/csi-snapshotter
Jan 11 20:08:05.365: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-8445
Jan 11 20:08:05.365: INFO: Define cluster role external-snapshotter-runner-provisioning-8445
Jan 11 20:08:05.455: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-8445
Jan 11 20:08:05.544: INFO: creating *v1.Role: provisioning-8445/external-snapshotter-leaderelection-provisioning-8445
Jan 11 20:08:05.634: INFO: creating *v1.RoleBinding: provisioning-8445/external-snapshotter-leaderelection
Jan 11 20:08:05.724: INFO: creating *v1.ServiceAccount: provisioning-8445/csi-resizer
Jan 11 20:08:05.814: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-8445
Jan 11 20:08:05.814: INFO: Define cluster role external-resizer-runner-provisioning-8445
Jan 11 20:08:05.904: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-8445
Jan 11 20:08:05.995: INFO: creating *v1.Role: provisioning-8445/external-resizer-cfg-provisioning-8445
Jan 11 20:08:06.085: INFO: creating *v1.RoleBinding: provisioning-8445/csi-resizer-role-cfg
Jan 11 20:08:06.175: INFO: creating *v1.Service: provisioning-8445/csi-hostpath-attacher
Jan 11 20:08:06.269: INFO: creating *v1.StatefulSet: provisioning-8445/csi-hostpath-attacher
Jan 11 20:08:06.360: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-8445
Jan 11 20:08:06.450: INFO: creating *v1.Service: provisioning-8445/csi-hostpathplugin
Jan 11 20:08:06.543: INFO: creating *v1.StatefulSet: provisioning-8445/csi-hostpathplugin
Jan 11 20:08:06.634: INFO: creating *v1.Service: provisioning-8445/csi-hostpath-provisioner
Jan 11 20:08:06.728: INFO: creating *v1.StatefulSet: provisioning-8445/csi-hostpath-provisioner
Jan 11 20:08:06.818: INFO: creating *v1.Service: provisioning-8445/csi-hostpath-resizer
Jan 11 20:08:06.913: INFO: creating *v1.StatefulSet: provisioning-8445/csi-hostpath-resizer
Jan 11 20:08:07.004: INFO: creating *v1.Service: provisioning-8445/csi-snapshotter
Jan 11 20:08:07.097: INFO: creating *v1.StatefulSet: provisioning-8445/csi-snapshotter
Jan 11 20:08:07.189: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-8445
Jan 11 20:08:07.279: INFO: Test running for native CSI Driver, not checking metrics
Jan 11 20:08:07.279: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass provisioning-8445-csi-hostpath-provisioning-8445-sczx5qg
STEP: creating a claim
Jan 11 20:08:07.368: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 11 20:08:07.460: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathqvln7] to have phase Bound
Jan 11 20:08:07.550: INFO: PersistentVolumeClaim csi-hostpathqvln7 found but phase is Pending instead of Bound.
Jan 11 20:08:09.641: INFO: PersistentVolumeClaim csi-hostpathqvln7 found and phase=Bound (2.181390013s)
STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-ps7d
STEP: Creating a pod to test subpath
Jan 11 20:08:09.913: INFO: Waiting up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d" in namespace "provisioning-8445" to be "success or failure"
Jan 11 20:08:10.003: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d": Phase="Pending", Reason="", readiness=false. Elapsed: 89.715579ms
Jan 11 20:08:12.093: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18011038s
Jan 11 20:08:14.185: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271673275s
Jan 11 20:08:16.275: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361806706s
Jan 11 20:08:18.364: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.451501212s
Jan 11 20:08:20.455: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.5417537s
STEP: Saw pod success
Jan 11 20:08:20.455: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d" satisfied condition "success or failure"
Jan 11 20:08:20.545: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-csi-hostpath-dynamicpv-ps7d container test-container-volume-csi-hostpath-dynamicpv-ps7d: 
STEP: delete the pod
Jan 11 20:08:20.788: INFO: Waiting for pod pod-subpath-test-csi-hostpath-dynamicpv-ps7d to disappear
Jan 11 20:08:20.878: INFO: Pod pod-subpath-test-csi-hostpath-dynamicpv-ps7d no longer exists
STEP: Deleting pod pod-subpath-test-csi-hostpath-dynamicpv-ps7d
Jan 11 20:08:20.879: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d" in namespace "provisioning-8445"
STEP: Deleting pod
Jan 11 20:08:20.968: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-ps7d" in namespace "provisioning-8445"
STEP: Deleting pvc
Jan 11 20:08:21.058: INFO: Deleting PersistentVolumeClaim "csi-hostpathqvln7"
Jan 11 20:08:21.152: INFO: Waiting up to 5m0s for PersistentVolume pvc-74abe337-be7d-4cbe-83d5-7dac3dec2aff to get deleted
Jan 11 20:08:21.241: INFO: PersistentVolume pvc-74abe337-be7d-4cbe-83d5-7dac3dec2aff was removed
STEP: Deleting sc
STEP: uninstalling csi-hostpath driver
Jan 11 20:08:21.333: INFO: deleting *v1.ServiceAccount: provisioning-8445/csi-attacher
Jan 11 20:08:21.425: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-8445
Jan 11 20:08:21.516: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-8445
Jan 11 20:08:21.607: INFO: deleting *v1.Role: provisioning-8445/external-attacher-cfg-provisioning-8445
Jan 11 20:08:21.699: INFO: deleting *v1.RoleBinding: provisioning-8445/csi-attacher-role-cfg
Jan 11 20:08:21.791: INFO: deleting *v1.ServiceAccount: provisioning-8445/csi-provisioner
Jan 11 20:08:21.882: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-8445
Jan 11 20:08:21.979: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-8445
Jan 11 20:08:22.070: INFO: deleting *v1.Role: provisioning-8445/external-provisioner-cfg-provisioning-8445
Jan 11 20:08:22.161: INFO: deleting *v1.RoleBinding: provisioning-8445/csi-provisioner-role-cfg
Jan 11 20:08:22.253: INFO: deleting *v1.ServiceAccount: provisioning-8445/csi-snapshotter
Jan 11 20:08:22.345: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-8445
Jan 11 20:08:22.436: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-8445
Jan 11 20:08:22.527: INFO: deleting *v1.Role: provisioning-8445/external-snapshotter-leaderelection-provisioning-8445
Jan 11 20:08:22.619: INFO: deleting *v1.RoleBinding: provisioning-8445/external-snapshotter-leaderelection
Jan 11 20:08:22.710: INFO: deleting *v1.ServiceAccount: provisioning-8445/csi-resizer
Jan 11 20:08:22.802: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-8445
Jan 11 20:08:22.893: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-8445
Jan 11 20:08:22.984: INFO: deleting *v1.Role: provisioning-8445/external-resizer-cfg-provisioning-8445
Jan 11 20:08:23.075: INFO: deleting *v1.RoleBinding: provisioning-8445/csi-resizer-role-cfg
Jan 11 20:08:23.167: INFO: deleting *v1.Service: provisioning-8445/csi-hostpath-attacher
Jan 11 20:08:23.263: INFO: deleting *v1.StatefulSet: provisioning-8445/csi-hostpath-attacher
Jan 11 20:08:23.355: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-8445
Jan 11 20:08:23.446: INFO: deleting *v1.Service: provisioning-8445/csi-hostpathplugin
Jan 11 20:08:23.544: INFO: deleting *v1.StatefulSet: provisioning-8445/csi-hostpathplugin
Jan 11 20:08:23.636: INFO: deleting *v1.Service: provisioning-8445/csi-hostpath-provisioner
Jan 11 20:08:23.732: INFO: deleting *v1.StatefulSet: provisioning-8445/csi-hostpath-provisioner
Jan 11 20:08:23.824: INFO: deleting *v1.Service: provisioning-8445/csi-hostpath-resizer
Jan 11 20:08:23.919: INFO: deleting *v1.StatefulSet: provisioning-8445/csi-hostpath-resizer
Jan 11 20:08:24.010: INFO: deleting *v1.Service: provisioning-8445/csi-snapshotter
Jan 11 20:08:24.106: INFO: deleting *v1.StatefulSet: provisioning-8445/csi-snapshotter
Jan 11 20:08:24.197: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-8445
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:24.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8445" for this suite.
Jan 11 20:08:36.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:40.280: INFO: namespace provisioning-8445 deletion completed in 15.900322814s


• [SLOW TEST:36.736 seconds]
[sig-storage] CSI Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support non-existent path
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177
------------------------------
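
Once the csi-hostpath driver and its StorageClass are in place, the behaviour under test is that a subPath which does not yet exist inside the dynamically provisioned volume is created on demand. A hedged sketch; the StorageClass name is a placeholder for the generated provisioning-8445-csi-hostpath-... class and the other names are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-hostpath-demo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc        # placeholder for the generated class
  resources:
    requests:
      storage: 1Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/probe"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
      subPath: non-existent/dir            # created inside the volume by the kubelet
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: csi-hostpath-demo
EOF
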
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:40.282: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename metrics-grabber
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in metrics-grabber-6805
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:36
W0111 20:08:41.014609    8632 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[It] should grab all metrics from a ControllerManager.
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:82
STEP: Proxying to Pod through the API server
Jan 11 20:08:41.105: INFO: Master is node api.Registry. Skipping testing ControllerManager metrics.
[AfterEach] [sig-instrumentation] MetricsGrabber
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:41.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-6805" for this suite.
Jan 11 20:08:47.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:51.149: INFO: namespace metrics-grabber-6805 deletion completed in 9.949909368s


• [SLOW TEST:10.867 seconds]
[sig-instrumentation] MetricsGrabber
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23
  should grab all metrics from a ControllerManager.
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:82
------------------------------
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:03:18.273: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6216
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:386
STEP: Creating secret with name s-test-opt-create-49821551-eb32-4d27-a990-9d9df135120a
STEP: Creating the pod
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:19.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6216" for this suite.
Jan 11 20:08:49.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:53.492: INFO: namespace secrets-6216 deletion completed in 33.947832741s


• [SLOW TEST:335.219 seconds]
[sig-storage] Secrets
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:386
------------------------------
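
A sketch of the failure mode this spec waits for: a non-optional secret volume whose items reference a key the secret does not contain, so volume setup fails and the pod never starts. Names are illustrative:

kubectl create secret generic s-test-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-missing-key-demo
spec:
  restartPolicy: Never
  containers:
  - name: creates-volume-test
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-demo
      optional: false                  # non-optional: missing keys are an error
      items:
      - key: does-not-exist            # not present in the secret
        path: data/does-not-exist
EOF
kubectl get pod secret-missing-key-demo     # stays in ContainerCreating; events show the mount failure
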
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:24.289: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9223
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 20:08:26.263: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 20:08:29.634: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:31.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9223" for this suite.
Jan 11 20:08:37.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:41.513: INFO: namespace webhook-9223 deletion completed in 9.911650716s
STEP: Destroying namespace "webhook-9223-markers" for this suite.
Jan 11 20:08:49.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:08:53.493: INFO: namespace webhook-9223-markers deletion completed in 11.980621554s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:29.567 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
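
Outside the framework, the listing and collection-delete steps map onto the admissionregistration resources directly; the label selector below is a placeholder for whatever label the suite stamps on its webhook configurations:

kubectl get mutatingwebhookconfigurations
kubectl delete mutatingwebhookconfigurations -l e2e-webhook-run=webhook-9223    # delete the collection by (placeholder) label selector
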
SS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:53.495: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-6934
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 11 20:08:54.514: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6934 /api/v1/namespaces/watch-6934/configmaps/e2e-watch-test-watch-closed cc3b5c6a-472a-4f51-abfb-9acf0007c337 70613 0 2020-01-11 20:08:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 11 20:08:54.514: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6934 /api/v1/namespaces/watch-6934/configmaps/e2e-watch-test-watch-closed cc3b5c6a-472a-4f51-abfb-9acf0007c337 70614 0 2020-01-11 20:08:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 11 20:08:54.880: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6934 /api/v1/namespaces/watch-6934/configmaps/e2e-watch-test-watch-closed cc3b5c6a-472a-4f51-abfb-9acf0007c337 70616 0 2020-01-11 20:08:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 11 20:08:54.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6934 /api/v1/namespaces/watch-6934/configmaps/e2e-watch-test-watch-closed cc3b5c6a-472a-4f51-abfb-9acf0007c337 70619 0 2020-01-11 20:08:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:54.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6934" for this suite.
Jan 11 20:09:01.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:04.622: INFO: namespace watch-6934 deletion completed in 9.650461887s


• [SLOW TEST:11.127 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
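
At the API level, the restart amounts to opening a new watch from the resourceVersion observed before the first watch was closed; using the values from this run, the raw request looks roughly like:

kubectl get --raw "/api/v1/namespaces/watch-6934/configmaps?watch=true&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted&resourceVersion=70614"

The stream then replays the MODIFIED (mutation: 2) and DELETED events recorded above.
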
SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:27.829: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-94
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 11 20:08:28.468: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:08:33.791: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:59.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-94" for this suite.
Jan 11 20:09:07.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:11.097: INFO: namespace crd-publish-openapi-94 deletion completed in 11.583691102s


• [SLOW TEST:43.268 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
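The CustomResourcePublishOpenAPI cases verify that CRD schemas are published into the apiserver's aggregated OpenAPI document. A rough, hedged way to inspect that yourself with the Python kubernetes client is to fetch /openapi/v2 and scan the definitions; the group name "example.com" below is a placeholder, not the group used by the test:

```python
# Sketch: list OpenAPI definitions published for a CRD group (placeholder "example.com").
import json
from kubernetes import client, config

config.load_kube_config()
api = client.ApiClient()

# Raw GET against the aggregated OpenAPI v2 endpoint served by kube-apiserver.
resp = api.call_api(
    "/openapi/v2", "GET",
    auth_settings=["BearerToken"],
    _preload_content=False,
    _return_http_data_only=True,
)
doc = json.loads(resp.data)
matches = [name for name in doc.get("definitions", {}) if "example.com" in name]
print("\n".join(matches) or "no definitions found for example.com")
```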
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:07:36.568: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9477
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:37.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9477" for this suite.
Jan 11 20:09:07.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:11.154: INFO: namespace container-probe-9477 deletion completed in 33.604978573s


• [SLOW TEST:94.586 seconds]
[k8s.io] Probing container
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
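The container-probe case above relies on a readiness probe that always fails: the pod stays Running but never becomes Ready, and the kubelet must not restart it (restarts are driven by liveness probes). A hedged Python-client sketch of such a pod, with made-up names:

```python
# Sketch: a pod whose readiness probe always fails -> Running, never Ready, no restarts.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="never-ready-demo"),  # illustrative name
    spec=client.V1PodSpec(
        restart_policy="Always",
        containers=[client.V1Container(
            name="busybox",
            image="busybox",
            command=["sh", "-c", "sleep 3600"],
            # `cat /tmp/ready` fails because the file is never created,
            # so the readiness probe never succeeds.
            readiness_probe=client.V1Probe(
                _exec=client.V1ExecAction(command=["cat", "/tmp/ready"]),
                initial_delay_seconds=5,
                period_seconds=5,
                failure_threshold=3,
            ),
        )],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```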
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:53.861: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-2789
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
Jan 11 20:08:54.513: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path
Jan 11 20:08:54.605: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-hostpath-4v5j
STEP: Creating a pod to test subpath
Jan 11 20:08:54.699: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpath-4v5j" in namespace "provisioning-2789" to be "success or failure"
Jan 11 20:08:54.789: INFO: Pod "pod-subpath-test-hostpath-4v5j": Phase="Pending", Reason="", readiness=false. Elapsed: 89.865985ms
Jan 11 20:08:56.882: INFO: Pod "pod-subpath-test-hostpath-4v5j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183010931s
Jan 11 20:08:58.972: INFO: Pod "pod-subpath-test-hostpath-4v5j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.273272248s
STEP: Saw pod success
Jan 11 20:08:58.972: INFO: Pod "pod-subpath-test-hostpath-4v5j" satisfied condition "success or failure"
Jan 11 20:08:59.062: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-hostpath-4v5j container test-container-subpath-hostpath-4v5j: 
STEP: delete the pod
Jan 11 20:08:59.253: INFO: Waiting for pod pod-subpath-test-hostpath-4v5j to disappear
Jan 11 20:08:59.343: INFO: Pod pod-subpath-test-hostpath-4v5j no longer exists
STEP: Deleting pod pod-subpath-test-hostpath-4v5j
Jan 11 20:08:59.343: INFO: Deleting pod "pod-subpath-test-hostpath-4v5j" in namespace "provisioning-2789"
STEP: Deleting pod
Jan 11 20:08:59.433: INFO: Deleting pod "pod-subpath-test-hostpath-4v5j" in namespace "provisioning-2789"
Jan 11 20:08:59.523: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:08:59.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2789" for this suite.
Jan 11 20:09:07.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:11.221: INFO: namespace provisioning-2789 deletion completed in 11.606235189s


• [SLOW TEST:17.360 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support existing single file [LinuxOnly]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
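The subPath run above mounts a single existing file from an inline volume into the test container. Roughly, that corresponds to a volumeMount whose subPath points at one file inside the volume rather than the whole directory, as in this hedged Python-client sketch (volume names and paths are illustrative):

```python
# Sketch: mount one file out of a hostPath volume via subPath (illustrative paths).
from kubernetes import client

container = client.V1Container(
    name="test",
    image="busybox",
    command=["sh", "-c", "cat /mnt/test/file.txt"],
    volume_mounts=[client.V1VolumeMount(
        name="vol",
        mount_path="/mnt/test/file.txt",  # where the single file appears in the container
        sub_path="file.txt",              # the file inside the volume, not the whole dir
    )],
)

pod_spec = client.V1PodSpec(
    restart_policy="Never",
    containers=[container],
    volumes=[client.V1Volume(
        name="vol",
        host_path=client.V1HostPathVolumeSource(path="/tmp/subpath-demo"),
    )],
)
```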
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:33.924: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-7571
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 11 20:08:34.564: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:08:40.038: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:03.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7571" for this suite.
Jan 11 20:09:10.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:13.449: INFO: namespace crd-publish-openapi-7571 deletion completed in 9.570223396s


• [SLOW TEST:39.525 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:31.552: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-5211
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-5211
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 11 20:08:32.193: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 11 20:08:57.732: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.64.0.142 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5211 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:08:57.732: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:08:59.601: INFO: Found all expected endpoints: [netserver-0]
Jan 11 20:08:59.690: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.64.1.145 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5211 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 20:08:59.690: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
Jan 11 20:09:01.537: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:01.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5211" for this suite.
Jan 11 20:09:13.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:17.207: INFO: namespace pod-network-test-5211 deletion completed in 15.579350183s


• [SLOW TEST:45.655 seconds]
[sig-network] Networking
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
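The Granular Checks above verify node-to-pod UDP reachability by sending "hostName" to port 8081 of each netserver pod and expecting the pod's name back. Outside the e2e framework the same probe is just a UDP round trip, sketched here with plain sockets (the pod IP is a placeholder):

```python
# Sketch: the UDP round trip the networking test performs with `nc -u`.
import socket

def udp_probe(pod_ip: str, port: int = 8081, payload: bytes = b"hostName", timeout: float = 1.0) -> str:
    """Send one datagram and return the reply (the agnhost netserver answers with its hostname)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(payload, (pod_ip, port))
        data, _ = s.recvfrom(4096)
    return data.decode().strip()

# print(udp_probe("100.64.0.142"))  # placeholder pod IP, in the spirit of the run above
```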
SSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:09:11.101: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3085
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-0d4031a4-ce94-48bc-a15f-2fe6b18d8069
STEP: Creating a pod to test consume configMaps
Jan 11 20:09:12.030: INFO: Waiting up to 5m0s for pod "pod-configmaps-b179593d-5cec-4a04-bf6c-2fe42824932d" in namespace "configmap-3085" to be "success or failure"
Jan 11 20:09:12.125: INFO: Pod "pod-configmaps-b179593d-5cec-4a04-bf6c-2fe42824932d": Phase="Pending", Reason="", readiness=false. Elapsed: 94.826726ms
Jan 11 20:09:14.214: INFO: Pod "pod-configmaps-b179593d-5cec-4a04-bf6c-2fe42824932d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.184519658s
STEP: Saw pod success
Jan 11 20:09:14.214: INFO: Pod "pod-configmaps-b179593d-5cec-4a04-bf6c-2fe42824932d" satisfied condition "success or failure"
Jan 11 20:09:14.304: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-b179593d-5cec-4a04-bf6c-2fe42824932d container configmap-volume-test: 
STEP: delete the pod
Jan 11 20:09:14.497: INFO: Waiting for pod pod-configmaps-b179593d-5cec-4a04-bf6c-2fe42824932d to disappear
Jan 11 20:09:14.586: INFO: Pod pod-configmaps-b179593d-5cec-4a04-bf6c-2fe42824932d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:14.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3085" for this suite.
Jan 11 20:09:20.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:24.254: INFO: namespace configmap-3085 deletion completed in 9.57689069s


• [SLOW TEST:13.153 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
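The ConfigMap volume case checks that files projected from a ConfigMap get the requested defaultMode. A hedged Python-client sketch of the same shape (ConfigMap, pod, and key names are made up):

```python
# Sketch: consume a ConfigMap as a volume with an explicit defaultMode (0400).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
ns = "default"  # illustrative namespace

v1.create_namespaced_config_map(ns, client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="cm-mode-demo"),
    data={"data-1": "value-1"},
))

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="pod-cm-mode-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="test",
            image="busybox",
            command=["sh", "-c", "ls -l /etc/cm && cat /etc/cm/data-1"],
            volume_mounts=[client.V1VolumeMount(name="cm", mount_path="/etc/cm")],
        )],
        volumes=[client.V1Volume(
            name="cm",
            config_map=client.V1ConfigMapVolumeSource(name="cm-mode-demo", default_mode=0o400),
        )],
    ),
)
v1.create_namespaced_pod(ns, pod)
```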
SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:09:17.218: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-1078
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
Jan 11 20:09:18.151: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir
Jan 11 20:09:18.151: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-emptydir-969p
STEP: Creating a pod to test subpath
Jan 11 20:09:18.242: INFO: Waiting up to 5m0s for pod "pod-subpath-test-emptydir-969p" in namespace "provisioning-1078" to be "success or failure"
Jan 11 20:09:18.332: INFO: Pod "pod-subpath-test-emptydir-969p": Phase="Pending", Reason="", readiness=false. Elapsed: 89.564065ms
Jan 11 20:09:20.422: INFO: Pod "pod-subpath-test-emptydir-969p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179477277s
Jan 11 20:09:22.512: INFO: Pod "pod-subpath-test-emptydir-969p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.269257969s
STEP: Saw pod success
Jan 11 20:09:22.512: INFO: Pod "pod-subpath-test-emptydir-969p" satisfied condition "success or failure"
Jan 11 20:09:22.601: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-emptydir-969p container test-container-subpath-emptydir-969p: 
STEP: delete the pod
Jan 11 20:09:22.932: INFO: Waiting for pod pod-subpath-test-emptydir-969p to disappear
Jan 11 20:09:23.022: INFO: Pod pod-subpath-test-emptydir-969p no longer exists
STEP: Deleting pod pod-subpath-test-emptydir-969p
Jan 11 20:09:23.022: INFO: Deleting pod "pod-subpath-test-emptydir-969p" in namespace "provisioning-1078"
STEP: Deleting pod
Jan 11 20:09:23.111: INFO: Deleting pod "pod-subpath-test-emptydir-969p" in namespace "provisioning-1078"
Jan 11 20:09:23.200: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:23.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1078" for this suite.
Jan 11 20:09:29.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:32.860: INFO: namespace provisioning-1078 deletion completed in 9.566967244s


• [SLOW TEST:15.642 seconds]
[sig-storage] In-tree Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:362
------------------------------
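This variant mounts the subPath read-only, so reads succeed while writes through the mount are rejected. A short sketch of the relevant mount and its emptyDir backing volume, again with made-up names:

```python
# Sketch: read-only subPath mount backed by an emptyDir volume (illustrative names).
from kubernetes import client

read_only_mount = client.V1VolumeMount(
    name="scratch",
    mount_path="/mnt/ro/file.txt",
    sub_path="file.txt",
    read_only=True,  # writes through this mount are rejected
)

volume = client.V1Volume(
    name="scratch",
    empty_dir=client.V1EmptyDirVolumeSource(),
)
```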
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:09:04.628: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3773
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Simple pod
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:371
STEP: creating the pod from 
Jan 11 20:09:05.548: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-3773'
Jan 11 20:09:06.550: INFO: stderr: ""
Jan 11 20:09:06.551: INFO: stdout: "pod/httpd created\n"
Jan 11 20:09:06.551: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
Jan 11 20:09:06.551: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-3773" to be "running and ready"
Jan 11 20:09:06.640: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.837007ms
Jan 11 20:09:08.731: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.179998217s
Jan 11 20:09:10.821: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.270053003s
Jan 11 20:09:12.911: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.360419321s
Jan 11 20:09:15.001: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.450386297s
Jan 11 20:09:17.091: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.540426021s
Jan 11 20:09:19.181: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.630911353s
Jan 11 20:09:21.272: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.721040552s
Jan 11 20:09:21.272: INFO: Pod "httpd" satisfied condition "running and ready"
Jan 11 20:09:21.272: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support port-forward
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:596
STEP: forwarding the container port to a local port
Jan 11 20:09:21.272: INFO: starting port-forward command and streaming output
Jan 11 20:09:21.272: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config port-forward --namespace=kubectl-3773 httpd :80'
Jan 11 20:09:21.272: INFO: reading from `kubectl port-forward` command's stdout
STEP: curling local port output
Jan 11 20:09:22.691: INFO: got: 

It works!

[AfterEach] Simple pod
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:377
STEP: using delete to clean up resources
Jan 11 20:09:22.695: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-3773'
Jan 11 20:09:23.240: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 20:09:23.240: INFO: stdout: "pod \"httpd\" force deleted\n"
Jan 11 20:09:23.240: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=httpd --no-headers --namespace=kubectl-3773'
Jan 11 20:09:23.794: INFO: stderr: "No resources found in kubectl-3773 namespace.\n"
Jan 11 20:09:23.794: INFO: stdout: ""
Jan 11 20:09:23.795: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=httpd --namespace=kubectl-3773 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 11 20:09:24.240: INFO: stderr: ""
Jan 11 20:09:24.240: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:24.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3773" for this suite.
Jan 11 20:09:30.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:33.994: INFO: namespace kubectl-3773 deletion completed in 9.661952943s


• [SLOW TEST:29.366 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369
    should support port-forward
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:596
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:09:11.177: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1039
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 20:09:13.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370153, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370153, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370153, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370153, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 20:09:16.546: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Jan 11 20:09:16.636: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9862-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:18.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1039" for this suite.
Jan 11 20:09:24.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:27.851: INFO: namespace webhook-1039 deletion completed in 9.592563463s
STEP: Destroying namespace "webhook-1039-markers" for this suite.
Jan 11 20:09:34.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:37.439: INFO: namespace webhook-1039-markers deletion completed in 9.587719858s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:26.623 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:08:14.560: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename volume
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-1340
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146
STEP: deploying csi-hostpath driver
Jan 11 20:08:15.541: INFO: creating *v1.ServiceAccount: volume-1340/csi-attacher
Jan 11 20:08:15.631: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-1340
Jan 11 20:08:15.631: INFO: Define cluster role external-attacher-runner-volume-1340
Jan 11 20:08:15.721: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-1340
Jan 11 20:08:15.811: INFO: creating *v1.Role: volume-1340/external-attacher-cfg-volume-1340
Jan 11 20:08:15.901: INFO: creating *v1.RoleBinding: volume-1340/csi-attacher-role-cfg
Jan 11 20:08:15.991: INFO: creating *v1.ServiceAccount: volume-1340/csi-provisioner
Jan 11 20:08:16.081: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-1340
Jan 11 20:08:16.081: INFO: Define cluster role external-provisioner-runner-volume-1340
Jan 11 20:08:16.171: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-1340
Jan 11 20:08:16.261: INFO: creating *v1.Role: volume-1340/external-provisioner-cfg-volume-1340
Jan 11 20:08:16.351: INFO: creating *v1.RoleBinding: volume-1340/csi-provisioner-role-cfg
Jan 11 20:08:16.441: INFO: creating *v1.ServiceAccount: volume-1340/csi-snapshotter
Jan 11 20:08:16.531: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volume-1340
Jan 11 20:08:16.531: INFO: Define cluster role external-snapshotter-runner-volume-1340
Jan 11 20:08:16.621: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volume-1340
Jan 11 20:08:16.711: INFO: creating *v1.Role: volume-1340/external-snapshotter-leaderelection-volume-1340
Jan 11 20:08:16.801: INFO: creating *v1.RoleBinding: volume-1340/external-snapshotter-leaderelection
Jan 11 20:08:16.891: INFO: creating *v1.ServiceAccount: volume-1340/csi-resizer
Jan 11 20:08:16.981: INFO: creating *v1.ClusterRole: external-resizer-runner-volume-1340
Jan 11 20:08:16.981: INFO: Define cluster role external-resizer-runner-volume-1340
Jan 11 20:08:17.070: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volume-1340
Jan 11 20:08:17.160: INFO: creating *v1.Role: volume-1340/external-resizer-cfg-volume-1340
Jan 11 20:08:17.250: INFO: creating *v1.RoleBinding: volume-1340/csi-resizer-role-cfg
Jan 11 20:08:17.340: INFO: creating *v1.Service: volume-1340/csi-hostpath-attacher
Jan 11 20:08:17.435: INFO: creating *v1.StatefulSet: volume-1340/csi-hostpath-attacher
Jan 11 20:08:17.526: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volume-1340
Jan 11 20:08:17.616: INFO: creating *v1.Service: volume-1340/csi-hostpathplugin
Jan 11 20:08:17.709: INFO: creating *v1.StatefulSet: volume-1340/csi-hostpathplugin
Jan 11 20:08:17.799: INFO: creating *v1.Service: volume-1340/csi-hostpath-provisioner
Jan 11 20:08:17.892: INFO: creating *v1.StatefulSet: volume-1340/csi-hostpath-provisioner
Jan 11 20:08:17.983: INFO: creating *v1.Service: volume-1340/csi-hostpath-resizer
Jan 11 20:08:18.076: INFO: creating *v1.StatefulSet: volume-1340/csi-hostpath-resizer
Jan 11 20:08:18.166: INFO: creating *v1.Service: volume-1340/csi-snapshotter
Jan 11 20:08:18.260: INFO: creating *v1.StatefulSet: volume-1340/csi-snapshotter
Jan 11 20:08:18.350: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-1340
Jan 11 20:08:18.439: INFO: Test running for native CSI Driver, not checking metrics
Jan 11 20:08:18.439: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass volume-1340-csi-hostpath-volume-1340-schwzfj
STEP: creating a claim
Jan 11 20:08:18.621: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathh85cs] to have phase Bound
Jan 11 20:08:18.710: INFO: PersistentVolumeClaim csi-hostpathh85cs found but phase is Pending instead of Bound.
Jan 11 20:08:20.800: INFO: PersistentVolumeClaim csi-hostpathh85cs found and phase=Bound (2.179148899s)
STEP: starting hostpath-injector
STEP: Writing text file contents in the container.
Jan 11 20:08:31.250: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpath-injector --namespace=volume-1340 -- /bin/sh -c echo 'Hello from csi-hostpath from namespace volume-1340' > /opt/0'
Jan 11 20:08:32.616: INFO: stderr: ""
Jan 11 20:08:32.616: INFO: stdout: ""
STEP: Checking that text file contents are perfect.
Jan 11 20:08:32.616: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpath-injector --namespace=volume-1340 -- head -c 50 /opt/0'
Jan 11 20:08:33.977: INFO: stderr: ""
Jan 11 20:08:33.977: INFO: stdout: "Hello from csi-hostpath from namespace volume-1340"
Jan 11 20:08:33.977: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-1340 hostpath-injector -- /bin/sh -c test -b /opt/0'
Jan 11 20:08:35.352: INFO: stderr: ""
Jan 11 20:08:35.352: INFO: stdout: ""
Jan 11 20:08:35.352: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-1340 hostpath-injector -- /bin/sh -c test -d /opt/0'
Jan 11 20:08:36.754: INFO: rc: 1
STEP: Deleting pod hostpath-injector in namespace volume-1340
Jan 11 20:08:36.846: INFO: Waiting for pod hostpath-injector to disappear
Jan 11 20:08:36.936: INFO: Pod hostpath-injector still exists
Jan 11 20:08:38.936: INFO: Waiting for pod hostpath-injector to disappear
Jan 11 20:08:39.028: INFO: Pod hostpath-injector still exists
Jan 11 20:08:40.936: INFO: Waiting for pod hostpath-injector to disappear
Jan 11 20:08:41.027: INFO: Pod hostpath-injector still exists
Jan 11 20:08:42.937: INFO: Waiting for pod hostpath-injector to disappear
Jan 11 20:08:43.029: INFO: Pod hostpath-injector still exists
Jan 11 20:08:44.936: INFO: Waiting for pod hostpath-injector to disappear
Jan 11 20:08:45.033: INFO: Pod hostpath-injector still exists
Jan 11 20:08:46.937: INFO: Waiting for pod hostpath-injector to disappear
Jan 11 20:08:47.029: INFO: Pod hostpath-injector still exists
Jan 11 20:08:48.937: INFO: Waiting for pod hostpath-injector to disappear
Jan 11 20:08:49.028: INFO: Pod hostpath-injector no longer exists
STEP: starting hostpath-client
STEP: Checking that text file contents are perfect.
Jan 11 20:09:07.394: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpath-client --namespace=volume-1340 -- head -c 50 /opt/0'
Jan 11 20:09:08.729: INFO: stderr: ""
Jan 11 20:09:08.729: INFO: stdout: "Hello from csi-hostpath from namespace volume-1340"
Jan 11 20:09:08.729: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-1340 hostpath-client -- /bin/sh -c test -b /opt/0'
Jan 11 20:09:10.093: INFO: stderr: ""
Jan 11 20:09:10.094: INFO: stdout: ""
Jan 11 20:09:10.094: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-1340 hostpath-client -- /bin/sh -c test -d /opt/0'
Jan 11 20:09:11.445: INFO: rc: 1
STEP: cleaning the environment after hostpath
Jan 11 20:09:11.445: INFO: Deleting pod "hostpath-client" in namespace "volume-1340"
Jan 11 20:09:11.536: INFO: Wait up to 5m0s for pod "hostpath-client" to be fully deleted
STEP: Deleting pvc
Jan 11 20:09:19.716: INFO: Deleting PersistentVolumeClaim "csi-hostpathh85cs"
Jan 11 20:09:19.806: INFO: Waiting up to 5m0s for PersistentVolume pvc-adf54eb1-6d80-4b16-a296-83631fa867f2 to get deleted
Jan 11 20:09:19.897: INFO: PersistentVolume pvc-adf54eb1-6d80-4b16-a296-83631fa867f2 found and phase=Released (90.045337ms)
Jan 11 20:09:24.989: INFO: PersistentVolume pvc-adf54eb1-6d80-4b16-a296-83631fa867f2 was removed
STEP: Deleting sc
STEP: uninstalling csi-hostpath driver
Jan 11 20:09:25.080: INFO: deleting *v1.ServiceAccount: volume-1340/csi-attacher
Jan 11 20:09:25.171: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-1340
Jan 11 20:09:25.263: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-1340
Jan 11 20:09:25.355: INFO: deleting *v1.Role: volume-1340/external-attacher-cfg-volume-1340
Jan 11 20:09:25.446: INFO: deleting *v1.RoleBinding: volume-1340/csi-attacher-role-cfg
Jan 11 20:09:25.537: INFO: deleting *v1.ServiceAccount: volume-1340/csi-provisioner
Jan 11 20:09:25.629: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-1340
Jan 11 20:09:25.720: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-1340
Jan 11 20:09:25.811: INFO: deleting *v1.Role: volume-1340/external-provisioner-cfg-volume-1340
Jan 11 20:09:25.903: INFO: deleting *v1.RoleBinding: volume-1340/csi-provisioner-role-cfg
Jan 11 20:09:25.994: INFO: deleting *v1.ServiceAccount: volume-1340/csi-snapshotter
Jan 11 20:09:26.085: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volume-1340
Jan 11 20:09:26.176: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volume-1340
Jan 11 20:09:26.268: INFO: deleting *v1.Role: volume-1340/external-snapshotter-leaderelection-volume-1340
Jan 11 20:09:26.359: INFO: deleting *v1.RoleBinding: volume-1340/external-snapshotter-leaderelection
Jan 11 20:09:26.450: INFO: deleting *v1.ServiceAccount: volume-1340/csi-resizer
Jan 11 20:09:26.541: INFO: deleting *v1.ClusterRole: external-resizer-runner-volume-1340
Jan 11 20:09:26.633: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volume-1340
Jan 11 20:09:26.724: INFO: deleting *v1.Role: volume-1340/external-resizer-cfg-volume-1340
Jan 11 20:09:26.816: INFO: deleting *v1.RoleBinding: volume-1340/csi-resizer-role-cfg
Jan 11 20:09:26.907: INFO: deleting *v1.Service: volume-1340/csi-hostpath-attacher
Jan 11 20:09:27.004: INFO: deleting *v1.StatefulSet: volume-1340/csi-hostpath-attacher
Jan 11 20:09:27.095: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volume-1340
Jan 11 20:09:27.187: INFO: deleting *v1.Service: volume-1340/csi-hostpathplugin
Jan 11 20:09:27.286: INFO: deleting *v1.StatefulSet: volume-1340/csi-hostpathplugin
Jan 11 20:09:27.377: INFO: deleting *v1.Service: volume-1340/csi-hostpath-provisioner
Jan 11 20:09:27.473: INFO: deleting *v1.StatefulSet: volume-1340/csi-hostpath-provisioner
Jan 11 20:09:27.564: INFO: deleting *v1.Service: volume-1340/csi-hostpath-resizer
Jan 11 20:09:27.663: INFO: deleting *v1.StatefulSet: volume-1340/csi-hostpath-resizer
Jan 11 20:09:27.755: INFO: deleting *v1.Service: volume-1340/csi-snapshotter
Jan 11 20:09:27.851: INFO: deleting *v1.StatefulSet: volume-1340/csi-snapshotter
Jan 11 20:09:27.943: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-1340
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:28.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1340" for this suite.
Jan 11 20:09:40.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:43.703: INFO: namespace volume-1340 deletion completed in 15.577137322s


• [SLOW TEST:89.144 seconds]
[sig-storage] CSI Volumes
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
      should store data
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:09:34.018: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-8930
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service nodeport-test with type=NodePort in namespace services-8930
STEP: creating replication controller nodeport-test in namespace services-8930
I0111 20:09:34.843698 8625 runners.go:184] Created replication controller with name: nodeport-test, namespace: services-8930, replica count: 2
I0111 20:09:37.944219 8625 runners.go:184] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 11 20:09:37.944: INFO: Creating new exec pod
Jan 11 20:09:41.307: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-8930 execpod9xn5d -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 11 20:09:42.589: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Jan 11 20:09:42.589: INFO: stdout: ""
Jan 11 20:09:42.590: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-8930 execpod9xn5d -- /bin/sh -x -c nc -zv -t -w 2 100.111.48.61 80'
Jan 11 20:09:43.894: INFO: stderr: "+ nc -zv -t -w 2 100.111.48.61 80\nConnection to 100.111.48.61 80 port [tcp/http] succeeded!\n"
Jan 11 20:09:43.894: INFO: stdout: ""
Jan 11 20:09:43.894: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-8930 execpod9xn5d -- /bin/sh -x -c nc -zv -t -w 2 10.250.27.25 31523'
Jan 11 20:09:45.269: INFO: stderr: "+ nc -zv -t -w 2 10.250.27.25 31523\nConnection to 10.250.27.25 31523 port [tcp/31523] succeeded!\n"
Jan 11 20:09:45.269: INFO: stdout: ""
Jan 11 20:09:45.270: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-8930 execpod9xn5d -- /bin/sh -x -c nc -zv -t -w 2 10.250.7.77 31523'
Jan 11 20:09:46.597: INFO: stderr: "+ nc -zv -t -w 2 10.250.7.77 31523\nConnection to 10.250.7.77 31523 port [tcp/31523] succeeded!\n"
Jan 11 20:09:46.597: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:46.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8930" for this suite.
Jan 11 20:09:52.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:09:56.273: INFO: namespace services-8930 deletion completed in 9.584475236s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95


• [SLOW TEST:22.256 seconds]
[sig-network] Services
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:09:56.282: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4609
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should use same NodePort with same port but different protocols
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1151
STEP: creating service nodeports with same NodePort but different protocols in namespace services-4609
STEP: deleting service nodeports in namespace services-4609
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:57.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4609" for this suite.
Jan 11 20:10:03.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:10:06.800: INFO: namespace services-4609 deletion completed in 9.594687313s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95


• [SLOW TEST:10.518 seconds]
[sig-network] Services
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should use same NodePort with same port but different protocols
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1151
------------------------------
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:09:37.811: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-9335
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152
[BeforeEach] [Volume type: dir-link]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Jan 11 20:09:40.907: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9335 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-7abc213b-65f6-46c6-946d-5d5e1003b83a-backend && ln -s /tmp/local-volume-test-7abc213b-65f6-46c6-946d-5d5e1003b83a-backend /tmp/local-volume-test-7abc213b-65f6-46c6-946d-5d5e1003b83a'
Jan 11 20:09:42.238: INFO: stderr: ""
Jan 11 20:09:42.238: INFO: stdout: ""
STEP: Creating local PVCs and PVs
Jan 11 20:09:42.238: INFO: Creating a PV followed by a PVC
Jan 11 20:09:42.419: INFO: Waiting for PV local-pvzwxfh to bind to PVC pvc-vp5bw
Jan 11 20:09:42.419: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-vp5bw] to have phase Bound
Jan 11 20:09:42.508: INFO: PersistentVolumeClaim pvc-vp5bw found and phase=Bound (89.554374ms)
Jan 11 20:09:42.508: INFO: Waiting up to 3m0s for PersistentVolume local-pvzwxfh to have phase Bound
Jan 11 20:09:42.598: INFO: PersistentVolume local-pvzwxfh found and phase=Bound (89.574797ms)
[It] should be able to write from pod1 and read from pod2
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
STEP: Creating pod1
STEP: Creating a pod
Jan 11 20:09:45.229: INFO: pod "security-context-69511bb5-22a3-4548-8a50-223fd7683e7f" created on Node "ip-10-250-27-25.ec2.internal"
STEP: Writing in pod1
Jan 11 20:09:45.229: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9335 security-context-69511bb5-22a3-4548-8a50-223fd7683e7f -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
Jan 11 20:09:46.571: INFO: stderr: ""
Jan 11 20:09:46.571: INFO: stdout: ""
Jan 11 20:09:46.571: INFO: podRWCmdExec out: "" err: 
Jan 11 20:09:46.571: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9335 security-context-69511bb5-22a3-4548-8a50-223fd7683e7f -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 20:09:47.962: INFO: stderr: ""
Jan 11 20:09:47.962: INFO: stdout: "test-file-content\n"
Jan 11 20:09:47.962: INFO: podRWCmdExec out: "test-file-content\n" err: 
STEP: Deleting pod1
STEP: Deleting pod security-context-69511bb5-22a3-4548-8a50-223fd7683e7f in namespace persistent-local-volumes-test-9335
STEP: Creating pod2
STEP: Creating a pod
Jan 11 20:09:50.504: INFO: pod "security-context-f78ccbef-ef21-4c04-9d38-936fb59e6f6f" created on Node "ip-10-250-27-25.ec2.internal"
STEP: Reading in pod2
Jan 11 20:09:50.504: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9335 security-context-f78ccbef-ef21-4c04-9d38-936fb59e6f6f -- /bin/sh -c cat /mnt/volume1/test-file'
Jan 11 20:09:51.777: INFO: stderr: ""
Jan 11 20:09:51.777: INFO: stdout: "test-file-content\n"
Jan 11 20:09:51.777: INFO: podRWCmdExec out: "test-file-content\n" err: 
STEP: Deleting pod2
STEP: Deleting pod security-context-f78ccbef-ef21-4c04-9d38-936fb59e6f6f in namespace persistent-local-volumes-test-9335
[AfterEach] [Volume type: dir-link]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Jan 11 20:09:51.868: INFO: Deleting PersistentVolumeClaim "pvc-vp5bw"
Jan 11 20:09:51.959: INFO: Deleting PersistentVolume "local-pvzwxfh"
STEP: Removing the test directory
Jan 11 20:09:52.050: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-9335 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7abc213b-65f6-46c6-946d-5d5e1003b83a && rm -r /tmp/local-volume-test-7abc213b-65f6-46c6-946d-5d5e1003b83a-backend'
Jan 11 20:09:53.494: INFO: stderr: ""
Jan 11 20:09:53.494: INFO: stdout: ""
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:53.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9335" for this suite.
Jan 11 20:10:05.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:10:09.283: INFO: namespace persistent-local-volumes-test-9335 deletion completed in 15.606796023s


• [SLOW TEST:31.472 seconds]
[sig-storage] PersistentVolumes-local
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 11 20:09:32.862: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-9454
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 11 20:09:36.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9454" for this suite.
Jan 11 20:10:06.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:10:09.760: INFO: namespace replication-controller-9454 deletion completed in 33.576016023s • [SLOW TEST:36.899 seconds] [sig-apps] ReplicationController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:09:43.719: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9977 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 20:09:45.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370185, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370185, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370185, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370185, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 20:09:48.856: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 
20:09:49.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9977" for this suite. Jan 11 20:09:58.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:10:01.406: INFO: namespace webhook-9977 deletion completed in 11.591970027s STEP: Destroying namespace "webhook-9977-markers" for this suite. Jan 11 20:10:07.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:10:11.033: INFO: namespace webhook-9977-markers deletion completed in 9.626917224s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:27.678 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:10:06.805: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-431 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward api env vars Jan 11 20:10:07.847: INFO: Waiting up to 5m0s for pod "downward-api-d4ddb93f-75be-4ad8-84c4-8f1b22de4db0" in namespace "downward-api-431" to be "success or failure" Jan 11 20:10:07.937: INFO: Pod "downward-api-d4ddb93f-75be-4ad8-84c4-8f1b22de4db0": Phase="Pending", Reason="", readiness=false. Elapsed: 89.612594ms Jan 11 20:10:10.027: INFO: Pod "downward-api-d4ddb93f-75be-4ad8-84c4-8f1b22de4db0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179981847s STEP: Saw pod success Jan 11 20:10:10.027: INFO: Pod "downward-api-d4ddb93f-75be-4ad8-84c4-8f1b22de4db0" satisfied condition "success or failure" Jan 11 20:10:10.117: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downward-api-d4ddb93f-75be-4ad8-84c4-8f1b22de4db0 container dapi-container: STEP: delete the pod Jan 11 20:10:10.309: INFO: Waiting for pod downward-api-d4ddb93f-75be-4ad8-84c4-8f1b22de4db0 to disappear Jan 11 20:10:10.398: INFO: Pod downward-api-d4ddb93f-75be-4ad8-84c4-8f1b22de4db0 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:10:10.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-431" for this suite. Jan 11 20:10:16.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:10:20.077: INFO: namespace downward-api-431 deletion completed in 9.586768658s • [SLOW TEST:13.272 seconds] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:10:11.408: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7516 STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-projected-all-test-volume-9219c6d7-c503-48da-bc42-9bdb608290d2 STEP: Creating secret with name secret-projected-all-test-volume-941e8829-108e-432e-bac9-acb90d32ac73 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 11 20:10:12.325: INFO: Waiting up to 5m0s for pod "projected-volume-a62c44fa-4842-4bd9-b51d-860f81d7b59b" in namespace "projected-7516" to be "success or failure" Jan 11 20:10:12.414: INFO: Pod "projected-volume-a62c44fa-4842-4bd9-b51d-860f81d7b59b": Phase="Pending", Reason="", readiness=false. Elapsed: 89.612464ms Jan 11 20:10:14.505: INFO: Pod "projected-volume-a62c44fa-4842-4bd9-b51d-860f81d7b59b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179809229s STEP: Saw pod success Jan 11 20:10:14.505: INFO: Pod "projected-volume-a62c44fa-4842-4bd9-b51d-860f81d7b59b" satisfied condition "success or failure" Jan 11 20:10:14.594: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod projected-volume-a62c44fa-4842-4bd9-b51d-860f81d7b59b container projected-all-volume-test: STEP: delete the pod Jan 11 20:10:14.787: INFO: Waiting for pod projected-volume-a62c44fa-4842-4bd9-b51d-860f81d7b59b to disappear Jan 11 20:10:14.876: INFO: Pod projected-volume-a62c44fa-4842-4bd9-b51d-860f81d7b59b no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:10:14.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7516" for this suite. Jan 11 20:10:23.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:10:26.552: INFO: namespace projected-7516 deletion completed in 11.584707455s • [SLOW TEST:15.144 seconds] [sig-storage] Projected combined /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-api-machinery] Generated clientset /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:10:09.768: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename clientset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in clientset-3653 STEP: Waiting for a default service account to be provisioned in namespace [It] should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:104 STEP: constructing the pod STEP: setting up watch STEP: creating the pod STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the deletionTimestamp and deletionGracePeriodSeconds of the pod is set [AfterEach] [sig-api-machinery] Generated clientset /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:10:13.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "clientset-3653" for this suite. 
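The generated-clientset spec above deletes a pod with a grace period and then checks that deletionTimestamp and deletionGracePeriodSeconds are set on the object it observes. Roughly the same observation can be made from the command line; the pod name is a placeholder and the 30s grace period is illustrative:

NS=clientset-3653
POD=example-pod                                    # placeholder; the test generates its own pod name
kubectl -n "$NS" delete pod "$POD" --grace-period=30 --wait=false
# While the pod is still terminating, both deletion fields are populated:
kubectl -n "$NS" get pod "$POD" -o jsonpath='{.metadata.deletionTimestamp}{"  "}{.metadata.deletionGracePeriodSeconds}'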
Jan 11 20:10:25.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:10:28.693: INFO: namespace clientset-3653 deletion completed in 15.569199758s • [SLOW TEST:18.925 seconds] [sig-api-machinery] Generated clientset /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:104 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:10:20.080: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2811 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49" Jan 11 20:10:23.503: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49 && dd if=/dev/zero of=/tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49/file' Jan 11 20:10:25.332: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0176751 s, 1.2 GB/s\n" Jan 11 20:10:25.333: INFO: stdout: "" Jan 11 20:10:25.333: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:10:26.761: INFO: stderr: "" Jan 11 20:10:26.761: INFO: stdout: "/dev/loop0\n" Jan 11 20:10:26.761: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec 
--namespace=persistent-local-volumes-test-2811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49 && chmod o+rwx /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49' Jan 11 20:10:28.096: INFO: stderr: "mke2fs 1.44.5 (15-Dec-2018)\n" Jan 11 20:10:28.096: INFO: stdout: "Discarding device blocks: 1024/20480\b\b\b\b\b\b\b\b\b\b\b \b\b\b\b\b\b\b\b\b\b\bdone \nCreating filesystem with 20480 1k blocks and 5136 inodes\nFilesystem UUID: b2ff7750-cb6b-4102-a5b1-80711a7055bb\nSuperblock backups stored on blocks: \n\t8193\n\nAllocating group tables: 0/3\b\b\b \b\b\bdone \nWriting inode tables: 0/3\b\b\b \b\b\bdone \nCreating journal (1024 blocks): done\nWriting superblocks and filesystem accounting information: 0/3\b\b\b \b\b\bdone\n\n" STEP: Creating local PVCs and PVs Jan 11 20:10:28.096: INFO: Creating a PV followed by a PVC Jan 11 20:10:28.277: INFO: Waiting for PV local-pvbbrrz to bind to PVC pvc-bct5w Jan 11 20:10:28.277: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-bct5w] to have phase Bound Jan 11 20:10:28.366: INFO: PersistentVolumeClaim pvc-bct5w found and phase=Bound (89.661261ms) Jan 11 20:10:28.366: INFO: Waiting up to 3m0s for PersistentVolume local-pvbbrrz to have phase Bound Jan 11 20:10:28.456: INFO: PersistentVolume local-pvbbrrz found and phase=Bound (89.652877ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jan 11 20:10:30.995: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-d3d0c18e-bab2-4622-a88c-6c00ce54574c --namespace=persistent-local-volumes-test-2811 -- stat -c %g /mnt/volume1' Jan 11 20:10:32.257: INFO: stderr: "" Jan 11 20:10:32.257: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jan 11 20:10:34.618: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-864a00fa-62af-473f-8d44-59626e2648b7 --namespace=persistent-local-volumes-test-2811 -- stat -c %g /mnt/volume1' Jan 11 20:10:35.941: INFO: stderr: "" Jan 11 20:10:35.941: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod security-context-d3d0c18e-bab2-4622-a88c-6c00ce54574c in namespace persistent-local-volumes-test-2811 STEP: Deleting second pod STEP: Deleting pod security-context-864a00fa-62af-473f-8d44-59626e2648b7 in namespace persistent-local-volumes-test-2811 [AfterEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:10:36.124: INFO: Deleting PersistentVolumeClaim "pvc-bct5w" Jan 11 20:10:36.215: INFO: Deleting PersistentVolume 
"local-pvbbrrz" Jan 11 20:10:36.305: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49' Jan 11 20:10:37.801: INFO: stderr: "" Jan 11 20:10:37.801: INFO: stdout: "" Jan 11 20:10:37.801: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:10:39.135: INFO: stderr: "" Jan 11 20:10:39.135: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49/file Jan 11 20:10:39.135: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 20:10:40.413: INFO: stderr: "" Jan 11 20:10:40.413: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49 Jan 11 20:10:40.413: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2811 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d6f83ed5-888e-442b-994e-eaa06362cc49' Jan 11 20:10:41.809: INFO: stderr: "" Jan 11 20:10:41.809: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:10:41.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2811" for this suite. 
Jan 11 20:10:48.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:10:51.572: INFO: namespace persistent-local-volumes-test-2811 deletion completed in 9.581125076s • [SLOW TEST:31.492 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:10:28.698: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-5889 STEP: Waiting for a default service account to be provisioned in namespace [It] should store data /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146 Jan 11 20:10:29.351: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 20:10:29.441: INFO: Creating resource for inline volume STEP: starting hostpath-injector STEP: Writing text file contents in the container. Jan 11 20:10:31.712: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpath-injector --namespace=volume-5889 -- /bin/sh -c echo 'Hello from hostPath from namespace volume-5889' > /opt/0/index.html' Jan 11 20:10:33.038: INFO: stderr: "" Jan 11 20:10:33.038: INFO: stdout: "" STEP: Checking that text file contents are perfect. 
Jan 11 20:10:33.038: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpath-injector --namespace=volume-5889 -- cat /opt/0/index.html' Jan 11 20:10:34.347: INFO: stderr: "" Jan 11 20:10:34.348: INFO: stdout: "Hello from hostPath from namespace volume-5889\n" Jan 11 20:10:34.348: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-5889 hostpath-injector -- /bin/sh -c test -d /opt/0' Jan 11 20:10:35.662: INFO: stderr: "" Jan 11 20:10:35.662: INFO: stdout: "" Jan 11 20:10:35.662: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-5889 hostpath-injector -- /bin/sh -c test -b /opt/0' Jan 11 20:10:37.018: INFO: rc: 1 STEP: Deleting pod hostpath-injector in namespace volume-5889 Jan 11 20:10:37.109: INFO: Waiting for pod hostpath-injector to disappear Jan 11 20:10:37.198: INFO: Pod hostpath-injector still exists Jan 11 20:10:39.198: INFO: Waiting for pod hostpath-injector to disappear Jan 11 20:10:39.288: INFO: Pod hostpath-injector still exists Jan 11 20:10:41.198: INFO: Waiting for pod hostpath-injector to disappear Jan 11 20:10:41.288: INFO: Pod hostpath-injector no longer exists STEP: starting hostpath-client STEP: Checking that text file contents are perfect. Jan 11 20:10:43.647: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec hostpath-client --namespace=volume-5889 -- cat /opt/0/index.html' Jan 11 20:10:44.957: INFO: stderr: "" Jan 11 20:10:44.957: INFO: stdout: "Hello from hostPath from namespace volume-5889\n" Jan 11 20:10:44.957: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-5889 hostpath-client -- /bin/sh -c test -d /opt/0' Jan 11 20:10:46.238: INFO: stderr: "" Jan 11 20:10:46.239: INFO: stdout: "" Jan 11 20:10:46.239: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volume-5889 hostpath-client -- /bin/sh -c test -b /opt/0' Jan 11 20:10:47.542: INFO: rc: 1 STEP: cleaning the environment after hostpath Jan 11 20:10:47.543: INFO: Deleting pod "hostpath-client" in namespace "volume-5889" Jan 11 20:10:47.633: INFO: Wait up to 5m0s for pod "hostpath-client" to be fully deleted Jan 11 20:10:53.811: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:10:53.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-5889" for this suite. 
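The hostPath "should store data" flow above is: an injector pod writes index.html under the shared path, the injector is deleted, and a client pod on the same node reads the file back; the final `test -b` probe is expected to return rc 1 because /opt/0 is a directory, not a block device. Condensed into the underlying commands (pod names and namespace from the log; the client pod only exists after the injector is gone):

NS=volume-5889
kubectl -n "$NS" exec hostpath-injector -- /bin/sh -c "echo 'Hello from hostPath from namespace $NS' > /opt/0/index.html"
kubectl -n "$NS" delete pod hostpath-injector --wait=true
# Once hostpath-client is running against the same hostPath:
kubectl -n "$NS" exec hostpath-client -- cat /opt/0/index.html        # the content survives the injector pod
kubectl -n "$NS" exec hostpath-client -- /bin/sh -c 'test -d /opt/0'  # rc 0: it is a directory
kubectl -n "$NS" exec hostpath-client -- /bin/sh -c 'test -b /opt/0'  # rc 1: not a block device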
Jan 11 20:11:02.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:05.473: INFO: namespace volume-5889 deletion completed in 11.571155178s • [SLOW TEST:36.775 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should store data /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:09:11.235: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-2677 STEP: Waiting for a default service account to be provisioned in namespace [It] should support restarting containers using directory as subpath [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:303 Jan 11 20:09:12.148: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 20:09:12.331: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2677" in namespace "provisioning-2677" to be "success or failure" Jan 11 20:09:12.421: INFO: Pod "hostpath-symlink-prep-provisioning-2677": Phase="Pending", Reason="", readiness=false. Elapsed: 90.006777ms Jan 11 20:09:14.511: INFO: Pod "hostpath-symlink-prep-provisioning-2677": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180138435s STEP: Saw pod success Jan 11 20:09:14.511: INFO: Pod "hostpath-symlink-prep-provisioning-2677" satisfied condition "success or failure" Jan 11 20:09:14.511: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2677" in namespace "provisioning-2677" Jan 11 20:09:14.604: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2677" to be fully deleted Jan 11 20:09:14.693: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-l2f6 STEP: Failing liveness probe Jan 11 20:09:18.969: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-2677 pod-subpath-test-hostpathsymlink-l2f6 --container test-container-volume-hostpathsymlink-l2f6 -- /bin/sh -c rm /probe-volume/probe-file' Jan 11 20:09:20.384: INFO: stderr: "" Jan 11 20:09:20.384: INFO: stdout: "" Jan 11 20:09:20.384: INFO: Pod exec output: STEP: Waiting for container to restart Jan 11 20:09:20.475: INFO: Container test-container-subpath-hostpathsymlink-l2f6, restarts: 0 Jan 11 20:09:30.566: INFO: Container test-container-subpath-hostpathsymlink-l2f6, restarts: 2 Jan 11 20:09:30.566: INFO: Container has restart count: 2 STEP: Rewriting the file Jan 11 20:09:30.566: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-2677 pod-subpath-test-hostpathsymlink-l2f6 --container test-container-volume-hostpathsymlink-l2f6 -- /bin/sh -c echo test-after > /probe-volume/probe-file' Jan 11 20:09:31.881: INFO: stderr: "" Jan 11 20:09:31.882: INFO: stdout: "" Jan 11 20:09:31.882: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jan 11 20:09:50.062: INFO: Container has restart count: 3 Jan 11 20:10:52.062: INFO: Container restart has stabilized Jan 11 20:10:52.062: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-l2f6" in namespace "provisioning-2677" Jan 11 20:10:52.153: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpathsymlink-l2f6" to be fully deleted STEP: Deleting pod Jan 11 20:10:58.333: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-l2f6" in namespace "provisioning-2677" Jan 11 20:10:58.518: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2677" in namespace "provisioning-2677" to be "success or failure" Jan 11 20:10:58.608: INFO: Pod "hostpath-symlink-prep-provisioning-2677": Phase="Pending", Reason="", readiness=false. Elapsed: 89.978588ms Jan 11 20:11:00.699: INFO: Pod "hostpath-symlink-prep-provisioning-2677": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180695755s STEP: Saw pod success Jan 11 20:11:00.699: INFO: Pod "hostpath-symlink-prep-provisioning-2677" satisfied condition "success or failure" Jan 11 20:11:00.699: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2677" in namespace "provisioning-2677" Jan 11 20:11:00.792: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2677" to be fully deleted Jan 11 20:11:00.882: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:00.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-2677" for this suite. Jan 11 20:11:07.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:10.567: INFO: namespace provisioning-2677 deletion completed in 9.592740544s • [SLOW TEST:119.332 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support restarting containers using directory as subpath [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:303 ------------------------------ SS ------------------------------ [BeforeEach] [sig-network] Network /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:09:13.451: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename network STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in network-7217 STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve connrection reset issue #74839 [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:228 Jan 11 20:09:14.181: INFO: Waiting up to 5m0s for all pods (need at least 1) in namespace 'network-7217' to be running and ready Jan 11 20:09:14.449: INFO: The status of Pod boom-server is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 20:09:14.449: INFO: 0 / 1 pods in namespace 'network-7217' are running and ready (0 seconds elapsed) Jan 11 20:09:14.449: INFO: expected 0 pod replicas in namespace 'network-7217', 0 are Running and Ready. 
Jan 11 20:09:14.449: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 20:09:14.449: INFO: boom-server ip-10-250-27-25.ec2.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:09:14 +0000 UTC ContainersNotReady containers with unready status: [boom-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:09:14 +0000 UTC ContainersNotReady containers with unready status: [boom-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:09:14 +0000 UTC }] Jan 11 20:09:14.449: INFO: Jan 11 20:09:16.717: INFO: The status of Pod boom-server is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 20:09:16.717: INFO: 0 / 1 pods in namespace 'network-7217' are running and ready (2 seconds elapsed) Jan 11 20:09:16.717: INFO: expected 0 pod replicas in namespace 'network-7217', 0 are Running and Ready. Jan 11 20:09:16.717: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 20:09:16.717: INFO: boom-server ip-10-250-27-25.ec2.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:09:14 +0000 UTC ContainersNotReady containers with unready status: [boom-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:09:14 +0000 UTC ContainersNotReady containers with unready status: [boom-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 20:09:14 +0000 UTC }] Jan 11 20:09:16.717: INFO: Jan 11 20:09:18.721: INFO: 1 / 1 pods in namespace 'network-7217' are running and ready (4 seconds elapsed) Jan 11 20:09:18.721: INFO: expected 0 pod replicas in namespace 'network-7217', 0 are Running and Ready. STEP: Server pod created STEP: Server service created STEP: Client pod created [AfterEach] [sig-network] Network /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:10:20.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "network-7217" for this suite. 
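Most of the output above is the framework's poll loop waiting for the boom-server pod to report Ready. Outside the framework, an equivalent wait is a single command (namespace and pod name from the log):

kubectl -n network-7217 wait pod/boom-server --for=condition=Ready --timeout=5m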
Jan 11 20:11:09.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:12.357: INFO: namespace network-7217 deletion completed in 51.561826418s • [SLOW TEST:118.907 seconds] [sig-network] Network /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve connrection reset issue #74839 [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:228 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:10:51.581: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-7946 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-76514bc4-3e49-4ec7-acbc-515cbaf2cfc3" Jan 11 20:10:54.672: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7946 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-76514bc4-3e49-4ec7-acbc-515cbaf2cfc3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-76514bc4-3e49-4ec7-acbc-515cbaf2cfc3" "/tmp/local-volume-test-76514bc4-3e49-4ec7-acbc-515cbaf2cfc3"' Jan 11 20:10:56.190: INFO: stderr: "" Jan 11 20:10:56.190: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:10:56.190: INFO: Creating a PV followed by a PVC Jan 11 20:10:56.370: INFO: Waiting for PV local-pvk99zg to bind to PVC pvc-2zfnh Jan 11 20:10:56.370: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-2zfnh] to have phase Bound Jan 11 20:10:56.460: INFO: PersistentVolumeClaim pvc-2zfnh found and phase=Bound (89.788872ms) Jan 11 20:10:56.460: INFO: Waiting up to 3m0s for PersistentVolume local-pvk99zg to have phase Bound Jan 11 20:10:56.550: INFO: PersistentVolume local-pvk99zg found and phase=Bound (89.903621ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a 
pod Jan 11 20:11:01.092: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-e3184310-fcfb-45d9-8f6e-52b38b75946c --namespace=persistent-local-volumes-test-7946 -- stat -c %g /mnt/volume1' Jan 11 20:11:02.490: INFO: stderr: "" Jan 11 20:11:02.490: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod security-context-e3184310-fcfb-45d9-8f6e-52b38b75946c in namespace persistent-local-volumes-test-7946 [AfterEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:11:02.581: INFO: Deleting PersistentVolumeClaim "pvc-2zfnh" Jan 11 20:11:02.671: INFO: Deleting PersistentVolume "local-pvk99zg" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-76514bc4-3e49-4ec7-acbc-515cbaf2cfc3" Jan 11 20:11:02.763: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7946 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-76514bc4-3e49-4ec7-acbc-515cbaf2cfc3"' Jan 11 20:11:04.114: INFO: stderr: "" Jan 11 20:11:04.114: INFO: stdout: "" STEP: Removing the test directory Jan 11 20:11:04.115: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7946 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-76514bc4-3e49-4ec7-acbc-515cbaf2cfc3' Jan 11 20:11:05.475: INFO: stderr: "" Jan 11 20:11:05.475: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:05.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7946" for this suite. 
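The tmpfs volume type above is a size-limited tmpfs mount created on the node, and the fsGroup check is a stat inside the consuming pod. A node-side sketch (path shortened; the fsGroup value 1234 comes from the test pod's securityContext, which is not shown in the log):

DIR=/tmp/local-volume-test-example
mkdir -p "$DIR" && mount -t tmpfs -o size=10m "tmpfs-$DIR" "$DIR"
# ...a pod whose securityContext sets fsGroup: 1234 mounts the bound PVC at /mnt/volume1...
POD=security-context-example     # placeholder pod name
kubectl -n persistent-local-volumes-test-7946 exec "$POD" -- stat -c %g /mnt/volume1   # expect: 1234
# Teardown:
umount "$DIR" && rm -r "$DIR"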
Jan 11 20:11:17.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:21.245: INFO: namespace persistent-local-volumes-test-7946 deletion completed in 15.587701145s • [SLOW TEST:29.663 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:10.572: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename custom-resource-definition STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-163 STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:11:11.351: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:11.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-163" for this suite. 
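The CRD spec above only exercises creating and deleting CustomResourceDefinition objects. A minimal definition of the kind it creates looks roughly like this, using the apiextensions.k8s.io/v1beta1 API that a v1.16 server still serves (group and names are illustrative; the test generates random ones):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com          # must be <plural>.<group>
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl delete crd foos.example.com    # deleting the CRD also removes its custom resources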
Jan 11 20:11:18.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:21.495: INFO: namespace custom-resource-definition-163 deletion completed in 9.601603992s • [SLOW TEST:10.923 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42 creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:05.514: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename dns STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-2317 STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod resolv.conf /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:446 STEP: Preparing a test DNS service with injected DNS names... Jan 11 20:11:06.738: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-configmap-dns-server-5xhct e2e-dns-configmap-dns-server- dns-2317 /api/v1/namespaces/dns-2317/pods/e2e-dns-configmap-dns-server-5xhct c3528094-0ff6-41c8-9966-26b945bd7fa2 71768 0 2020-01-11 20:11:06 +0000 UTC map[] map[kubernetes.io/psp:e2e-test-privileged-psp] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blgnq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blgnq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:dns,Image:gcr.io/kubernetes-e2e-test-images/dnsutils:1.1,Command:[/usr/sbin/dnsmasq -u root -k --log-facility - -q 
-A/notexistname.resolv.conf.local/1.1.1.1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blgnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 20:11:09.008: INFO: testServerIP is 100.64.1.173 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Jan 11 20:11:09.100: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils-kvtg8 e2e-dns-utils- dns-2317 /api/v1/namespaces/dns-2317/pods/e2e-dns-utils-kvtg8 62e90936-ad5b-4839-80fb-052fd1e5e4f9 71786 0 2020-01-11 20:11:09 +0000 UTC map[] map[kubernetes.io/psp:e2e-test-privileged-psp] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blgnq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blgnq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:util,Image:gcr.io/kubernetes-e2e-test-images/dnsutils:1.1,Command:[sleep 10000],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blgnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[100.64.1.173],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS option is configured on pod... 
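The pod dump above is dense; the parts that matter for this spec are dnsPolicy: None plus the custom dnsConfig (nameserver 100.64.1.173, search domain resolv.conf.local, ndots 2). A distilled, hypothetical manifest with the same DNS settings (the log's generated pod name is e2e-dns-utils-kvtg8):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: e2e-dns-utils              # illustrative name
  namespace: dns-2317
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers: ["100.64.1.173"]  # IP of the dnsmasq test server created above
    searches: ["resolv.conf.local"]
    options:
    - name: ndots
      value: "2"
  containers:
  - name: util
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.1
    command: ["sleep", "10000"]
EOF
# The verification the log performs next is equivalent to:
kubectl -n dns-2317 exec e2e-dns-utils -- cat /etc/resolv.conf
kubectl -n dns-2317 exec e2e-dns-utils -- /usr/bin/dig +short +search notexistname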
Jan 11 20:11:11.279: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-2317 PodName:e2e-dns-utils-kvtg8 ContainerName:util Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:11:11.279: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Verifying customized name server and search path are working... Jan 11 20:11:12.193: INFO: ExecWithOptions {Command:[/usr/bin/dig +short +search notexistname] Namespace:dns-2317 PodName:e2e-dns-utils-kvtg8 ContainerName:util Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:11:12.193: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:11:13.078: INFO: Deleting pod e2e-dns-utils-kvtg8... Jan 11 20:11:13.169: INFO: Deleting pod e2e-dns-configmap-dns-server-5xhct... [AfterEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:13.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2317" for this suite. Jan 11 20:11:19.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:22.930: INFO: namespace dns-2317 deletion completed in 9.577004168s • [SLOW TEST:17.416 seconds] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod resolv.conf /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:446 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:10:26.555: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename statefulset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-5365 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-5365 [It] should adopt matching orphans and release non-matching pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:137 STEP: Creating statefulset ss in namespace statefulset-5365 Jan 11 20:10:27.427: INFO: Default storage class: "default" STEP: Saturating stateful set ss Jan 11 20:10:27.518: INFO: Waiting for stateful pod at index 0 to enter Running Jan 11 20:10:27.608: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:10:37.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:10:47.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - 
Ready=false Jan 11 20:10:57.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 20:10:57.699: INFO: Resuming stateful pod at index 0 Jan 11 20:10:57.789: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-5365 ss-0 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Jan 11 20:10:59.186: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Jan 11 20:10:59.186: INFO: stdout: "" Jan 11 20:10:59.186: INFO: Resumed pod ss-0 STEP: Checking that stateful set pods are created with ControllerRef STEP: Orphaning one of the stateful set's pods Jan 11 20:10:59.957: INFO: Successfully updated pod "ss-0" STEP: Checking that the stateful set readopts the pod Jan 11 20:10:59.957: INFO: Waiting up to 10m0s for pod "ss-0" in namespace "statefulset-5365" to be "adopted" Jan 11 20:11:00.048: INFO: Pod "ss-0": Phase="Running", Reason="", readiness=true. Elapsed: 90.757303ms Jan 11 20:11:00.048: INFO: Pod "ss-0" satisfied condition "adopted" STEP: Removing the labels from one of the stateful set's pods Jan 11 20:11:00.729: INFO: Successfully updated pod "ss-0" STEP: Checking that the stateful set releases the pod Jan 11 20:11:00.729: INFO: Waiting up to 10m0s for pod "ss-0" in namespace "statefulset-5365" to be "released" Jan 11 20:11:00.820: INFO: Pod "ss-0": Phase="Running", Reason="", readiness=true. Elapsed: 91.362407ms Jan 11 20:11:00.820: INFO: Pod "ss-0" satisfied condition "released" STEP: Readding labels to the stateful set's pod Jan 11 20:11:01.502: INFO: Successfully updated pod "ss-0" STEP: Checking that the stateful set readopts the pod Jan 11 20:11:01.502: INFO: Waiting up to 10m0s for pod "ss-0" in namespace "statefulset-5365" to be "adopted" Jan 11 20:11:01.592: INFO: Pod "ss-0": Phase="Running", Reason="", readiness=true. Elapsed: 89.585278ms Jan 11 20:11:01.592: INFO: Pod "ss-0" satisfied condition "adopted" [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Jan 11 20:11:01.592: INFO: Deleting all statefulset in ns statefulset-5365 Jan 11 20:11:01.682: INFO: Scaling statefulset ss to 0 Jan 11 20:11:12.044: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 20:11:12.134: INFO: Deleting statefulset ss Jan 11 20:11:12.316: INFO: Deleting pvc: datadir-ss-0 with volume pvc-bc2c8206-e6e4-4675-9310-70b1d28134ca Jan 11 20:11:12.497: INFO: Still waiting for pvs of statefulset to disappear: pvc-bc2c8206-e6e4-4675-9310-70b1d28134ca: {Phase:Released Message: Reason:} [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:22.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5365" for this suite. 
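For reference, the orphan/adopt/release sequence exercised above can be reproduced by hand with kubectl. This is a rough sketch, not output from this run; it assumes a StatefulSet named ss whose pods carry the selector label app=ss in an illustrative namespace statefulset-test:

# Orphan the pod: drop its controller ownerReference. The StatefulSet
# controller should re-adopt it because the labels still match its selector.
kubectl -n statefulset-test patch pod ss-0 --type=json \
  -p '[{"op":"remove","path":"/metadata/ownerReferences"}]'

# Remove the selector label: the controller releases the pod
# (clears its ownerReference) instead of deleting it.
kubectl -n statefulset-test label pod ss-0 app-

# Re-add the label and the pod is adopted again.
kubectl -n statefulset-test label pod ss-0 app=ss

# Inspect ownerReferences between the steps to watch adoption/release happen.
kubectl -n statefulset-test get pod ss-0 -o jsonpath='{.metadata.ownerReferences}'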
Jan 11 20:11:28.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:32.290: INFO: namespace statefulset-5365 deletion completed in 9.612139241s • [SLOW TEST:65.734 seconds] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should adopt matching orphans and release non-matching pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:137 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:21.248: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9119 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 20:11:21.984: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51e5e6f9-8485-4a59-b830-f33ea2347d0d" in namespace "downward-api-9119" to be "success or failure" Jan 11 20:11:22.074: INFO: Pod "downwardapi-volume-51e5e6f9-8485-4a59-b830-f33ea2347d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 89.872047ms Jan 11 20:11:24.175: INFO: Pod "downwardapi-volume-51e5e6f9-8485-4a59-b830-f33ea2347d0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.191136041s STEP: Saw pod success Jan 11 20:11:24.175: INFO: Pod "downwardapi-volume-51e5e6f9-8485-4a59-b830-f33ea2347d0d" satisfied condition "success or failure" Jan 11 20:11:24.265: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-51e5e6f9-8485-4a59-b830-f33ea2347d0d container client-container: STEP: delete the pod Jan 11 20:11:24.456: INFO: Waiting for pod downwardapi-volume-51e5e6f9-8485-4a59-b830-f33ea2347d0d to disappear Jan 11 20:11:24.545: INFO: Pod downwardapi-volume-51e5e6f9-8485-4a59-b830-f33ea2347d0d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:24.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9119" for this suite. 
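The DefaultMode check above boils down to mounting a downward API volume with an explicit defaultMode and reading back the resulting file permissions. A minimal sketch of such a pod follows (names and image tag are illustrative; the test itself generates its pod spec in Go):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
# After the pod completes, its log should print 400, i.e. the defaultMode.
kubectl logs downwardapi-defaultmode-demo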
Jan 11 20:11:30.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:34.263: INFO: namespace downward-api-9119 deletion completed in 9.626394085s • [SLOW TEST:13.015 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:21.498: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2962 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 20:11:24.607: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2962 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56-backend && mount --bind /tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56-backend /tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56-backend && ln -s /tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56-backend /tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56' Jan 11 20:11:25.863: INFO: stderr: "" Jan 11 20:11:25.863: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:11:25.863: INFO: Creating a PV followed by a PVC Jan 11 20:11:26.044: INFO: Waiting for PV local-pvxbq2r to bind to PVC pvc-jhbnk Jan 11 20:11:26.044: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-jhbnk] to have phase Bound Jan 11 20:11:26.133: INFO: PersistentVolumeClaim pvc-jhbnk found and phase=Bound (89.279497ms) Jan 11 20:11:26.133: INFO: Waiting up to 3m0s for PersistentVolume local-pvxbq2r to have phase Bound Jan 11 20:11:26.223: INFO: PersistentVolume local-pvxbq2r found and phase=Bound (89.728781ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jan 11 20:11:28.854: INFO: pod "security-context-f143b084-9edb-4ad4-ae14-63339f5d819a" created on Node "ip-10-250-27-25.ec2.internal" 
STEP: Writing in pod1 Jan 11 20:11:28.854: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2962 security-context-f143b084-9edb-4ad4-ae14-63339f5d819a -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 20:11:30.150: INFO: stderr: "" Jan 11 20:11:30.150: INFO: stdout: "" Jan 11 20:11:30.150: INFO: podRWCmdExec out: "" err: Jan 11 20:11:30.150: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2962 security-context-f143b084-9edb-4ad4-ae14-63339f5d819a -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:11:31.425: INFO: stderr: "" Jan 11 20:11:31.425: INFO: stdout: "test-file-content\n" Jan 11 20:11:31.425: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jan 11 20:11:33.876: INFO: pod "security-context-ff854634-6b75-45ad-9d06-539f6f059e34" created on Node "ip-10-250-27-25.ec2.internal" Jan 11 20:11:33.876: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2962 security-context-ff854634-6b75-45ad-9d06-539f6f059e34 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:11:35.141: INFO: stderr: "" Jan 11 20:11:35.141: INFO: stdout: "test-file-content\n" Jan 11 20:11:35.141: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod2 Jan 11 20:11:35.141: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2962 security-context-ff854634-6b75-45ad-9d06-539f6f059e34 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56 > /mnt/volume1/test-file' Jan 11 20:11:36.401: INFO: stderr: "" Jan 11 20:11:36.401: INFO: stdout: "" Jan 11 20:11:36.401: INFO: podRWCmdExec out: "" err: STEP: Reading in pod1 Jan 11 20:11:36.401: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2962 security-context-f143b084-9edb-4ad4-ae14-63339f5d819a -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:11:37.719: INFO: stderr: "" Jan 11 20:11:37.719: INFO: stdout: "/tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56\n" Jan 11 20:11:37.719: INFO: podRWCmdExec out: "/tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-f143b084-9edb-4ad4-ae14-63339f5d819a in namespace persistent-local-volumes-test-2962 STEP: Deleting pod2 STEP: Deleting pod security-context-ff854634-6b75-45ad-9d06-539f6f059e34 in namespace persistent-local-volumes-test-2962 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:11:37.902: INFO: 
Deleting PersistentVolumeClaim "pvc-jhbnk" Jan 11 20:11:37.994: INFO: Deleting PersistentVolume "local-pvxbq2r" STEP: Removing the test directory Jan 11 20:11:38.085: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2962 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56 && umount /tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56-backend && rm -r /tmp/local-volume-test-20869fa2-0c6a-40ca-8753-7e225ca56a56-backend' Jan 11 20:11:39.463: INFO: stderr: "" Jan 11 20:11:39.463: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:39.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2962" for this suite. Jan 11 20:11:45.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:49.249: INFO: namespace persistent-local-volumes-test-2962 deletion completed in 9.602263595s • [SLOW TEST:27.751 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:34.273: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-8921 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:250 Jan 11 20:11:34.912: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:11:34.912: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-ht9k STEP: 
Checking for subpath error in container status Jan 11 20:11:39.186: INFO: Deleting pod "pod-subpath-test-emptydir-ht9k" in namespace "provisioning-8921" Jan 11 20:11:39.278: INFO: Wait up to 5m0s for pod "pod-subpath-test-emptydir-ht9k" to be fully deleted STEP: Deleting pod Jan 11 20:11:45.457: INFO: Deleting pod "pod-subpath-test-emptydir-ht9k" in namespace "provisioning-8921" Jan 11 20:11:45.547: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:45.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-8921" for this suite. Jan 11 20:11:53.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:11:57.219: INFO: namespace provisioning-8921 deletion completed in 11.580943295s • [SLOW TEST:22.946 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:250 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:22.945: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-7866 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 20:11:26.033: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7866 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-097ad370-a886-4205-931c-cdb596aad7cd' Jan 11 20:11:27.394: INFO: stderr: "" Jan 11 20:11:27.394: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:11:27.394: INFO: 
Creating a PV followed by a PVC Jan 11 20:11:27.573: INFO: Waiting for PV local-pvpfczg to bind to PVC pvc-s2cv4 Jan 11 20:11:27.573: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-s2cv4] to have phase Bound Jan 11 20:11:27.663: INFO: PersistentVolumeClaim pvc-s2cv4 found but phase is Pending instead of Bound. Jan 11 20:11:29.752: INFO: PersistentVolumeClaim pvc-s2cv4 found but phase is Pending instead of Bound. Jan 11 20:11:31.842: INFO: PersistentVolumeClaim pvc-s2cv4 found but phase is Pending instead of Bound. Jan 11 20:11:33.932: INFO: PersistentVolumeClaim pvc-s2cv4 found but phase is Pending instead of Bound. Jan 11 20:11:36.021: INFO: PersistentVolumeClaim pvc-s2cv4 found but phase is Pending instead of Bound. Jan 11 20:11:38.111: INFO: PersistentVolumeClaim pvc-s2cv4 found and phase=Bound (10.53777402s) Jan 11 20:11:38.111: INFO: Waiting up to 3m0s for PersistentVolume local-pvpfczg to have phase Bound Jan 11 20:11:38.201: INFO: PersistentVolume local-pvpfczg found and phase=Bound (89.886133ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 20:11:40.829: INFO: pod "security-context-d050e67b-38a4-4d33-9d45-7d976382b5be" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 20:11:40.829: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7866 security-context-d050e67b-38a4-4d33-9d45-7d976382b5be -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 20:11:42.154: INFO: stderr: "" Jan 11 20:11:42.155: INFO: stdout: "" Jan 11 20:11:42.155: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jan 11 20:11:42.155: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7866 security-context-d050e67b-38a4-4d33-9d45-7d976382b5be -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:11:43.444: INFO: stderr: "" Jan 11 20:11:43.444: INFO: stdout: "test-file-content\n" Jan 11 20:11:43.444: INFO: podRWCmdExec out: "test-file-content\n" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-d050e67b-38a4-4d33-9d45-7d976382b5be in namespace persistent-local-volumes-test-7866 [AfterEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:11:43.535: INFO: Deleting PersistentVolumeClaim "pvc-s2cv4" Jan 11 20:11:43.625: INFO: Deleting PersistentVolume "local-pvpfczg" STEP: Removing the test directory Jan 11 20:11:43.715: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7866 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-097ad370-a886-4205-931c-cdb596aad7cd' Jan 11 20:11:45.180: INFO: stderr: "" Jan 11 20:11:45.181: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:45.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7866" for this suite. Jan 11 20:11:59.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:02.940: INFO: namespace persistent-local-volumes-test-7866 deletion completed in 17.577436939s • [SLOW TEST:39.995 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:12.378: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubelet-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-6779 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:15.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6779" for this suite. 
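The kubelet case above simply schedules a busybox command and asserts that its stdout is retrievable as container logs. A hand-run equivalent might look like this (pod name and image tag are illustrative, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.31
    command: ["sh", "-c", "echo hello from the container"]
EOF
# Once the container has run, the echoed line is available through the kubelet:
kubectl logs busybox-logs-demo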
Jan 11 20:12:05.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:09.137: INFO: namespace kubelet-test-6779 deletion completed in 53.569651741s • [SLOW TEST:56.759 seconds] [k8s.io] Kubelet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when scheduling a busybox command in a pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:49.271: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9731 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:91 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-289 STEP: Creating projection with secret that has name projected-secret-test-da04aec6-0a7a-470c-a491-b01ad6ac1d0c STEP: Creating a pod to test consume secrets Jan 11 20:11:50.738: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c53e52da-91a0-407f-8e71-881818ef6030" in namespace "projected-9731" to be "success or failure" Jan 11 20:11:50.828: INFO: Pod "pod-projected-secrets-c53e52da-91a0-407f-8e71-881818ef6030": Phase="Pending", Reason="", readiness=false. Elapsed: 90.33435ms Jan 11 20:11:52.919: INFO: Pod "pod-projected-secrets-c53e52da-91a0-407f-8e71-881818ef6030": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180721401s STEP: Saw pod success Jan 11 20:11:52.919: INFO: Pod "pod-projected-secrets-c53e52da-91a0-407f-8e71-881818ef6030" satisfied condition "success or failure" Jan 11 20:11:53.009: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-secrets-c53e52da-91a0-407f-8e71-881818ef6030 container projected-secret-volume-test: STEP: delete the pod Jan 11 20:11:53.204: INFO: Waiting for pod pod-projected-secrets-c53e52da-91a0-407f-8e71-881818ef6030 to disappear Jan 11 20:11:53.293: INFO: Pod pod-projected-secrets-c53e52da-91a0-407f-8e71-881818ef6030 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:11:53.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9731" for this suite. 
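The projected-secret case above verifies that a pod mounts the secret from its own namespace even when a secret of the same name exists in another namespace. The manifest shape is roughly the following (all names are illustrative, not taken from this run):

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF
# The container should print the value from the secret in its own namespace.
kubectl logs pod-projected-secrets-demo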
Jan 11 20:11:59.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:03.095: INFO: namespace projected-9731 deletion completed in 9.710417459s STEP: Destroying namespace "secret-namespace-289" for this suite. Jan 11 20:12:09.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:12.689: INFO: namespace secret-namespace-289 deletion completed in 9.593003944s • [SLOW TEST:23.418 seconds] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:91 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:02.942: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename tables STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-572 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return chunks of table results for list calls /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:78 STEP: creating a large number of resources [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:04.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-572" for this suite. 
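The table-conversion case above creates a large number of objects and lists them back in pages. The same server-side chunking can be observed from the command line; treat the exact media type below as a sketch of what kubectl negotiates, not as output from this run:

# kubectl pages list calls itself when a chunk size is set,
# following the `continue` token returned by the API server.
kubectl get pods --all-namespaces --chunk-size=50

# A single Table-rendered page can also be fetched through `kubectl proxy`:
kubectl proxy --port=8001 &
curl -s -H 'Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io' \
  'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=50'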
Jan 11 20:12:10.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:13.936: INFO: namespace tables-572 deletion completed in 9.569614833s • [SLOW TEST:10.994 seconds] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should return chunks of table results for list calls /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:78 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:32.308: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename job STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3861 STEP: Waiting for a default service account to be provisioned in namespace [It] should remove pods when job is deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:66 STEP: Creating a job STEP: Ensure pods equal to parallelism count are attached to the job STEP: Delete the job STEP: deleting Job.batch all-pods-removed in namespace job-3861, will wait for the garbage collector to delete the pods Jan 11 20:11:35.418: INFO: Deleting Job.batch all-pods-removed took: 91.793217ms Jan 11 20:11:35.518: INFO: Terminating Job.batch all-pods-removed pods took: 100.258521ms STEP: Ensure the pods associated with the job are also deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:08.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3861" for this suite. 
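The Job case above depends on cascading deletion: removing the Job lets the garbage collector delete the pods it owns. A rough manual equivalent (Job name and image are illustrative, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: all-pods-removed-demo
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: sleeper
        image: busybox:1.31
        command: ["sleep", "3600"]
EOF
# The Job controller labels its pods with job-name=<job>, so they are easy to list.
kubectl get pods -l job-name=all-pods-removed-demo
# Deleting the Job cascades to its pods via the garbage collector.
kubectl delete job all-pods-removed-demo
kubectl get pods -l job-name=all-pods-removed-demo   # eventually: No resources found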
Jan 11 20:12:16.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:20.238: INFO: namespace job-3861 deletion completed in 11.63932949s • [SLOW TEST:47.930 seconds] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should remove pods when job is deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:66 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:12.697: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-555 STEP: Waiting for a default service account to be provisioned in namespace [It] should allow exec of files on the volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:187 Jan 11 20:12:13.353: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:12:13.353: INFO: Creating resource for inline volume STEP: Creating pod exec-volume-test-emptydir-6sm9 STEP: Creating a pod to test exec-volume-test Jan 11 20:12:13.446: INFO: Waiting up to 5m0s for pod "exec-volume-test-emptydir-6sm9" in namespace "volume-555" to be "success or failure" Jan 11 20:12:13.536: INFO: Pod "exec-volume-test-emptydir-6sm9": Phase="Pending", Reason="", readiness=false. Elapsed: 89.749609ms Jan 11 20:12:15.626: INFO: Pod "exec-volume-test-emptydir-6sm9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179962015s STEP: Saw pod success Jan 11 20:12:15.626: INFO: Pod "exec-volume-test-emptydir-6sm9" satisfied condition "success or failure" Jan 11 20:12:15.716: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod exec-volume-test-emptydir-6sm9 container exec-container-emptydir-6sm9: STEP: delete the pod Jan 11 20:12:15.904: INFO: Waiting for pod exec-volume-test-emptydir-6sm9 to disappear Jan 11 20:12:15.994: INFO: Pod exec-volume-test-emptydir-6sm9 no longer exists STEP: Deleting pod exec-volume-test-emptydir-6sm9 Jan 11 20:12:15.994: INFO: Deleting pod "exec-volume-test-emptydir-6sm9" in namespace "volume-555" Jan 11 20:12:16.084: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:16.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-555" for this suite. 
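The "exec of files on the volume" case above amounts to copying an executable into the emptyDir mount and running it from there. A minimal sketch (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: exec-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: exec-container
    image: busybox:1.31
    command: ["sh", "-c", "cp /bin/echo /vol/echo && /vol/echo exec-works"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  volumes:
  - name: vol
    emptyDir: {}
EOF
# The second command only succeeds if the volume allows executing the copied file.
kubectl logs exec-volume-demo   # -> exec-works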
Jan 11 20:12:24.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:27.881: INFO: namespace volume-555 deletion completed in 11.704551971s • [SLOW TEST:15.184 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should allow exec of files on the volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:187 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:10:09.328: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5344 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [It] should fail due to wrong node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324 STEP: Initializing test volumes Jan 11 20:10:12.333: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5344 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8d9498c5-2320-4a18-be60-542dc7cb8560' Jan 11 20:10:13.621: INFO: stderr: "" Jan 11 20:10:13.621: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:10:13.621: INFO: Creating a PV followed by a PVC Jan 11 20:10:13.802: INFO: Waiting for PV local-pv47lpq to bind to PVC pvc-plcsn Jan 11 20:10:13.802: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-plcsn] to have phase Bound Jan 11 20:10:13.892: INFO: PersistentVolumeClaim pvc-plcsn found and phase=Bound (90.065825ms) Jan 11 20:10:13.892: INFO: Waiting up to 3m0s for PersistentVolume local-pv47lpq to have phase Bound Jan 11 20:10:13.982: INFO: PersistentVolume local-pv47lpq found and phase=Bound (89.753657ms) STEP: Cleaning up PVC and PV Jan 11 20:12:14.523: INFO: Deleting PersistentVolumeClaim "pvc-plcsn" Jan 11 20:12:14.613: INFO: Deleting PersistentVolume "local-pv47lpq" STEP: Removing the test directory Jan 11 20:12:14.704: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5344 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8d9498c5-2320-4a18-be60-542dc7cb8560' Jan 11 20:12:15.970: INFO: stderr: "" Jan 11 20:12:15.970: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:15.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5344" for this suite. Jan 11 20:12:28.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:31.777: INFO: namespace persistent-local-volumes-test-5344 deletion completed in 15.711333573s • [SLOW TEST:142.449 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Local volume that cannot be mounted [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304 should fail due to wrong node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:09.153: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-786 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath directory is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:223 Jan 11 20:12:09.790: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 20:12:09.972: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-786" in namespace "provisioning-786" to be "success or failure" Jan 11 20:12:10.061: INFO: Pod "hostpath-symlink-prep-provisioning-786": Phase="Pending", Reason="", readiness=false. Elapsed: 89.04258ms Jan 11 20:12:12.150: INFO: Pod "hostpath-symlink-prep-provisioning-786": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.178117445s STEP: Saw pod success Jan 11 20:12:12.151: INFO: Pod "hostpath-symlink-prep-provisioning-786" satisfied condition "success or failure" Jan 11 20:12:12.151: INFO: Deleting pod "hostpath-symlink-prep-provisioning-786" in namespace "provisioning-786" Jan 11 20:12:12.243: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-786" to be fully deleted Jan 11 20:12:12.331: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-n4vp STEP: Checking for subpath error in container status Jan 11 20:12:16.599: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-n4vp" in namespace "provisioning-786" Jan 11 20:12:16.689: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpathsymlink-n4vp" to be fully deleted STEP: Deleting pod Jan 11 20:12:24.868: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-n4vp" in namespace "provisioning-786" Jan 11 20:12:25.048: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-786" in namespace "provisioning-786" to be "success or failure" Jan 11 20:12:25.137: INFO: Pod "hostpath-symlink-prep-provisioning-786": Phase="Pending", Reason="", readiness=false. Elapsed: 89.133579ms Jan 11 20:12:27.227: INFO: Pod "hostpath-symlink-prep-provisioning-786": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178948299s STEP: Saw pod success Jan 11 20:12:27.227: INFO: Pod "hostpath-symlink-prep-provisioning-786" satisfied condition "success or failure" Jan 11 20:12:27.227: INFO: Deleting pod "hostpath-symlink-prep-provisioning-786" in namespace "provisioning-786" Jan 11 20:12:27.319: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-786" to be fully deleted Jan 11 20:12:27.408: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:27.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-786" for this suite. 
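The subPath suites above exercise the negative case: a subPath that, once resolved on the host (here via a symlinked hostPath), points outside the backing volume must be refused by the kubelet and surface as a container status error rather than a running container. For orientation, the ordinary positive use of subPath, mounting one subdirectory of a shared volume, looks like this sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.31
    command: ["sh", "-c", "echo hello > /data/hello.txt && ls -l /data"]
    volumeMounts:
    - name: shared
      mountPath: /data
      subPath: writer-dir      # only this subdirectory of the volume is visible
  volumes:
  - name: shared
    emptyDir: {}
EOF
kubectl logs subpath-demo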
Jan 11 20:12:33.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:37.158: INFO: namespace provisioning-786 deletion completed in 9.655237867s • [SLOW TEST:28.005 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if subpath directory is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:223 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:37.191: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3053 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 11 20:12:37.944: INFO: Waiting up to 5m0s for pod "pod-da230169-788d-471c-a6c9-18581fb33329" in namespace "emptydir-3053" to be "success or failure" Jan 11 20:12:38.033: INFO: Pod "pod-da230169-788d-471c-a6c9-18581fb33329": Phase="Pending", Reason="", readiness=false. Elapsed: 89.158588ms Jan 11 20:12:40.123: INFO: Pod "pod-da230169-788d-471c-a6c9-18581fb33329": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179070076s STEP: Saw pod success Jan 11 20:12:40.123: INFO: Pod "pod-da230169-788d-471c-a6c9-18581fb33329" satisfied condition "success or failure" Jan 11 20:12:40.212: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-da230169-788d-471c-a6c9-18581fb33329 container test-container: STEP: delete the pod Jan 11 20:12:40.401: INFO: Waiting for pod pod-da230169-788d-471c-a6c9-18581fb33329 to disappear Jan 11 20:12:40.490: INFO: Pod pod-da230169-788d-471c-a6c9-18581fb33329 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:40.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3053" for this suite. 
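The emptyDir permission matrix above writes a file with a given mode on the default medium and checks both the mode and the content. By hand, roughly (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && cat /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
# Expected log: a -rw-r--r-- entry owned by root, followed by "content".
kubectl logs emptydir-mode-demo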
Jan 11 20:12:46.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:50.239: INFO: namespace emptydir-3053 deletion completed in 9.658566415s • [SLOW TEST:13.048 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:27.888: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename port-forwarding STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-9485 STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:446 STEP: Creating the target pod STEP: Running 'kubectl port-forward' Jan 11 20:12:34.801: INFO: starting port-forward command and streaming output Jan 11 20:12:34.801: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config port-forward --namespace=port-forwarding-9485 pfpod :80' Jan 11 20:12:34.802: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Sending the expected data to the local port STEP: Closing the write half of the client's connection STEP: Reading data from the local port STEP: Waiting for the target pod to stop running Jan 11 20:12:37.180: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-9485" to be "container terminated" Jan 11 20:12:37.271: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 90.06524ms Jan 11 20:12:39.361: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.180236204s Jan 11 20:12:39.361: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs STEP: Closing the connection to the local port [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:39.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-9485" for this suite. 
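The port-forwarding case above drives `kubectl port-forward` programmatically: it dials the forwarded local port, sends data, closes the write half of the connection and reads the response back. Done by hand it is roughly the following (pod name and namespace reused from the log; the local port is whatever kubectl chooses):

# Forward a random local port to port 80 of the pod; kubectl prints the chosen port.
kubectl --namespace=port-forwarding-9485 port-forward pod/pfpod :80
# In a second shell, send data and read the reply, e.g. with netcat:
printf 'abc' | nc 127.0.0.1 <local-port>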
Jan 11 20:12:55.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:12:59.402: INFO: namespace port-forwarding-9485 deletion completed in 19.849227247s • [SLOW TEST:31.515 seconds] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on 0.0.0.0 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:441 that expects a client request /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:442 should support a client that connects, sends DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:446 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:31.816: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9458 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:371 STEP: creating the pod from apiVersion: v1 kind: Pod metadata: name: httpd labels: name: httpd spec: containers: - name: httpd image: docker.io/library/httpd:2.4.38-alpine ports: - containerPort: 80 readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 5 timeoutSeconds: 5 Jan 11 20:12:32.457: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-9458' Jan 11 20:12:33.416: INFO: stderr: "" Jan 11 20:12:33.416: INFO: stdout: "pod/httpd created\n" Jan 11 20:12:33.416: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Jan 11 20:12:33.416: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-9458" to be "running and ready" Jan 11 20:12:33.506: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 90.005754ms Jan 11 20:12:35.596: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.180282073s Jan 11 20:12:37.687: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.270769985s Jan 11 20:12:39.777: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.36136817s Jan 11 20:12:41.868: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.451874145s Jan 11 20:12:43.958: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.541996717s Jan 11 20:12:46.048: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.632377174s Jan 11 20:12:48.138: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.722304476s Jan 11 20:12:50.228: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 16.812498843s Jan 11 20:12:50.228: INFO: Pod "httpd" satisfied condition "running and ready" Jan 11 20:12:50.229: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] should support exec through an HTTP proxy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:429 STEP: Starting goproxy STEP: Running kubectl via an HTTP proxy using https_proxy Jan 11 20:12:50.229: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9458 exec httpd echo running in container' Jan 11 20:12:51.566: INFO: stderr: "" Jan 11 20:12:51.566: INFO: stdout: "running in container\n" STEP: Running kubectl via an HTTP proxy using HTTPS_PROXY Jan 11 20:12:51.566: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-9458 exec httpd echo running in container' Jan 11 20:12:52.890: INFO: stderr: "" Jan 11 20:12:52.890: INFO: stdout: "running in container\n" [AfterEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:377 STEP: using delete to clean up resources Jan 11 20:12:52.890: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-9458' Jan 11 20:12:53.459: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 20:12:53.460: INFO: stdout: "pod \"httpd\" force deleted\n" Jan 11 20:12:53.460: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=httpd --no-headers --namespace=kubectl-9458' Jan 11 20:12:54.078: INFO: stderr: "No resources found in kubectl-9458 namespace.\n" Jan 11 20:12:54.078: INFO: stdout: "" Jan 11 20:12:54.078: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=httpd --namespace=kubectl-9458 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 20:12:54.595: INFO: stderr: "" Jan 11 20:12:54.595: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:54.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9458" for this suite. 
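[editor's note] The "Simple pod" fixture above is created from an inline manifest; below it is reconstructed from the log with its indentation restored, followed by a sketch of the proxy check. The proxy address is an assumption — the test runs its own in-process goproxy — and what the spec actually verifies is that kubectl honours the standard https_proxy/HTTPS_PROXY environment variables:

# Pod manifest used by the Simple pod tests (reformatted from the log).
kubectl --namespace=kubectl-9458 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    name: httpd
spec:
  containers:
  - name: httpd
    image: docker.io/library/httpd:2.4.38-alpine
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      timeoutSeconds: 5
EOF
# Run an exec through an HTTP proxy; both env-var spellings are exercised
# in the log (proxy address hypothetical).
HTTPS_PROXY=http://127.0.0.1:8080 kubectl --namespace=kubectl-9458 exec httpd -- echo running in container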
Jan 11 20:13:02.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:06.287: INFO: namespace kubectl-9458 deletion completed in 11.598862818s • [SLOW TEST:34.471 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369 should support exec through an HTTP proxy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:429 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:20.249: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8238 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a custom resource. /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:560 STEP: Creating a Custom Resource Definition Jan 11 20:12:20.889: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a custom resource STEP: Ensuring resource quota status captures custom resource creation STEP: Creating a second custom resource STEP: Deleting a custom resource STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:57.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8238" for this suite. Jan 11 20:13:03.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:07.032: INFO: namespace resourcequota-8238 deletion completed in 9.596453446s • [SLOW TEST:46.783 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a custom resource. 
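[editor's note] The ResourceQuota spec above counts custom resources through an object-count quota. A minimal sketch under assumed names (the test generates its own CRD; "foos.example.com" and the quota name are hypothetical), reusing the namespace from the log:

# Cap objects of a custom resource at 1, mirroring the quota the test creates.
kubectl --namespace=resourcequota-8238 create -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-for-custom-resource
spec:
  hard:
    count/foos.example.com: "1"
EOF
# Creating a first CR is counted in the quota status, a second is expected to
# be rejected once the hard limit is reached, and deleting the CR releases the
# usage -- the "captures custom resource creation" / "released usage" steps above.
kubectl --namespace=resourcequota-8238 get resourcequota quota-for-custom-resource -o yaml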
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:560 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:09:24.269: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename statefulset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-1265 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-1265 [It] should provide basic identity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:98 STEP: Creating statefulset ss in namespace statefulset-1265 Jan 11 20:09:25.086: INFO: Default storage class: "default" STEP: Saturating stateful set ss Jan 11 20:09:25.176: INFO: Waiting for stateful pod at index 0 to enter Running Jan 11 20:09:25.265: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:09:35.355: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:09:45.355: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 20:09:45.355: INFO: Resuming stateful pod at index 0 Jan 11 20:09:45.445: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Jan 11 20:09:46.836: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Jan 11 20:09:46.836: INFO: stdout: "" Jan 11 20:09:46.836: INFO: Resumed pod ss-0 Jan 11 20:09:46.836: INFO: Waiting for stateful pod at index 1 to enter Running Jan 11 20:09:46.926: INFO: Found 1 stateful pods, waiting for 2 Jan 11 20:09:57.016: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:09:57.016: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:10:07.016: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:10:07.016: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:10:17.016: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:10:17.016: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 11 20:10:17.016: INFO: Resuming stateful pod at index 1 Jan 11 20:10:17.106: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Jan 11 20:10:18.369: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Jan 11 20:10:18.369: INFO: stdout: "" Jan 11 20:10:18.369: INFO: Resumed pod ss-1 Jan 11 20:10:18.369: INFO: Waiting for stateful pod at index 2 to enter Running Jan 11 20:10:18.459: INFO: Found 2 stateful pods, waiting for 3 Jan 11 20:10:28.549: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:10:28.549: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:10:28.549: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:10:38.549: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:10:38.549: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:10:38.549: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:10:48.549: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:10:48.549: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:10:48.549: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 11 20:10:48.550: INFO: Resuming stateful pod at index 2 Jan 11 20:10:48.639: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Jan 11 20:10:49.925: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Jan 11 20:10:49.925: INFO: stdout: "" Jan 11 20:10:49.925: INFO: Resumed pod ss-2 STEP: Verifying statefulset mounted data directory is usable Jan 11 20:10:50.015: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c ls -idlh /data' Jan 11 20:10:51.269: INFO: stderr: "+ ls -idlh /data\n" Jan 11 20:10:51.269: INFO: stdout: " 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:09 /data\n" Jan 11 20:10:51.269: INFO: stdout of ls -idlh /data on ss-0: 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:09 /data Jan 11 20:10:51.269: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c ls -idlh /data' Jan 11 20:10:52.545: INFO: stderr: "+ ls -idlh /data\n" Jan 11 20:10:52.545: INFO: stdout: " 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:10 /data\n" Jan 11 20:10:52.545: INFO: stdout of ls -idlh /data on ss-1: 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:10 /data Jan 11 20:10:52.545: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c ls -idlh /data' Jan 11 20:10:53.805: INFO: stderr: "+ ls -idlh /data\n" Jan 11 20:10:53.805: INFO: stdout: " 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:10 /data\n" Jan 11 20:10:53.805: INFO: stdout of ls -idlh /data on ss-2: 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:10 /data Jan 11 20:10:53.895: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c find /data' Jan 11 20:10:55.194: INFO: stderr: "+ find /data\n" Jan 11 20:10:55.194: INFO: stdout: "/data\n/data/lost+found\n/data/statefulset-continue\n" Jan 11 20:10:55.194: INFO: stdout of find /data on ss-0: /data /data/lost+found /data/statefulset-continue Jan 11 20:10:55.194: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c find /data' Jan 11 20:10:56.500: INFO: stderr: "+ find /data\n" Jan 11 20:10:56.500: INFO: stdout: "/data\n/data/statefulset-continue\n/data/lost+found\n" Jan 11 20:10:56.500: INFO: stdout of find /data on ss-1: /data /data/statefulset-continue /data/lost+found Jan 11 20:10:56.501: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c find /data' Jan 11 20:10:57.898: INFO: stderr: "+ find /data\n" Jan 11 20:10:57.898: INFO: stdout: "/data\n/data/statefulset-continue\n/data/lost+found\n" Jan 11 20:10:57.898: INFO: stdout of find /data on ss-2: /data /data/statefulset-continue /data/lost+found Jan 11 20:10:57.988: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c touch /data/1578773449925860697' Jan 11 20:10:59.312: INFO: stderr: "+ touch /data/1578773449925860697\n" Jan 11 20:10:59.312: INFO: stdout: "" Jan 11 20:10:59.312: INFO: stdout of touch /data/1578773449925860697 on ss-0: Jan 11 20:10:59.312: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c touch /data/1578773449925860697' Jan 11 20:11:00.686: INFO: stderr: "+ touch /data/1578773449925860697\n" Jan 11 20:11:00.686: INFO: stdout: "" Jan 11 20:11:00.686: INFO: stdout of touch /data/1578773449925860697 on ss-1: Jan 11 20:11:00.686: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c touch /data/1578773449925860697' Jan 11 20:11:02.034: INFO: stderr: "+ touch /data/1578773449925860697\n" Jan 11 20:11:02.034: INFO: stdout: "" Jan 11 20:11:02.034: INFO: stdout of touch /data/1578773449925860697 on ss-2: STEP: Verifying statefulset provides a stable hostname for each pod Jan 11 20:11:02.125: INFO: 
Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c printf $(hostname)' Jan 11 20:11:03.550: INFO: stderr: "+ hostname\n+ printf ss-0\n" Jan 11 20:11:03.550: INFO: stdout: "ss-0" Jan 11 20:11:03.550: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c printf $(hostname)' Jan 11 20:11:04.902: INFO: stderr: "+ hostname\n+ printf ss-1\n" Jan 11 20:11:04.902: INFO: stdout: "ss-1" Jan 11 20:11:04.902: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c printf $(hostname)' Jan 11 20:11:06.213: INFO: stderr: "+ hostname\n+ printf ss-2\n" Jan 11 20:11:06.213: INFO: stdout: "ss-2" STEP: Verifying statefulset set proper service name Jan 11 20:11:06.213: INFO: Checking if statefulset spec.serviceName is test STEP: Running echo $(hostname) | dd of=/data/hostname conv=fsync in all stateful pods Jan 11 20:11:06.303: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c echo $(hostname) | dd of=/data/hostname conv=fsync' Jan 11 20:11:07.616: INFO: stderr: "+ dd 'of=/data/hostname' 'conv=fsync'\n+ hostname\n+ echo ss-0\n0+1 records in\n0+1 records out\n" Jan 11 20:11:07.616: INFO: stdout: "" Jan 11 20:11:07.616: INFO: stdout of echo $(hostname) | dd of=/data/hostname conv=fsync on ss-0: Jan 11 20:11:07.616: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c echo $(hostname) | dd of=/data/hostname conv=fsync' Jan 11 20:11:08.889: INFO: stderr: "+ dd 'of=/data/hostname' 'conv=fsync'\n+ hostname\n+ echo ss-1\n0+1 records in\n0+1 records out\n" Jan 11 20:11:08.889: INFO: stdout: "" Jan 11 20:11:08.889: INFO: stdout of echo $(hostname) | dd of=/data/hostname conv=fsync on ss-1: Jan 11 20:11:08.889: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c echo $(hostname) | dd of=/data/hostname conv=fsync' Jan 11 20:11:10.152: INFO: stderr: "+ dd 'of=/data/hostname' 'conv=fsync'\n+ hostname\n+ echo ss-2\n0+1 records in\n0+1 records out\n" Jan 11 20:11:10.152: INFO: stdout: "" Jan 11 20:11:10.153: INFO: stdout of echo $(hostname) | dd of=/data/hostname conv=fsync on ss-2: STEP: Restarting statefulset ss Jan 11 20:11:10.153: INFO: Scaling statefulset ss to 0 Jan 11 20:11:30.511: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 20:11:30.869: INFO: Found 1 stateful pods, waiting for 3 Jan 11 20:11:40.959: INFO: Found 1 stateful pods, waiting for 3 Jan 11 20:11:50.959: INFO: Found 2 stateful pods, waiting for 3 Jan 11 20:12:00.960: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently 
Running - Ready=true Jan 11 20:12:00.960: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:12:00.960: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 11 20:12:10.959: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:12:10.959: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:12:10.959: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying statefulset mounted data directory is usable Jan 11 20:12:11.050: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c ls -idlh /data' Jan 11 20:12:12.323: INFO: stderr: "+ ls -idlh /data\n" Jan 11 20:12:12.323: INFO: stdout: " 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:11 /data\n" Jan 11 20:12:12.323: INFO: stdout of ls -idlh /data on ss-0: 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:11 /data Jan 11 20:12:12.324: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c ls -idlh /data' Jan 11 20:12:13.625: INFO: stderr: "+ ls -idlh /data\n" Jan 11 20:12:13.625: INFO: stdout: " 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:11 /data\n" Jan 11 20:12:13.625: INFO: stdout of ls -idlh /data on ss-1: 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:11 /data Jan 11 20:12:13.625: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c ls -idlh /data' Jan 11 20:12:14.915: INFO: stderr: "+ ls -idlh /data\n" Jan 11 20:12:14.915: INFO: stdout: " 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:11 /data\n" Jan 11 20:12:14.915: INFO: stdout of ls -idlh /data on ss-2: 2 drwxr-xr-x 3 root root 4.0K Jan 11 20:11 /data Jan 11 20:12:15.006: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c find /data' Jan 11 20:12:16.329: INFO: stderr: "+ find /data\n" Jan 11 20:12:16.329: INFO: stdout: "/data\n/data/hostname\n/data/lost+found\n/data/statefulset-continue\n/data/1578773449925860697\n" Jan 11 20:12:16.329: INFO: stdout of find /data on ss-0: /data /data/hostname /data/lost+found /data/statefulset-continue /data/1578773449925860697 Jan 11 20:12:16.329: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c find /data' Jan 11 20:12:17.906: INFO: stderr: "+ find /data\n" Jan 11 20:12:17.906: INFO: stdout: "/data\n/data/statefulset-continue\n/data/hostname\n/data/1578773449925860697\n/data/lost+found\n" Jan 11 20:12:17.906: INFO: stdout of find /data on ss-1: /data /data/statefulset-continue /data/hostname /data/1578773449925860697 /data/lost+found Jan 11 20:12:17.906: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c find /data' Jan 11 20:12:19.177: INFO: stderr: "+ find /data\n" Jan 11 20:12:19.177: INFO: stdout: "/data\n/data/statefulset-continue\n/data/lost+found\n/data/hostname\n/data/1578773449925860697\n" Jan 11 20:12:19.177: INFO: stdout of find /data on ss-2: /data /data/statefulset-continue /data/lost+found /data/hostname /data/1578773449925860697 Jan 11 20:12:19.267: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c touch /data/1578773530959849012' Jan 11 20:12:20.613: INFO: stderr: "+ touch /data/1578773530959849012\n" Jan 11 20:12:20.613: INFO: stdout: "" Jan 11 20:12:20.613: INFO: stdout of touch /data/1578773530959849012 on ss-0: Jan 11 20:12:20.613: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c touch /data/1578773530959849012' Jan 11 20:12:21.885: INFO: stderr: "+ touch /data/1578773530959849012\n" Jan 11 20:12:21.885: INFO: stdout: "" Jan 11 20:12:21.885: INFO: stdout of touch /data/1578773530959849012 on ss-1: Jan 11 20:12:21.885: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c touch /data/1578773530959849012' Jan 11 20:12:23.158: INFO: stderr: "+ touch /data/1578773530959849012\n" Jan 11 20:12:23.158: INFO: stdout: "" Jan 11 20:12:23.158: INFO: stdout of touch /data/1578773530959849012 on ss-2: STEP: Running if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi in all stateful pods Jan 11 20:12:23.249: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-0 -- /bin/sh -x -c if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi' Jan 11 20:12:24.497: INFO: stderr: "+ cat /data/hostname\n+ hostname\n+ '[' ss-0 '=' ss-0 ]\n+ exit 0\n" Jan 11 20:12:24.497: INFO: stdout: "" Jan 11 20:12:24.497: INFO: stdout of if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi on ss-0: Jan 11 20:12:24.497: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-1 -- /bin/sh -x -c if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi' Jan 11 20:12:25.777: INFO: stderr: "+ cat /data/hostname\n+ hostname\n+ '[' ss-1 '=' ss-1 ]\n+ exit 0\n" Jan 11 20:12:25.777: INFO: stdout: "" Jan 11 20:12:25.777: INFO: stdout of if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi on ss-1: Jan 11 20:12:25.777: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-1265 ss-2 -- /bin/sh -x -c if [ 
"$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi' Jan 11 20:12:27.295: INFO: stderr: "+ cat /data/hostname\n+ hostname\n+ '[' ss-2 '=' ss-2 ]\n+ exit 0\n" Jan 11 20:12:27.295: INFO: stdout: "" Jan 11 20:12:27.295: INFO: stdout of if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi on ss-2: [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Jan 11 20:12:27.295: INFO: Deleting all statefulset in ns statefulset-1265 Jan 11 20:12:27.384: INFO: Scaling statefulset ss to 0 Jan 11 20:12:47.742: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 20:12:47.831: INFO: Deleting statefulset ss Jan 11 20:12:48.011: INFO: Deleting pvc: datadir-ss-0 with volume pvc-88bad2b5-c157-4cf6-a481-9b4d88a11947 Jan 11 20:12:48.101: INFO: Deleting pvc: datadir-ss-1 with volume pvc-98c0a8d6-acbc-4c4f-8e37-7779700f9c00 Jan 11 20:12:48.192: INFO: Deleting pvc: datadir-ss-2 with volume pvc-6c2b10c2-6597-4831-abbe-ef708437de55 Jan 11 20:12:48.372: INFO: Still waiting for pvs of statefulset to disappear: pvc-6c2b10c2-6597-4831-abbe-ef708437de55: {Phase:Bound Message: Reason:} pvc-88bad2b5-c157-4cf6-a481-9b4d88a11947: {Phase:Failed Message:Error deleting EBS volume "vol-0b3a38d6f8487e32f" since volume is currently attached to "i-0a8c404292a3c92e9" Reason:} pvc-98c0a8d6-acbc-4c4f-8e37-7779700f9c00: {Phase:Released Message: Reason:} [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:58.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1265" for this suite. 
Jan 11 20:13:06.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:10.139: INFO: namespace statefulset-1265 deletion completed in 11.587171636s • [SLOW TEST:225.870 seconds] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should provide basic identity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:98 ------------------------------ SSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:11:57.232: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename init-container STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-1229 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating the pod Jan 11 20:11:58.361: INFO: PodSpec: initContainers in spec.initContainers Jan 11 20:12:37.132: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c6e7e8eb-2a55-4ce5-a439-ff114e0bce29", GenerateName:"", Namespace:"init-container-1229", SelfLink:"/api/v1/namespaces/init-container-1229/pods/pod-init-c6e7e8eb-2a55-4ce5-a439-ff114e0bce29", UID:"8ce83d74-d041-4291-9c40-7e138d8e47b1", ResourceVersion:"72615", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714370318, loc:(*time.Location)(0x84bfb00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"361335973"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"100.64.1.184/32", "kubernetes.io/psp":"e2e-test-privileged-psp"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-sh49x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002ff2200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sh49x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sh49x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sh49x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0037b0838), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-10-250-27-25.ec2.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0026ebce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037b08b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037b08d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0037b08d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0037b08dc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370318, loc:(*time.Location)(0x84bfb00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370318, loc:(*time.Location)(0x84bfb00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370318, loc:(*time.Location)(0x84bfb00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370318, loc:(*time.Location)(0x84bfb00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.250.27.25", PodIP:"100.64.1.184", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.64.1.184"}}, StartTime:(*v1.Time)(0xc0037e19e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003112fc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003113340)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://fa65cabeecbdddc56b01a189efa9613afe5030a649595ce4df8ee399832023da", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0037e1a20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0037e1a00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0037b095f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:37.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1229" for this suite. 
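[editor's note] The pod dumped above is built by the framework; an equivalent manifest, reduced to the parts that matter for this spec (images and commands taken from the dump, resource requests omitted, name assumed), is sketched below. Because init1 always exits non-zero and restartPolicy is Always, the kubelet keeps restarting init1, init2 never runs, and run1 must never be started:

kubectl --namespace=init-container-1229 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-failing            # name is an assumption; the test generates one
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]         # fails every time, so initialization never completes
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]          # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1     # must stay Waiting; the spec asserts it never starts
EOF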
Jan 11 20:13:07.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:10.810: INFO: namespace init-container-1229 deletion completed in 33.585166541s • [SLOW TEST:73.578 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:50.242: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-1224 STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:12:50.878: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 11 20:12:55.411: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1224 create -f -' Jan 11 20:12:57.161: INFO: stderr: "" Jan 11 20:12:57.161: INFO: stdout: "e2e-test-crd-publish-openapi-3920-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 11 20:12:57.161: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1224 delete e2e-test-crd-publish-openapi-3920-crds test-cr' Jan 11 20:12:57.712: INFO: stderr: "" Jan 11 20:12:57.712: INFO: stdout: "e2e-test-crd-publish-openapi-3920-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 11 20:12:57.712: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1224 apply -f -' Jan 11 20:12:58.853: INFO: stderr: "" Jan 11 20:12:58.853: INFO: stdout: "e2e-test-crd-publish-openapi-3920-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 11 20:12:58.853: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-1224 delete e2e-test-crd-publish-openapi-3920-crds test-cr' Jan 11 20:12:59.409: INFO: stderr: "" Jan 11 20:12:59.409: INFO: stdout: "e2e-test-crd-publish-openapi-3920-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: 
kubectl explain works to explain CR without validation schema Jan 11 20:12:59.409: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config explain e2e-test-crd-publish-openapi-3920-crds' Jan 11 20:13:00.312: INFO: stderr: "" Jan 11 20:13:00.312: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3920-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:05.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1224" for this suite. Jan 11 20:13:13.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:17.141: INFO: namespace crd-publish-openapi-1224 deletion completed in 11.613588783s • [SLOW TEST:26.899 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:85 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:13.951: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume-expand STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-expand-1240 STEP: Waiting for a default service account to be provisioned in namespace [It] should not allow expansion of pvcs without AllowVolumeExpansion property /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:139 STEP: deploying csi-hostpath driver Jan 11 20:12:15.234: INFO: creating *v1.ServiceAccount: volume-expand-1240/csi-attacher Jan 11 20:12:15.324: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-expand-1240 Jan 11 20:12:15.324: INFO: Define cluster role external-attacher-runner-volume-expand-1240 Jan 11 20:12:15.415: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-1240 Jan 11 20:12:15.504: INFO: creating *v1.Role: volume-expand-1240/external-attacher-cfg-volume-expand-1240 Jan 11 20:12:15.594: INFO: creating *v1.RoleBinding: volume-expand-1240/csi-attacher-role-cfg Jan 11 20:12:15.684: INFO: 
creating *v1.ServiceAccount: volume-expand-1240/csi-provisioner Jan 11 20:12:15.774: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-expand-1240 Jan 11 20:12:15.774: INFO: Define cluster role external-provisioner-runner-volume-expand-1240 Jan 11 20:12:15.864: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-1240 Jan 11 20:12:15.954: INFO: creating *v1.Role: volume-expand-1240/external-provisioner-cfg-volume-expand-1240 Jan 11 20:12:16.043: INFO: creating *v1.RoleBinding: volume-expand-1240/csi-provisioner-role-cfg Jan 11 20:12:16.133: INFO: creating *v1.ServiceAccount: volume-expand-1240/csi-snapshotter Jan 11 20:12:16.224: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volume-expand-1240 Jan 11 20:12:16.224: INFO: Define cluster role external-snapshotter-runner-volume-expand-1240 Jan 11 20:12:16.313: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-1240 Jan 11 20:12:16.403: INFO: creating *v1.Role: volume-expand-1240/external-snapshotter-leaderelection-volume-expand-1240 Jan 11 20:12:16.493: INFO: creating *v1.RoleBinding: volume-expand-1240/external-snapshotter-leaderelection Jan 11 20:12:16.583: INFO: creating *v1.ServiceAccount: volume-expand-1240/csi-resizer Jan 11 20:12:16.672: INFO: creating *v1.ClusterRole: external-resizer-runner-volume-expand-1240 Jan 11 20:12:16.672: INFO: Define cluster role external-resizer-runner-volume-expand-1240 Jan 11 20:12:16.762: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-1240 Jan 11 20:12:16.852: INFO: creating *v1.Role: volume-expand-1240/external-resizer-cfg-volume-expand-1240 Jan 11 20:12:16.941: INFO: creating *v1.RoleBinding: volume-expand-1240/csi-resizer-role-cfg Jan 11 20:12:17.031: INFO: creating *v1.Service: volume-expand-1240/csi-hostpath-attacher Jan 11 20:12:17.124: INFO: creating *v1.StatefulSet: volume-expand-1240/csi-hostpath-attacher Jan 11 20:12:17.215: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volume-expand-1240 Jan 11 20:12:17.304: INFO: creating *v1.Service: volume-expand-1240/csi-hostpathplugin Jan 11 20:12:17.397: INFO: creating *v1.StatefulSet: volume-expand-1240/csi-hostpathplugin Jan 11 20:12:17.487: INFO: creating *v1.Service: volume-expand-1240/csi-hostpath-provisioner Jan 11 20:12:17.580: INFO: creating *v1.StatefulSet: volume-expand-1240/csi-hostpath-provisioner Jan 11 20:12:17.671: INFO: creating *v1.Service: volume-expand-1240/csi-hostpath-resizer Jan 11 20:12:17.764: INFO: creating *v1.StatefulSet: volume-expand-1240/csi-hostpath-resizer Jan 11 20:12:17.854: INFO: creating *v1.Service: volume-expand-1240/csi-snapshotter Jan 11 20:12:17.947: INFO: creating *v1.StatefulSet: volume-expand-1240/csi-snapshotter Jan 11 20:12:18.037: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-1240 Jan 11 20:12:18.126: INFO: Test running for native CSI Driver, not checking metrics Jan 11 20:12:18.126: INFO: Creating resource for dynamic PV STEP: creating a StorageClass volume-expand-1240-csi-hostpath-volume-expand-1240-sccr5b8 STEP: creating a claim Jan 11 20:12:18.216: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:12:18.305: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathcqnx8] to have phase Bound Jan 11 20:12:18.394: INFO: PersistentVolumeClaim csi-hostpathcqnx8 found but phase is Pending instead of Bound. 
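[editor's note] The claim created above is backed by a generated StorageClass that leaves allowVolumeExpansion unset, so the size bumps attempted next are expected to be rejected by the API server's PVC-resize admission check. A minimal sketch of one rejected update, with the claim name and namespace taken from the log and the new size matching the 6442450944-byte value reported below:

# Try to grow the claim from 5Gi to 6Gi; with allowVolumeExpansion unset on the
# StorageClass this is denied with the "only dynamically provisioned pvc can be
# resized and the storageclass ... must support resize" error repeated in the log.
kubectl --namespace=volume-expand-1240 patch pvc csi-hostpathcqnx8 \
  --type=merge -p '{"spec":{"resources":{"requests":{"storage":"6Gi"}}}}'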
Jan 11 20:12:20.484: INFO: PersistentVolumeClaim csi-hostpathcqnx8 found and phase=Bound (2.178564028s) STEP: Expanding non-expandable pvc Jan 11 20:12:20.663: INFO: currentPvcSize {{5368709120 0} {} 5Gi BinarySI}, newSize {{6442450944 0} {} BinarySI} Jan 11 20:12:20.842: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:23.022: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:25.022: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:27.021: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:29.021: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:31.022: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:33.022: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:35.021: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:37.022: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:39.022: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:41.021: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:43.023: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:45.021: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:47.022: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass 
that provisions the pvc must support resize Jan 11 20:12:49.023: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:51.022: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Jan 11 20:12:51.201: INFO: Error updating pvc csi-hostpathcqnx8 with persistentvolumeclaims "csi-hostpathcqnx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize STEP: Deleting pvc Jan 11 20:12:51.201: INFO: Deleting PersistentVolumeClaim "csi-hostpathcqnx8" Jan 11 20:12:51.291: INFO: Waiting up to 5m0s for PersistentVolume pvc-0189d44a-f620-4ea2-983e-c52ff4c2bba2 to get deleted Jan 11 20:12:51.381: INFO: PersistentVolume pvc-0189d44a-f620-4ea2-983e-c52ff4c2bba2 found and phase=Bound (89.994228ms) Jan 11 20:12:56.471: INFO: PersistentVolume pvc-0189d44a-f620-4ea2-983e-c52ff4c2bba2 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 20:12:56.562: INFO: deleting *v1.ServiceAccount: volume-expand-1240/csi-attacher Jan 11 20:12:56.652: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-1240 Jan 11 20:12:56.743: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-1240 Jan 11 20:12:56.834: INFO: deleting *v1.Role: volume-expand-1240/external-attacher-cfg-volume-expand-1240 Jan 11 20:12:56.924: INFO: deleting *v1.RoleBinding: volume-expand-1240/csi-attacher-role-cfg Jan 11 20:12:57.014: INFO: deleting *v1.ServiceAccount: volume-expand-1240/csi-provisioner Jan 11 20:12:57.105: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-expand-1240 Jan 11 20:12:57.196: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-1240 Jan 11 20:12:57.288: INFO: deleting *v1.Role: volume-expand-1240/external-provisioner-cfg-volume-expand-1240 Jan 11 20:12:57.378: INFO: deleting *v1.RoleBinding: volume-expand-1240/csi-provisioner-role-cfg Jan 11 20:12:57.469: INFO: deleting *v1.ServiceAccount: volume-expand-1240/csi-snapshotter Jan 11 20:12:57.560: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volume-expand-1240 Jan 11 20:12:57.650: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-1240 Jan 11 20:12:57.740: INFO: deleting *v1.Role: volume-expand-1240/external-snapshotter-leaderelection-volume-expand-1240 Jan 11 20:12:57.832: INFO: deleting *v1.RoleBinding: volume-expand-1240/external-snapshotter-leaderelection Jan 11 20:12:57.923: INFO: deleting *v1.ServiceAccount: volume-expand-1240/csi-resizer Jan 11 20:12:58.015: INFO: deleting *v1.ClusterRole: external-resizer-runner-volume-expand-1240 Jan 11 20:12:58.108: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-1240 Jan 11 20:12:58.198: INFO: deleting *v1.Role: volume-expand-1240/external-resizer-cfg-volume-expand-1240 Jan 11 20:12:58.289: INFO: deleting *v1.RoleBinding: volume-expand-1240/csi-resizer-role-cfg Jan 11 20:12:58.380: INFO: deleting *v1.Service: volume-expand-1240/csi-hostpath-attacher Jan 11 20:12:58.474: INFO: deleting *v1.StatefulSet: volume-expand-1240/csi-hostpath-attacher Jan 11 20:12:58.565: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volume-expand-1240 Jan 11 20:12:58.656: INFO: deleting *v1.Service: 
volume-expand-1240/csi-hostpathplugin Jan 11 20:12:58.758: INFO: deleting *v1.StatefulSet: volume-expand-1240/csi-hostpathplugin Jan 11 20:12:58.849: INFO: deleting *v1.Service: volume-expand-1240/csi-hostpath-provisioner Jan 11 20:12:58.943: INFO: deleting *v1.StatefulSet: volume-expand-1240/csi-hostpath-provisioner Jan 11 20:12:59.034: INFO: deleting *v1.Service: volume-expand-1240/csi-hostpath-resizer Jan 11 20:12:59.130: INFO: deleting *v1.StatefulSet: volume-expand-1240/csi-hostpath-resizer Jan 11 20:12:59.220: INFO: deleting *v1.Service: volume-expand-1240/csi-snapshotter Jan 11 20:12:59.314: INFO: deleting *v1.StatefulSet: volume-expand-1240/csi-snapshotter Jan 11 20:12:59.405: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-1240 [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:12:59.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-expand-1240" for this suite. Jan 11 20:13:13.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:17.182: INFO: namespace volume-expand-1240 deletion completed in 17.589163578s • [SLOW TEST:63.232 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should not allow expansion of pvcs without AllowVolumeExpansion property /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:139 ------------------------------ SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:06.305: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename custom-resource-definition STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-8204 STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:13:06.950: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:07.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-8204" for this suite. Jan 11 20:13:14.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:17.378: INFO: namespace custom-resource-definition-8204 deletion completed in 9.608417688s • [SLOW TEST:11.073 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-instrumentation] Cadvisor /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:10.149: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename cadvisor STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cadvisor-1942 STEP: Waiting for a default service account to be provisioned in namespace [It] should be healthy on every node. /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/cadvisor.go:42 STEP: getting list of nodes STEP: Querying stats from node ip-10-250-27-25.ec2.internal using url api/v1/nodes/ip-10-250-27-25.ec2.internal/proxy/stats/ STEP: Querying stats from node ip-10-250-7-77.ec2.internal using url api/v1/nodes/ip-10-250-7-77.ec2.internal/proxy/stats/ [AfterEach] [sig-instrumentation] Cadvisor /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:11.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cadvisor-1942" for this suite. Jan 11 20:13:19.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:22.885: INFO: namespace cadvisor-1942 deletion completed in 11.569894836s • [SLOW TEST:12.735 seconds] [sig-instrumentation] Cadvisor /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23 should be healthy on every node. 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/cadvisor.go:42 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:10.818: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-6507 STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing directory /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:188 Jan 11 20:13:11.849: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 20:13:12.033: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6507" in namespace "provisioning-6507" to be "success or failure" Jan 11 20:13:12.123: INFO: Pod "hostpath-symlink-prep-provisioning-6507": Phase="Pending", Reason="", readiness=false. Elapsed: 89.874111ms Jan 11 20:13:14.213: INFO: Pod "hostpath-symlink-prep-provisioning-6507": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179984422s STEP: Saw pod success Jan 11 20:13:14.213: INFO: Pod "hostpath-symlink-prep-provisioning-6507" satisfied condition "success or failure" Jan 11 20:13:14.213: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6507" in namespace "provisioning-6507" Jan 11 20:13:14.306: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6507" to be fully deleted Jan 11 20:13:14.396: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-9wxj STEP: Creating a pod to test subpath Jan 11 20:13:14.487: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpathsymlink-9wxj" in namespace "provisioning-6507" to be "success or failure" Jan 11 20:13:14.578: INFO: Pod "pod-subpath-test-hostpathsymlink-9wxj": Phase="Pending", Reason="", readiness=false. Elapsed: 90.184342ms Jan 11 20:13:16.669: INFO: Pod "pod-subpath-test-hostpathsymlink-9wxj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181201523s Jan 11 20:13:18.759: INFO: Pod "pod-subpath-test-hostpathsymlink-9wxj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.271752221s STEP: Saw pod success Jan 11 20:13:18.759: INFO: Pod "pod-subpath-test-hostpathsymlink-9wxj" satisfied condition "success or failure" Jan 11 20:13:18.849: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-hostpathsymlink-9wxj container test-container-volume-hostpathsymlink-9wxj: STEP: delete the pod Jan 11 20:13:19.042: INFO: Waiting for pod pod-subpath-test-hostpathsymlink-9wxj to disappear Jan 11 20:13:19.131: INFO: Pod pod-subpath-test-hostpathsymlink-9wxj no longer exists STEP: Deleting pod pod-subpath-test-hostpathsymlink-9wxj Jan 11 20:13:19.131: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-9wxj" in namespace "provisioning-6507" STEP: Deleting pod Jan 11 20:13:19.221: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-9wxj" in namespace "provisioning-6507" Jan 11 20:13:19.401: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6507" in namespace "provisioning-6507" to be "success or failure" Jan 11 20:13:19.490: INFO: Pod "hostpath-symlink-prep-provisioning-6507": Phase="Pending", Reason="", readiness=false. Elapsed: 89.465724ms Jan 11 20:13:21.580: INFO: Pod "hostpath-symlink-prep-provisioning-6507": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179559838s STEP: Saw pod success Jan 11 20:13:21.581: INFO: Pod "hostpath-symlink-prep-provisioning-6507" satisfied condition "success or failure" Jan 11 20:13:21.581: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6507" in namespace "provisioning-6507" Jan 11 20:13:21.674: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6507" to be fully deleted Jan 11 20:13:21.763: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:21.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-6507" for this suite. 
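For reference, the subPath behavior exercised by the hostPathSymlink test above can be reproduced by hand with a manifest roughly like the one below; the pod name, image, host path and directory name are illustrative and not values from this run, and the e2e test additionally prepares the directory through a symlinked host path.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-existing-dir-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /data && sleep 1"]
    volumeMounts:
    - name: vol
      mountPath: /data
      subPath: existing-dir              # directory assumed to already exist inside the volume
  volumes:
  - name: vol
    hostPath:
      path: /tmp/subpath-demo            # illustrative host path
      type: DirectoryOrCreate
EOF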
Jan 11 20:13:28.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:31.446: INFO: namespace provisioning-6507 deletion completed in 9.591423594s • [SLOW TEST:20.627 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support existing directory /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:188 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:07.055: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename subpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-1132 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod pod-subpath-test-configmap-zflg STEP: Creating a pod to test atomic-volume-subpath Jan 11 20:13:07.967: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zflg" in namespace "subpath-1132" to be "success or failure" Jan 11 20:13:08.057: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Pending", Reason="", readiness=false. Elapsed: 89.353048ms Jan 11 20:13:10.147: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. Elapsed: 2.179296856s Jan 11 20:13:12.237: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. Elapsed: 4.269535121s Jan 11 20:13:14.327: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. Elapsed: 6.359320678s Jan 11 20:13:16.417: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. Elapsed: 8.449390485s Jan 11 20:13:18.507: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. Elapsed: 10.53966295s Jan 11 20:13:20.597: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. Elapsed: 12.62994664s Jan 11 20:13:22.688: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. Elapsed: 14.720242156s Jan 11 20:13:24.778: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.810197406s Jan 11 20:13:26.868: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. Elapsed: 18.900420867s Jan 11 20:13:28.958: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Running", Reason="", readiness=true. Elapsed: 20.990544126s Jan 11 20:13:31.048: INFO: Pod "pod-subpath-test-configmap-zflg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.080639736s STEP: Saw pod success Jan 11 20:13:31.048: INFO: Pod "pod-subpath-test-configmap-zflg" satisfied condition "success or failure" Jan 11 20:13:31.138: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-configmap-zflg container test-container-subpath-configmap-zflg: STEP: delete the pod Jan 11 20:13:31.327: INFO: Waiting for pod pod-subpath-test-configmap-zflg to disappear Jan 11 20:13:31.417: INFO: Pod pod-subpath-test-configmap-zflg no longer exists STEP: Deleting pod pod-subpath-test-configmap-zflg Jan 11 20:13:31.417: INFO: Deleting pod "pod-subpath-test-configmap-zflg" in namespace "subpath-1132" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:31.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1132" for this suite. Jan 11 20:13:37.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:41.187: INFO: namespace subpath-1132 deletion completed in 9.589634168s • [SLOW TEST:34.132 seconds] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:17.188: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename statefulset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-8788 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-8788 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Looking for a node to schedule stateful set and pod STEP: 
Creating pod with conflicting port in namespace statefulset-8788 STEP: Creating statefulset with conflicting port in namespace statefulset-8788 STEP: Waiting until pod test-pod will start running in namespace statefulset-8788 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8788 Jan 11 20:13:20.886: INFO: Observed stateful pod in namespace: statefulset-8788, name: ss-0, uid: 06c0995f-6149-4bd4-be64-087ad8c4c82c, status phase: Pending. Waiting for statefulset controller to delete. Jan 11 20:13:20.889: INFO: Observed stateful pod in namespace: statefulset-8788, name: ss-0, uid: 06c0995f-6149-4bd4-be64-087ad8c4c82c, status phase: Failed. Waiting for statefulset controller to delete. Jan 11 20:13:20.917: INFO: Observed stateful pod in namespace: statefulset-8788, name: ss-0, uid: 06c0995f-6149-4bd4-be64-087ad8c4c82c, status phase: Failed. Waiting for statefulset controller to delete. Jan 11 20:13:20.918: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8788 STEP: Removing pod with conflicting port in namespace statefulset-8788 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8788 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Jan 11 20:13:23.189: INFO: Deleting all statefulset in ns statefulset-8788 Jan 11 20:13:23.279: INFO: Scaling statefulset ss to 0 Jan 11 20:13:33.637: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 20:13:33.727: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:33.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8788" for this suite. 
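What the controller is doing above: a separately created pod holds the port that ss-0 requests, so ss-0 lands in Failed and is deleted and recreated by the StatefulSet controller once the conflict is removed. A rough by-hand check of the same behavior, with a hypothetical namespace and pod name:
# 1) a plain pod requests a hostPort on the chosen node;
# 2) the StatefulSet pod ss-0 requests the same hostPort and ends up Failed;
# 3) removing the blocking pod lets the controller recreate ss-0.
kubectl -n demo delete pod port-blocker
kubectl -n demo get pod ss-0 --watch     # Failed -> deleted -> recreated -> Running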
Jan 11 20:13:40.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:43.661: INFO: namespace statefulset-8788 deletion completed in 9.575134264s • [SLOW TEST:26.474 seconds] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:12:59.404: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-217 STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0111 20:13:40.700635 8610 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 20:13:40.700: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:40.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-217" for this suite. 
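The deletion above is issued with an orphaning delete option, so the garbage collector strips the owner reference from the pods instead of deleting them. Roughly the same thing by hand (resource names are hypothetical; on a 1.16-era kubectl the flag was --cascade=false, on current kubectl it is --cascade=orphan):
kubectl -n demo delete rc my-rc --cascade=orphan
# The pods stay around; their ownerReferences to the RC are cleared:
kubectl -n demo get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.metadata.ownerReferences}{"\n"}{end}'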
Jan 11 20:13:49.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:52.391: INFO: namespace gc-217 deletion completed in 11.600051415s • [SLOW TEST:52.987 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:41.232: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename svcaccounts STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-9484 STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: getting the auto-created API token Jan 11 20:13:42.829: INFO: created pod pod-service-account-defaultsa Jan 11 20:13:42.829: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 11 20:13:42.919: INFO: created pod pod-service-account-mountsa Jan 11 20:13:42.919: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 11 20:13:43.010: INFO: created pod pod-service-account-nomountsa Jan 11 20:13:43.010: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 11 20:13:43.102: INFO: created pod pod-service-account-defaultsa-mountspec Jan 11 20:13:43.102: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 11 20:13:43.192: INFO: created pod pod-service-account-mountsa-mountspec Jan 11 20:13:43.192: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 11 20:13:43.283: INFO: created pod pod-service-account-nomountsa-mountspec Jan 11 20:13:43.283: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 11 20:13:43.374: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 11 20:13:43.374: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 11 20:13:43.465: INFO: created pod pod-service-account-mountsa-nomountspec Jan 11 20:13:43.465: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 11 20:13:43.555: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 11 20:13:43.555: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:43.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9484" for this suite. 
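The "service account token volume mount: false" cases above come from the automountServiceAccountToken opt-out, which can be set on the ServiceAccount or on the pod spec (the pod-level field wins when both are set). A minimal sketch with illustrative names:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false      # opt out at the service-account level
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-pod
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false    # opt out at the pod level
  containers:
  - name: main
    image: busybox:1.29
    command: ["sleep", "3600"]
EOF
# No token volume should appear here:
kubectl get pod nomount-pod -o jsonpath='{.spec.containers[0].volumeMounts}'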
Jan 11 20:13:51.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:55.238: INFO: namespace svcaccounts-9484 deletion completed in 11.590641258s • [SLOW TEST:14.006 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:17.385: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-4281 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating service endpoint-test2 in namespace services-4281 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4281 to expose endpoints map[] Jan 11 20:13:18.339: INFO: successfully validated that service endpoint-test2 in namespace services-4281 exposes endpoints map[] (89.688331ms elapsed) STEP: Creating pod pod1 in namespace services-4281 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4281 to expose endpoints map[pod1:[80]] Jan 11 20:13:20.970: INFO: successfully validated that service endpoint-test2 in namespace services-4281 exposes endpoints map[pod1:[80]] (2.539482193s elapsed) STEP: Creating pod pod2 in namespace services-4281 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4281 to expose endpoints map[pod1:[80] pod2:[80]] Jan 11 20:13:23.875: INFO: successfully validated that service endpoint-test2 in namespace services-4281 exposes endpoints map[pod1:[80] pod2:[80]] (2.814624433s elapsed) STEP: Deleting pod pod1 in namespace services-4281 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4281 to expose endpoints map[pod2:[80]] Jan 11 20:13:24.145: INFO: successfully validated that service endpoint-test2 in namespace services-4281 exposes endpoints map[pod2:[80]] (179.284155ms elapsed) STEP: Deleting pod pod2 in namespace services-4281 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4281 to expose endpoints map[] Jan 11 20:13:24.327: INFO: successfully validated that service endpoint-test2 in namespace services-4281 exposes endpoints map[] (89.669782ms elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:24.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-4281" for this suite. Jan 11 20:13:52.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:13:56.109: INFO: namespace services-4281 deletion completed in 31.595279286s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:38.723 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] PreemptionExecutionPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:17.146: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sched-preemption-path STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-4134 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] PreemptionExecutionPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Jan 11 20:13:20.510: INFO: found a healthy node: ip-10-250-7-77.ec2.internal [It] runs ReplicaSets to verify preemption running path /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:345 Jan 11 20:13:41.937: INFO: pods created so far: map[rs-pod1-2t2dn:{} rs-pod1-chg9v:{} rs-pod1-mpnhz:{} rs-pod1-nj9cf:{} rs-pod1-rlsg4:{} rs-pod2-7lcd7:{} rs-pod2-lcf8f:{} rs-pod2-pjv2c:{} rs-pod2-tc9wv:{} rs-pod3-82s8b:{} rs-pod3-8rjzv:{} rs-pod3-lrvrm:{} rs-pod3-q6mt6:{}] Jan 11 20:13:41.937: INFO: length of pods created so far: 13 Jan 11 20:13:50.205: INFO: pods created so far: map[rs-pod1-2t2dn:{} rs-pod1-6hqtn:{} rs-pod1-8cm5t:{} rs-pod1-chg9v:{} rs-pod1-gqhmm:{} rs-pod1-m4q7d:{} rs-pod1-mpnhz:{} rs-pod1-nj9cf:{} rs-pod1-rlsg4:{} rs-pod1-vtfzj:{} rs-pod2-7lcd7:{} rs-pod2-8vz6w:{} rs-pod2-8wzb9:{} rs-pod2-bdm5t:{} rs-pod2-lcf8f:{} rs-pod2-pjv2c:{} rs-pod2-q7bsf:{} rs-pod2-tc9wv:{} rs-pod3-82s8b:{} rs-pod3-8rjzv:{} rs-pod3-lrvrm:{} rs-pod3-q6mt6:{} rs-pod4-m5lg6:{}] Jan 11 20:13:50.206: INFO: length of pods created so far: 23 [AfterEach] [sig-scheduling] PreemptionExecutionPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:50.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-4134" for this suite. 
Jan 11 20:13:58.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:01.876: INFO: namespace sched-preemption-path-4134 deletion completed in 11.580458988s [AfterEach] [sig-scheduling] PreemptionExecutionPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:274 • [SLOW TEST:45.256 seconds] [sig-scheduling] PreemptionExecutionPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 runs ReplicaSets to verify preemption running path /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:345 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:22.916: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pod-network-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-7598 STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-7598 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 20:13:23.554: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 11 20:13:49.180: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.64.0.160:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7598 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:13:49.180: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:13:50.760: INFO: Found all expected endpoints: [netserver-0] Jan 11 20:13:50.850: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.64.1.206:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7598 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:13:50.850: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:13:51.713: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:13:51.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7598" for this suite. 
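The ExecWithOptions calls above are just curl probes from the host-network test pod to each netserver pod's /hostName endpoint. Run by hand it amounts to the command below; the pod IP is the one observed in this run and the namespace no longer exists once the suite has cleaned up:
kubectl -n pod-network-test-7598 exec host-test-container-pod -c agnhost -- \
  sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.64.0.160:8080/hostName"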
Jan 11 20:14:04.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:07.394: INFO: namespace pod-network-test-7598 deletion completed in 15.590620985s • [SLOW TEST:44.478 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:55.277: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4435 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 11 20:13:56.040: INFO: Waiting up to 5m0s for pod "pod-c89d144a-7559-43c4-b730-9f35b38367a8" in namespace "emptydir-4435" to be "success or failure" Jan 11 20:13:56.133: INFO: Pod "pod-c89d144a-7559-43c4-b730-9f35b38367a8": Phase="Pending", Reason="", readiness=false. Elapsed: 92.769003ms Jan 11 20:13:58.229: INFO: Pod "pod-c89d144a-7559-43c4-b730-9f35b38367a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188577136s Jan 11 20:14:00.319: INFO: Pod "pod-c89d144a-7559-43c4-b730-9f35b38367a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.278707466s STEP: Saw pod success Jan 11 20:14:00.319: INFO: Pod "pod-c89d144a-7559-43c4-b730-9f35b38367a8" satisfied condition "success or failure" Jan 11 20:14:00.409: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-c89d144a-7559-43c4-b730-9f35b38367a8 container test-container: STEP: delete the pod Jan 11 20:14:00.599: INFO: Waiting for pod pod-c89d144a-7559-43c4-b730-9f35b38367a8 to disappear Jan 11 20:14:00.689: INFO: Pod pod-c89d144a-7559-43c4-b730-9f35b38367a8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:00.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4435" for this suite. 
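The pod above exercises an emptyDir mounted as tmpfs with 0777 permissions. The tmpfs part is just emptyDir with medium: Memory; a minimal illustration with made-up names:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep /data && ls -ld /data"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                     # backs the volume with tmpfs
EOF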
Jan 11 20:14:09.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:12.378: INFO: namespace emptydir-4435 deletion completed in 11.596532469s • [SLOW TEST:17.101 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:52.398: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename replicaset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-4981 STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:13:53.056: INFO: Creating ReplicaSet my-hostname-basic-9313c94a-cd9e-401d-b1bf-bef057ad1a5e Jan 11 20:13:53.236: INFO: Pod name my-hostname-basic-9313c94a-cd9e-401d-b1bf-bef057ad1a5e: Found 1 pods out of 1 Jan 11 20:13:53.236: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9313c94a-cd9e-401d-b1bf-bef057ad1a5e" is running Jan 11 20:13:55.420: INFO: Pod "my-hostname-basic-9313c94a-cd9e-401d-b1bf-bef057ad1a5e-wxqsv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 20:13:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 20:13:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9313c94a-cd9e-401d-b1bf-bef057ad1a5e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 20:13:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9313c94a-cd9e-401d-b1bf-bef057ad1a5e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 20:13:53 +0000 UTC Reason: Message:}]) Jan 11 20:13:55.420: INFO: Trying to dial the pod Jan 11 20:14:00.780: INFO: Controller my-hostname-basic-9313c94a-cd9e-401d-b1bf-bef057ad1a5e: Got expected result from replica 1 [my-hostname-basic-9313c94a-cd9e-401d-b1bf-bef057ad1a5e-wxqsv]: "my-hostname-basic-9313c94a-cd9e-401d-b1bf-bef057ad1a5e-wxqsv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:00.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4981" for this suite. 
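The ReplicaSet above runs a public serve-hostname image and the test dials each replica to confirm it answers with its own pod name. A hand-written equivalent could look like the following; the name is made up and the image tag is only an example of a public agnhost build, not necessarily the one used here:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hostname-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname-demo
  template:
    metadata:
      labels:
        app: hostname-demo
    spec:
      containers:
      - name: hostname-demo
        image: registry.k8s.io/e2e-test-images/agnhost:2.39   # example public image
        args: ["serve-hostname"]
EOF
# Each replica should report its own pod name when dialed:
kubectl get pods -l app=hostname-demo -o wide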
Jan 11 20:14:09.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:12.469: INFO: namespace replicaset-4981 deletion completed in 11.597125432s • [SLOW TEST:20.072 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:43.664: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-7832 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-3d154014-0f3d-4378-aae3-0d622c5e1a47" Jan 11 20:13:48.751: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7832 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3d154014-0f3d-4378-aae3-0d622c5e1a47 && dd if=/dev/zero of=/tmp/local-volume-test-3d154014-0f3d-4378-aae3-0d622c5e1a47/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-3d154014-0f3d-4378-aae3-0d622c5e1a47/file' Jan 11 20:13:50.778: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.027528 s, 762 MB/s\n" Jan 11 20:13:50.778: INFO: stdout: "" Jan 11 20:13:50.778: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7832 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3d154014-0f3d-4378-aae3-0d622c5e1a47/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:13:52.181: INFO: stderr: "" Jan 11 20:13:52.181: INFO: stdout: "/dev/loop0\n" STEP: Creating local PVCs and PVs Jan 11 20:13:52.181: INFO: Creating a PV followed by a PVC Jan 11 20:13:52.361: INFO: Waiting for PV local-pv4cqzp to bind to PVC pvc-m5vwz Jan 11 20:13:52.361: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-m5vwz] to have phase Bound Jan 11 20:13:52.450: 
INFO: PersistentVolumeClaim pvc-m5vwz found and phase=Bound (89.046204ms) Jan 11 20:13:52.450: INFO: Waiting up to 3m0s for PersistentVolume local-pv4cqzp to have phase Bound Jan 11 20:13:52.539: INFO: PersistentVolume local-pv4cqzp found and phase=Bound (89.208108ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 20:13:55.165: INFO: pod "security-context-d198874c-267e-4658-88c3-f6ad4fcf941b" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 20:13:55.166: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7832 security-context-d198874c-267e-4658-88c3-f6ad4fcf941b -- /bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file' Jan 11 20:13:56.630: INFO: stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000043 seconds, 408.8KB/s\n" Jan 11 20:13:56.630: INFO: stdout: "\n" Jan 11 20:13:56.630: INFO: podRWCmdExec out: "\n" err: [It] should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jan 11 20:13:56.630: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7832 security-context-d198874c-267e-4658-88c3-f6ad4fcf941b -- /bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1' Jan 11 20:13:58.192: INFO: stderr: "" Jan 11 20:13:58.192: INFO: stdout: "test-file-content..................................................................................." Jan 11 20:13:58.192: INFO: podRWCmdExec out: "test-file-content..................................................................................." 
err: STEP: Writing in pod1 Jan 11 20:13:58.192: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7832 security-context-d198874c-267e-4658-88c3-f6ad4fcf941b -- /bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file' Jan 11 20:13:59.641: INFO: stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000040 seconds, 268.6KB/s\n" Jan 11 20:13:59.641: INFO: stdout: "\n" Jan 11 20:13:59.641: INFO: podRWCmdExec out: "\n" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-d198874c-267e-4658-88c3-f6ad4fcf941b in namespace persistent-local-volumes-test-7832 [AfterEach] [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:13:59.732: INFO: Deleting PersistentVolumeClaim "pvc-m5vwz" Jan 11 20:13:59.823: INFO: Deleting PersistentVolume "local-pv4cqzp" Jan 11 20:13:59.914: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7832 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3d154014-0f3d-4378-aae3-0d622c5e1a47/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:14:01.389: INFO: stderr: "" Jan 11 20:14:01.389: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-3d154014-0f3d-4378-aae3-0d622c5e1a47/file Jan 11 20:14:01.389: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7832 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 20:14:02.917: INFO: stderr: "" Jan 11 20:14:02.917: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-3d154014-0f3d-4378-aae3-0d622c5e1a47 Jan 11 20:14:02.917: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-7832 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3d154014-0f3d-4378-aae3-0d622c5e1a47' Jan 11 20:14:04.449: INFO: stderr: "" Jan 11 20:14:04.449: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:04.539: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7832" for this suite. Jan 11 20:14:16.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:20.220: INFO: namespace persistent-local-volumes-test-7832 deletion completed in 15.588490205s • [SLOW TEST:36.556 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:31.462: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5271 STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing directory /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:188 STEP: deploying csi-hostpath driver Jan 11 20:13:32.413: INFO: creating *v1.ServiceAccount: provisioning-5271/csi-attacher Jan 11 20:13:32.503: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-5271 Jan 11 20:13:32.503: INFO: Define cluster role external-attacher-runner-provisioning-5271 Jan 11 20:13:32.594: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-5271 Jan 11 20:13:32.684: INFO: creating *v1.Role: provisioning-5271/external-attacher-cfg-provisioning-5271 Jan 11 20:13:32.774: INFO: creating *v1.RoleBinding: provisioning-5271/csi-attacher-role-cfg Jan 11 20:13:32.864: INFO: creating *v1.ServiceAccount: provisioning-5271/csi-provisioner Jan 11 20:13:32.953: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-5271 Jan 11 20:13:32.953: INFO: Define cluster role external-provisioner-runner-provisioning-5271 Jan 11 20:13:33.043: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-5271 Jan 11 20:13:33.133: INFO: creating *v1.Role: provisioning-5271/external-provisioner-cfg-provisioning-5271 Jan 11 20:13:33.223: INFO: creating *v1.RoleBinding: provisioning-5271/csi-provisioner-role-cfg Jan 11 20:13:33.313: INFO: creating *v1.ServiceAccount: provisioning-5271/csi-snapshotter Jan 11 20:13:33.402: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-provisioning-5271 Jan 11 20:13:33.402: INFO: Define cluster role external-snapshotter-runner-provisioning-5271 Jan 11 20:13:33.492: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-5271 Jan 11 20:13:33.581: INFO: creating *v1.Role: provisioning-5271/external-snapshotter-leaderelection-provisioning-5271 Jan 11 20:13:33.671: INFO: creating *v1.RoleBinding: provisioning-5271/external-snapshotter-leaderelection Jan 11 20:13:33.761: INFO: creating *v1.ServiceAccount: provisioning-5271/csi-resizer Jan 11 20:13:33.854: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-5271 Jan 11 20:13:33.854: INFO: Define cluster role external-resizer-runner-provisioning-5271 Jan 11 20:13:33.944: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-5271 Jan 11 20:13:34.034: INFO: creating *v1.Role: provisioning-5271/external-resizer-cfg-provisioning-5271 Jan 11 20:13:34.124: INFO: creating *v1.RoleBinding: provisioning-5271/csi-resizer-role-cfg Jan 11 20:13:34.214: INFO: creating *v1.Service: provisioning-5271/csi-hostpath-attacher Jan 11 20:13:34.308: INFO: creating *v1.StatefulSet: provisioning-5271/csi-hostpath-attacher Jan 11 20:13:34.399: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-5271 Jan 11 20:13:34.489: INFO: creating *v1.Service: provisioning-5271/csi-hostpathplugin Jan 11 20:13:34.583: INFO: creating *v1.StatefulSet: provisioning-5271/csi-hostpathplugin Jan 11 20:13:34.673: INFO: creating *v1.Service: provisioning-5271/csi-hostpath-provisioner Jan 11 20:13:34.770: INFO: creating *v1.StatefulSet: provisioning-5271/csi-hostpath-provisioner Jan 11 20:13:34.860: INFO: creating *v1.Service: provisioning-5271/csi-hostpath-resizer Jan 11 20:13:34.958: INFO: creating *v1.StatefulSet: provisioning-5271/csi-hostpath-resizer Jan 11 20:13:35.048: INFO: creating *v1.Service: provisioning-5271/csi-snapshotter Jan 11 20:13:35.141: INFO: creating *v1.StatefulSet: provisioning-5271/csi-snapshotter Jan 11 20:13:35.232: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-5271 Jan 11 20:13:35.321: INFO: Test running for native CSI Driver, not checking metrics Jan 11 20:13:35.321: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-5271-csi-hostpath-provisioning-5271-scwsrlp STEP: creating a claim Jan 11 20:13:35.411: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:13:35.503: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathksjfn] to have phase Bound Jan 11 20:13:35.592: INFO: PersistentVolumeClaim csi-hostpathksjfn found but phase is Pending instead of Bound. Jan 11 20:13:37.682: INFO: PersistentVolumeClaim csi-hostpathksjfn found and phase=Bound (2.178816523s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-fbfd STEP: Creating a pod to test subpath Jan 11 20:13:37.953: INFO: Waiting up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd" in namespace "provisioning-5271" to be "success or failure" Jan 11 20:13:38.042: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.723574ms Jan 11 20:13:40.133: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179977083s Jan 11 20:13:42.223: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.270253394s Jan 11 20:13:44.313: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360503853s Jan 11 20:13:46.403: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450498511s Jan 11 20:13:48.493: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.540806039s Jan 11 20:13:50.583: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.63078773s Jan 11 20:13:52.673: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.720457961s Jan 11 20:13:54.763: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.810426311s Jan 11 20:13:56.853: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.900225771s STEP: Saw pod success Jan 11 20:13:56.853: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd" satisfied condition "success or failure" Jan 11 20:13:56.942: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-csi-hostpath-dynamicpv-fbfd container test-container-volume-csi-hostpath-dynamicpv-fbfd: STEP: delete the pod Jan 11 20:13:57.171: INFO: Waiting for pod pod-subpath-test-csi-hostpath-dynamicpv-fbfd to disappear Jan 11 20:13:57.261: INFO: Pod pod-subpath-test-csi-hostpath-dynamicpv-fbfd no longer exists STEP: Deleting pod pod-subpath-test-csi-hostpath-dynamicpv-fbfd Jan 11 20:13:57.261: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd" in namespace "provisioning-5271" STEP: Deleting pod Jan 11 20:13:57.350: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-fbfd" in namespace "provisioning-5271" STEP: Deleting pvc Jan 11 20:13:57.440: INFO: Deleting PersistentVolumeClaim "csi-hostpathksjfn" Jan 11 20:13:57.530: INFO: Waiting up to 5m0s for PersistentVolume pvc-a40594c1-df9f-4e31-ba13-ffe6b62a33cf to get deleted Jan 11 20:13:57.620: INFO: PersistentVolume pvc-a40594c1-df9f-4e31-ba13-ffe6b62a33cf found and phase=Bound (89.571646ms) Jan 11 20:14:02.713: INFO: PersistentVolume pvc-a40594c1-df9f-4e31-ba13-ffe6b62a33cf was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 20:14:02.806: INFO: deleting *v1.ServiceAccount: provisioning-5271/csi-attacher Jan 11 20:14:02.897: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-5271 Jan 11 20:14:02.990: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-5271 Jan 11 20:14:03.081: INFO: deleting *v1.Role: provisioning-5271/external-attacher-cfg-provisioning-5271 Jan 11 20:14:03.172: INFO: deleting *v1.RoleBinding: provisioning-5271/csi-attacher-role-cfg Jan 11 20:14:03.265: INFO: deleting *v1.ServiceAccount: provisioning-5271/csi-provisioner Jan 11 20:14:03.357: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-5271 Jan 11 20:14:03.452: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-5271 Jan 11 20:14:03.544: INFO: deleting *v1.Role: provisioning-5271/external-provisioner-cfg-provisioning-5271 Jan 11 20:14:03.635: INFO: deleting *v1.RoleBinding: provisioning-5271/csi-provisioner-role-cfg Jan 11 20:14:03.726: INFO: deleting *v1.ServiceAccount: provisioning-5271/csi-snapshotter Jan 11 20:14:03.818: INFO: deleting *v1.ClusterRole: 
external-snapshotter-runner-provisioning-5271 Jan 11 20:14:03.909: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-5271 Jan 11 20:14:04.000: INFO: deleting *v1.Role: provisioning-5271/external-snapshotter-leaderelection-provisioning-5271 Jan 11 20:14:04.091: INFO: deleting *v1.RoleBinding: provisioning-5271/external-snapshotter-leaderelection Jan 11 20:14:04.186: INFO: deleting *v1.ServiceAccount: provisioning-5271/csi-resizer Jan 11 20:14:04.277: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-5271 Jan 11 20:14:04.368: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-5271 Jan 11 20:14:04.459: INFO: deleting *v1.Role: provisioning-5271/external-resizer-cfg-provisioning-5271 Jan 11 20:14:04.551: INFO: deleting *v1.RoleBinding: provisioning-5271/csi-resizer-role-cfg Jan 11 20:14:04.642: INFO: deleting *v1.Service: provisioning-5271/csi-hostpath-attacher Jan 11 20:14:04.741: INFO: deleting *v1.StatefulSet: provisioning-5271/csi-hostpath-attacher Jan 11 20:14:04.834: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-5271 Jan 11 20:14:04.925: INFO: deleting *v1.Service: provisioning-5271/csi-hostpathplugin Jan 11 20:14:05.022: INFO: deleting *v1.StatefulSet: provisioning-5271/csi-hostpathplugin Jan 11 20:14:05.114: INFO: deleting *v1.Service: provisioning-5271/csi-hostpath-provisioner Jan 11 20:14:05.211: INFO: deleting *v1.StatefulSet: provisioning-5271/csi-hostpath-provisioner Jan 11 20:14:05.303: INFO: deleting *v1.Service: provisioning-5271/csi-hostpath-resizer Jan 11 20:14:05.400: INFO: deleting *v1.StatefulSet: provisioning-5271/csi-hostpath-resizer Jan 11 20:14:05.491: INFO: deleting *v1.Service: provisioning-5271/csi-snapshotter Jan 11 20:14:05.587: INFO: deleting *v1.StatefulSet: provisioning-5271/csi-snapshotter Jan 11 20:14:05.679: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-5271 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:05.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-5271" for this suite. 
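
Editor's note: the CSI subPath spec above follows the framework's usual polling pattern: the dynamically provisioned PVC is checked roughly every two seconds until it reports phase Bound, and the test pod is then polled until it reaches Succeeded or Failed ("success or failure"). A rough shell approximation of that wait loop using kubectl directly is sketched below; the namespace, PVC, and pod names are placeholders, not the generated ones from this run, and the loop itself is my reconstruction, not the test's Go code.

    NS=provisioning-example          # placeholder namespace
    PVC=csi-hostpath-claim           # placeholder PVC name
    POD=pod-subpath-test             # placeholder pod name
    KCFG=/tmp/tm/kubeconfig/shoot.config
    # Wait up to ~5m (150 x 2s, matching the 5m0s timeout above) for the PVC to bind.
    for i in $(seq 1 150); do
      phase=$(kubectl --kubeconfig="$KCFG" -n "$NS" get pvc "$PVC" -o jsonpath='{.status.phase}')
      [ "$phase" = "Bound" ] && break
      sleep 2
    done
    # Then wait for the test pod to finish one way or the other.
    for i in $(seq 1 150); do
      phase=$(kubectl --kubeconfig="$KCFG" -n "$NS" get pod "$POD" -o jsonpath='{.status.phase}')
      case "$phase" in Succeeded|Failed) break ;; esac
      sleep 2
    done
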
Jan 11 20:14:18.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:21.448: INFO: namespace provisioning-5271 deletion completed in 15.586726198s • [SLOW TEST:49.987 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support existing directory /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:188 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:13:56.121: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-1195 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5" Jan 11 20:13:59.407: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1195 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5 && dd if=/dev/zero of=/tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5/file' Jan 11 20:14:00.951: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0284802 s, 736 MB/s\n" Jan 11 20:14:00.952: INFO: stdout: "" Jan 11 20:14:00.952: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1195 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:14:02.430: INFO: 
stderr: "" Jan 11 20:14:02.430: INFO: stdout: "/dev/loop2\n" Jan 11 20:14:02.430: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1195 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop2 && mount -t ext4 /dev/loop2 /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5 && chmod o+rwx /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5' Jan 11 20:14:03.972: INFO: stderr: "mke2fs 1.44.5 (15-Dec-2018)\n" Jan 11 20:14:03.972: INFO: stdout: "Discarding device blocks: 1024/20480\b\b\b\b\b\b\b\b\b\b\b \b\b\b\b\b\b\b\b\b\b\bdone \nCreating filesystem with 20480 1k blocks and 5136 inodes\nFilesystem UUID: 4dfb0fbb-e62d-4834-b295-e48982494b76\nSuperblock backups stored on blocks: \n\t8193\n\nAllocating group tables: 0/3\b\b\b \b\b\bdone \nWriting inode tables: 0/3\b\b\b \b\b\bdone \nCreating journal (1024 blocks): done\nWriting superblocks and filesystem accounting information: 0/3\b\b\b \b\b\bdone\n\n" STEP: Creating local PVCs and PVs Jan 11 20:14:03.972: INFO: Creating a PV followed by a PVC Jan 11 20:14:04.153: INFO: Waiting for PV local-pvw6bhd to bind to PVC pvc-52bhd Jan 11 20:14:04.153: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-52bhd] to have phase Bound Jan 11 20:14:04.244: INFO: PersistentVolumeClaim pvc-52bhd found and phase=Bound (90.854271ms) Jan 11 20:14:04.244: INFO: Waiting up to 3m0s for PersistentVolume local-pvw6bhd to have phase Bound Jan 11 20:14:04.334: INFO: PersistentVolume local-pvw6bhd found and phase=Bound (89.886594ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jan 11 20:14:06.878: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-63101003-91a2-430c-b723-0125055a4c08 --namespace=persistent-local-volumes-test-1195 -- stat -c %g /mnt/volume1' Jan 11 20:14:08.371: INFO: stderr: "" Jan 11 20:14:08.371: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod security-context-63101003-91a2-430c-b723-0125055a4c08 in namespace persistent-local-volumes-test-1195 [AfterEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:14:08.462: INFO: Deleting PersistentVolumeClaim "pvc-52bhd" Jan 11 20:14:08.553: INFO: Deleting PersistentVolume "local-pvw6bhd" Jan 11 20:14:08.645: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1195 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5' Jan 
11 20:14:10.064: INFO: stderr: "" Jan 11 20:14:10.064: INFO: stdout: "" Jan 11 20:14:10.064: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1195 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:14:11.555: INFO: stderr: "" Jan 11 20:14:11.555: INFO: stdout: "/dev/loop2\n" STEP: Tear down block device "/dev/loop2" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5/file Jan 11 20:14:11.555: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1195 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop2' Jan 11 20:14:13.038: INFO: stderr: "" Jan 11 20:14:13.038: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5 Jan 11 20:14:13.038: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1195 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-36d18a90-822d-4701-9955-6ea00a6203e5' Jan 11 20:14:14.454: INFO: stderr: "" Jan 11 20:14:14.454: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:14.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1195" for this suite. 
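
Editor's note: stripped of the kubectl exec / nsenter --mount=/rootfs/proc/1/ns/mnt wrapper the test uses to reach the node's mount namespace, the blockfswithformat setup and teardown above reduce to the node-side commands below. The directory name is a placeholder for the generated /tmp/local-volume-test-* path; the commands themselves are the ones quoted in the log.

    DIR=/tmp/local-volume-test-example          # placeholder for the generated path
    # Setup: backing file -> loop device -> ext4 filesystem mounted at $DIR
    mkdir -p "$DIR"
    dd if=/dev/zero of="$DIR/file" bs=4096 count=5120    # 20 MiB backing file
    losetup -f "$DIR/file"                               # attach to the first free loop device
    LOOP=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
    mkfs -t ext4 "$LOOP"
    mount -t ext4 "$LOOP" "$DIR"
    chmod o+rwx "$DIR"
    # Teardown (after the PV and PVC have been deleted)
    umount "$DIR"
    losetup -d "$LOOP"
    rm -r "$DIR"
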
Jan 11 20:14:20.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:24.226: INFO: namespace persistent-local-volumes-test-1195 deletion completed in 9.590245501s • [SLOW TEST:28.105 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:07.398: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6617 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl run pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1668 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 11 20:14:08.149: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6617' Jan 11 20:14:08.652: INFO: stderr: "" Jan 11 20:14:08.652: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1673 Jan 11 20:14:08.741: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete pods e2e-test-httpd-pod --namespace=kubectl-6617' Jan 11 20:14:18.052: INFO: stderr: "" Jan 11 20:14:18.052: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 
20:14:18.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6617" for this suite. Jan 11 20:14:24.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:27.709: INFO: namespace kubectl-6617 deletion completed in 9.56655151s • [SLOW TEST:20.311 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1664 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:02.433: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-1133 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 20:14:05.533: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1133 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-deb3077c-5ea4-4eb0-add7-a463837a1e4f' Jan 11 20:14:06.956: INFO: stderr: "" Jan 11 20:14:06.956: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:14:06.956: INFO: Creating a PV followed by a PVC Jan 11 20:14:07.139: INFO: Waiting for PV local-pv2vp87 to bind to PVC pvc-mp8vx Jan 11 20:14:07.139: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-mp8vx] to have phase Bound Jan 11 20:14:07.228: INFO: PersistentVolumeClaim pvc-mp8vx found and phase=Bound (88.877949ms) Jan 11 20:14:07.228: INFO: Waiting up to 3m0s for PersistentVolume local-pv2vp87 to have phase Bound Jan 11 20:14:07.317: INFO: PersistentVolume local-pv2vp87 found and phase=Bound (89.264099ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod 
Jan 11 20:14:09.854: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-256addb7-a115-4642-b5f0-a17757d193a5 --namespace=persistent-local-volumes-test-1133 -- stat -c %g /mnt/volume1' Jan 11 20:14:11.219: INFO: stderr: "" Jan 11 20:14:11.219: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod security-context-256addb7-a115-4642-b5f0-a17757d193a5 in namespace persistent-local-volumes-test-1133 [AfterEach] [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:14:11.310: INFO: Deleting PersistentVolumeClaim "pvc-mp8vx" Jan 11 20:14:11.400: INFO: Deleting PersistentVolume "local-pv2vp87" STEP: Removing the test directory Jan 11 20:14:11.490: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1133 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-deb3077c-5ea4-4eb0-add7-a463837a1e4f' Jan 11 20:14:12.973: INFO: stderr: "" Jan 11 20:14:12.973: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:13.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1133" for this suite. 
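
Editor's note: both fsGroup specs above (blockfswithformat and dir) verify the setting the same way: a pod is created with fsGroup 1234 in its security context, and the test reads the group owner of the mount point from inside that pod. The check is the single stat call below; the pod and namespace names are placeholders for the generated ones.

    # The mount should be group-owned by the pod's fsGroup (1234 in these specs).
    kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config \
      exec security-context-example --namespace=persistent-local-volumes-test-example \
      -- stat -c %g /mnt/volume1
    # expected output: 1234
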
Jan 11 20:14:25.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:28.728: INFO: namespace persistent-local-volumes-test-1133 deletion completed in 15.569214705s • [SLOW TEST:26.295 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:12.404: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9848 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 20:14:14.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370454, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370454, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370454, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370454, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 20:14:17.802: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis 
discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:18.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9848" for this suite. Jan 11 20:14:24.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:27.836: INFO: namespace webhook-9848 deletion completed in 9.586030261s STEP: Destroying namespace "webhook-9848-markers" for this suite. Jan 11 20:14:34.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:37.420: INFO: namespace webhook-9848-markers deletion completed in 9.584536327s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:25.376 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:28.730: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-4574 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:389 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:31.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4574" for this suite. 
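
Editor's note: the discovery checks in the AdmissionWebhook spec a little earlier can be reproduced by hand against the same endpoints. The sketch below uses kubectl get --raw with the paths the test fetches; the grep patterns are mine, added only to surface the group and resource names the test looks for.

    KCFG=/tmp/tm/kubeconfig/shoot.config
    # /apis should list the admissionregistration.k8s.io group (with version v1)
    kubectl --kubeconfig="$KCFG" get --raw /apis | grep -o 'admissionregistration.k8s.io'
    # the group document and the group/version document
    kubectl --kubeconfig="$KCFG" get --raw /apis/admissionregistration.k8s.io
    # v1 must expose both webhook configuration resources
    kubectl --kubeconfig="$KCFG" get --raw /apis/admissionregistration.k8s.io/v1 \
      | grep -E 'mutatingwebhookconfigurations|validatingwebhookconfigurations'
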
Jan 11 20:14:38.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:41.684: INFO: namespace container-runtime-4574 deletion completed in 9.567889437s • [SLOW TEST:12.954 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 when running a container with a new image /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:252 should not be able to pull from private registry without secret [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:389 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:34 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:37.785: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sysctl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-2563 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63 [It] should support sysctls /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:67 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:40.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2563" for this suite. 
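
Editor's note: the Sysctls spec above creates a pod with kernel.shm_rmid_forced set via securityContext.sysctls, waits for it to complete, and checks its output. The actual spec reads the pod's logs; a quick manual check of the same setting from inside a running pod (pod and namespace names are placeholders) would be:

    # The value should match what the pod's securityContext.sysctls requested.
    kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config \
      exec sysctl-test-pod --namespace=sysctl-example \
      -- cat /proc/sys/kernel/shm_rmid_forced
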
Jan 11 20:14:47.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:50.561: INFO: namespace sysctl-2563 deletion completed in 9.585065535s • [SLOW TEST:12.776 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should support sysctls /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:67 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:12.480: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename dns STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3324 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 20:14:15.897: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:15.989: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:16.082: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:16.175: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:16.493: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:16.586: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:16.679: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod 
dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:16.772: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:16.961: INFO: Lookups using dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local] Jan 11 20:14:22.055: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:22.148: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:22.241: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:22.333: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:22.613: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:22.708: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:22.804: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:22.900: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:23.090: INFO: Lookups using dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local] Jan 11 20:14:27.054: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:27.148: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:27.241: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:27.333: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:27.614: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:27.707: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:27.800: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:27.892: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:28.081: INFO: Lookups using dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local] Jan 11 20:14:32.116: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:32.208: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:32.301: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:32.393: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:32.675: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:32.767: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:32.860: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:32.953: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:33.142: INFO: Lookups using dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local] Jan 11 20:14:37.054: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:37.146: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:37.240: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:37.332: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested 
resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:37.612: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:37.704: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:37.797: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:37.889: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:38.077: INFO: Lookups using dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local] Jan 11 20:14:42.055: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:42.148: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:42.241: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:42.333: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:42.615: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:42.709: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:42.803: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:42.936: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local from pod dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7: the server could not find the requested resource (get pods dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7) Jan 11 20:14:43.125: INFO: Lookups using dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3324.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3324.svc.cluster.local jessie_udp@dns-test-service-2.dns-3324.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3324.svc.cluster.local] Jan 11 20:14:48.137: INFO: DNS probes using dns-3324/dns-test-32d46b95-1d0f-4899-9f14-fe9df50099b7 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:48.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3324" for this suite. Jan 11 20:14:54.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:58.001: INFO: namespace dns-3324 deletion completed in 9.58564148s • [SLOW TEST:45.520 seconds] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:27.715: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-2972 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-79ec9e0a-219d-4a25-aa7d-c84779caae2b" Jan 11 20:14:31.304: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-79ec9e0a-219d-4a25-aa7d-c84779caae2b && dd if=/dev/zero of=/tmp/local-volume-test-79ec9e0a-219d-4a25-aa7d-c84779caae2b/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-79ec9e0a-219d-4a25-aa7d-c84779caae2b/file' Jan 11 20:14:32.670: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0158804 s, 1.3 GB/s\n" Jan 11 20:14:32.670: INFO: stdout: "" Jan 11 20:14:32.670: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-79ec9e0a-219d-4a25-aa7d-c84779caae2b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:14:33.991: INFO: stderr: "" Jan 11 20:14:33.991: INFO: stdout: "/dev/loop0\n" STEP: Creating local PVCs and PVs Jan 11 20:14:33.991: INFO: Creating a PV followed by a PVC Jan 11 20:14:34.171: INFO: Waiting for PV local-pvpr7w4 to bind to PVC pvc-nf2ps Jan 11 20:14:34.171: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-nf2ps] to have phase Bound Jan 11 20:14:34.260: INFO: PersistentVolumeClaim pvc-nf2ps found and phase=Bound (89.283323ms) Jan 11 20:14:34.260: INFO: Waiting up to 3m0s for PersistentVolume local-pvpr7w4 to have phase Bound Jan 11 20:14:34.350: INFO: PersistentVolume local-pvpr7w4 found and phase=Bound (89.415355ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jan 11 20:14:36.976: INFO: pod "security-context-b6b8ce9a-6e0c-45fb-bdf5-a2c46ac95913" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 20:14:36.976: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 security-context-b6b8ce9a-6e0c-45fb-bdf5-a2c46ac95913 -- /bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file' Jan 11 20:14:38.296: INFO: stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000042 seconds, 418.5KB/s\n" Jan 11 20:14:38.296: INFO: stdout: "\n" Jan 11 20:14:38.296: INFO: podRWCmdExec out: "\n" err: Jan 11 20:14:38.296: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 security-context-b6b8ce9a-6e0c-45fb-bdf5-a2c46ac95913 -- /bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1' Jan 11 20:14:39.625: INFO: stderr: "" Jan 11 20:14:39.625: INFO: stdout: 
"test-file-content..................................................................................." Jan 11 20:14:39.625: INFO: podRWCmdExec out: "test-file-content..................................................................................." err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jan 11 20:14:42.073: INFO: pod "security-context-ab76a52b-273a-4930-9fbe-a9a0d9235d1d" created on Node "ip-10-250-27-25.ec2.internal" Jan 11 20:14:42.073: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 security-context-ab76a52b-273a-4930-9fbe-a9a0d9235d1d -- /bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1' Jan 11 20:14:43.413: INFO: stderr: "" Jan 11 20:14:43.413: INFO: stdout: "test-file-content..................................................................................." Jan 11 20:14:43.413: INFO: podRWCmdExec out: "test-file-content..................................................................................." err: STEP: Writing in pod2 Jan 11 20:14:43.413: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 security-context-ab76a52b-273a-4930-9fbe-a9a0d9235d1d -- /bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file' Jan 11 20:14:44.750: INFO: stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000039 seconds, 275.4KB/s\n" Jan 11 20:14:44.751: INFO: stdout: "\n" Jan 11 20:14:44.751: INFO: podRWCmdExec out: "\n" err: STEP: Reading in pod1 Jan 11 20:14:44.751: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 security-context-b6b8ce9a-6e0c-45fb-bdf5-a2c46ac95913 -- /bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1' Jan 11 20:14:46.099: INFO: stderr: "" Jan 11 20:14:46.099: INFO: stdout: "/dev/loop0.ontent..................................................................................." Jan 11 20:14:46.099: INFO: podRWCmdExec out: "/dev/loop0.ontent..................................................................................." 
err: STEP: Deleting pod1 STEP: Deleting pod security-context-b6b8ce9a-6e0c-45fb-bdf5-a2c46ac95913 in namespace persistent-local-volumes-test-2972 STEP: Deleting pod2 STEP: Deleting pod security-context-ab76a52b-273a-4930-9fbe-a9a0d9235d1d in namespace persistent-local-volumes-test-2972 [AfterEach] [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:14:46.280: INFO: Deleting PersistentVolumeClaim "pvc-nf2ps" Jan 11 20:14:46.370: INFO: Deleting PersistentVolume "local-pvpr7w4" Jan 11 20:14:46.461: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-79ec9e0a-219d-4a25-aa7d-c84779caae2b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:14:47.741: INFO: stderr: "" Jan 11 20:14:47.741: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-79ec9e0a-219d-4a25-aa7d-c84779caae2b/file Jan 11 20:14:47.741: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 20:14:49.018: INFO: stderr: "" Jan 11 20:14:49.018: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-79ec9e0a-219d-4a25-aa7d-c84779caae2b Jan 11 20:14:49.018: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-2972 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-79ec9e0a-219d-4a25-aa7d-c84779caae2b' Jan 11 20:14:50.286: INFO: stderr: "" Jan 11 20:14:50.286: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:50.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2972" for this suite. 
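For reference, the block-device plumbing the hostexec pod runs in the spec above (dd to create a backing file, losetup to attach it, losetup -d and rm -r to tear it down) can be reproduced by hand on any Linux node. A minimal sketch, assuming root access and a hypothetical /tmp/local-volume-test-example path:

mkdir -p /tmp/local-volume-test-example
dd if=/dev/zero of=/tmp/local-volume-test-example/file bs=4096 count=5120   # ~20 MiB backing file, same size as in the test
LOOP_DEV=$(losetup -f --show /tmp/local-volume-test-example/file)           # attach the first free loop device and print its name
echo "${LOOP_DEV}"                                                          # e.g. /dev/loop0
# ... expose ${LOOP_DEV} as a raw block PersistentVolume ...
losetup -d "${LOOP_DEV}"                                                    # detach, mirroring the tear-down step above
rm -r /tmp/local-volume-test-example

The test resolves the attached device afterwards with losetup | grep <file> | awk '{ print $1 }'; losetup -f --show is an equivalent one-step alternative.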
Jan 11 20:14:56.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:00.037: INFO: namespace persistent-local-volumes-test-2972 deletion completed in 9.567346035s • [SLOW TEST:32.322 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:41.698: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1622 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 20:14:43.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370483, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370483, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370483, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370483, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 20:14:46.773: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering the mutating configmap webhook via the 
AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:47.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1622" for this suite. Jan 11 20:14:53.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:14:57.015: INFO: namespace webhook-1622 deletion completed in 9.554645486s STEP: Destroying namespace "webhook-1622-markers" for this suite. Jan 11 20:15:03.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:06.584: INFO: namespace webhook-1622-markers deletion completed in 9.569208001s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:25.245 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:00.073: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename metrics-grabber STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in metrics-grabber-5631 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:36 W0111 20:15:00.802058 8631 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. [It] should grab all metrics from a Scheduler. /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:61 STEP: Proxying to Pod through the API server Jan 11 20:15:00.892: INFO: Master is node api.Registry. Skipping testing Scheduler metrics. [AfterEach] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:00.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "metrics-grabber-5631" for this suite. 
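The AdmissionWebhook specs in this run (the configmap-mutating one that just completed, and the custom-resource one later) register their webhooks through the admissionregistration.k8s.io/v1 API. A minimal sketch of the shape of such a registration, with hypothetical object names and handler path, assuming a serving webhook Service like the e2e-test-webhook Service above plus the CA bundle that signed its TLS certificate:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-configmap-mutator          # hypothetical name
webhooks:
- name: mutate-configmaps.example.com
  clientConfig:
    service:
      namespace: example-webhook-ns        # namespace of the webhook Service (hypothetical)
      name: e2e-test-webhook               # Service name as used by the spec above
      path: /mutating-configmaps           # assumed handler path on the webhook server
    # clientConfig.caBundle (base64) must also be set so the API server trusts the webhook's cert
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail                      # with Fail, scope the webhook (e.g. namespaceSelector) so it cannot block the whole cluster
EOF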
Jan 11 20:15:07.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:10.559: INFO: namespace metrics-grabber-5631 deletion completed in 9.576645765s • [SLOW TEST:10.487 seconds] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23 should grab all metrics from a Scheduler. /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:61 ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:21.454: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename var-expansion STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-3182 STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:410 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jan 11 20:14:24.456: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3182 PodName:var-expansion-a2b3b133-769a-4170-a65a-fa4ccc801edb ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:14:24.456: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: test for file in mounted path Jan 11 20:14:25.405: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3182 PodName:var-expansion-a2b3b133-769a-4170-a65a-fa4ccc801edb ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:14:25.405: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: updating the annotation value Jan 11 20:14:26.942: INFO: Successfully updated pod "var-expansion-a2b3b133-769a-4170-a65a-fa4ccc801edb" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jan 11 20:14:27.031: INFO: Deleting pod "var-expansion-a2b3b133-769a-4170-a65a-fa4ccc801edb" in namespace "var-expansion-3182" Jan 11 20:14:27.122: INFO: Wait up to 5m0s for pod "var-expansion-a2b3b133-769a-4170-a65a-fa4ccc801edb" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:05.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3182" for this suite. 
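The Variable Expansion spec above exercises subpath environment expansion: a file written under one mount of a volume shows up under a second mount of the same volume whose subpath is expanded from container metadata. The e2e test expands an annotation; the sketch below uses the pod name via subPathExpr, which is the same mechanism, with hypothetical names and a busybox image assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /subpath_mount/test.log && ls /volume_mount/$(POD_NAME)/test.log"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount        # the whole volume
    - name: workdir
      mountPath: /subpath_mount       # same volume, scoped to the expanded subdirectory
      subPathExpr: $(POD_NAME)
  volumes:
  - name: workdir
    emptyDir: {}
EOF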
Jan 11 20:15:11.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:14.977: INFO: namespace var-expansion-3182 deletion completed in 9.583393199s • [SLOW TEST:53.523 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should succeed in writing subpaths in container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:410 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:24.243: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pv STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-5911 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:110 [BeforeEach] NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:127 STEP: creating nfs-server pod STEP: locating the "nfs-server" server pod Jan 11 20:14:27.243: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs nfs-server nfs-server --namespace=pv-5911' Jan 11 20:14:27.928: INFO: stderr: "" Jan 11 20:14:27.928: INFO: stdout: "Serving /exports\nrpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused\nStarting rpcbind\nNFS started\n" Jan 11 20:14:27.928: INFO: nfs server pod IP address: 100.64.1.223 [It] create a PV and a pre-bound PVC: test write access /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:194 Jan 11 20:14:27.928: INFO: Creating a PV followed by a pre-bound PVC STEP: Validating the PV-PVC binding Jan 11 20:14:28.109: INFO: Waiting for PV nfs-jzt7n to bind to PVC pvc-s7xwm Jan 11 20:14:28.109: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-s7xwm] to have phase Bound Jan 11 20:14:28.198: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:30.289: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:32.379: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:34.471: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:36.561: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:38.651: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:40.741: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. 
Jan 11 20:14:42.832: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:44.925: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:47.018: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:49.108: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:51.198: INFO: PersistentVolumeClaim pvc-s7xwm found but phase is Pending instead of Bound. Jan 11 20:14:53.288: INFO: PersistentVolumeClaim pvc-s7xwm found and phase=Bound (25.179023354s) Jan 11 20:14:53.288: INFO: Waiting up to 3m0s for PersistentVolume nfs-jzt7n to have phase Bound Jan 11 20:14:53.378: INFO: PersistentVolume nfs-jzt7n found and phase=Bound (89.799664ms) STEP: Checking pod has write access to PersistentVolume Jan 11 20:14:53.557: INFO: Creating nfs test pod STEP: Pod should terminate with exitcode 0 (success) Jan 11 20:14:53.648: INFO: Waiting up to 5m0s for pod "pvc-tester-rkk5s" in namespace "pv-5911" to be "success or failure" Jan 11 20:14:53.737: INFO: Pod "pvc-tester-rkk5s": Phase="Pending", Reason="", readiness=false. Elapsed: 89.604866ms Jan 11 20:14:55.827: INFO: Pod "pvc-tester-rkk5s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179503654s STEP: Saw pod success Jan 11 20:14:55.827: INFO: Pod "pvc-tester-rkk5s" satisfied condition "success or failure" Jan 11 20:14:55.827: INFO: Pod pvc-tester-rkk5s succeeded Jan 11 20:14:55.827: INFO: Deleting pod "pvc-tester-rkk5s" in namespace "pv-5911" Jan 11 20:14:55.923: INFO: Wait up to 5m0s for pod "pvc-tester-rkk5s" to be fully deleted STEP: Deleting the PVC to invoke the reclaim policy. Jan 11 20:14:56.013: INFO: Deleting PVC pvc-s7xwm to trigger reclamation of PV nfs-jzt7n Jan 11 20:14:56.013: INFO: Deleting PersistentVolumeClaim "pvc-s7xwm" Jan 11 20:14:56.104: INFO: Waiting for reclaim process to complete. Jan 11 20:14:56.104: INFO: Waiting up to 3m0s for PersistentVolume nfs-jzt7n to have phase Released Jan 11 20:14:56.194: INFO: PersistentVolume nfs-jzt7n found and phase=Released (89.550038ms) Jan 11 20:14:56.283: INFO: PV nfs-jzt7n now in "Released" phase [AfterEach] with Single PV - PVC pairs /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155 Jan 11 20:14:56.283: INFO: AfterEach: Cleaning up test resources. Jan 11 20:14:56.283: INFO: Deleting PersistentVolumeClaim "pvc-s7xwm" Jan 11 20:14:56.373: INFO: Deleting PersistentVolume "nfs-jzt7n" [AfterEach] NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:147 Jan 11 20:14:56.464: INFO: Deleting pod "nfs-server" in namespace "pv-5911" Jan 11 20:14:56.555: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted [AfterEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:14.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5911" for this suite. 
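The PersistentVolumes NFS spec above creates a PV pointing at the nfs-server pod and a PVC that is pre-bound to it, then observes the PV move to Released once the claim is deleted. A minimal sketch of that pairing, with hypothetical object names; the server address and /exports path are taken from the log, and in practice nfs.server would point at your own NFS export:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain     # PV becomes Released when the claim is deleted, as seen above
  nfs:
    server: 100.64.1.223                    # nfs-server pod IP from the log
    path: /exports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-demo-pvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""                      # empty string disables dynamic provisioning for this claim
  volumeName: nfs-demo-pv                   # pre-bind the claim to that specific PV
  resources:
    requests:
      storage: 1Gi
EOF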
Jan 11 20:15:21.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:24.415: INFO: namespace pv-5911 deletion completed in 9.58926363s • [SLOW TEST:60.173 seconds] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:120 with Single PV - PVC pairs /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:153 create a PV and a pre-bound PVC: test write access /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:194 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:50.572: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4063 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] apply set/view last-applied /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:803 STEP: deployment replicas number is 2 Jan 11 20:14:51.212: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config apply -f - --namespace=kubectl-4063' Jan 11 20:14:52.358: INFO: stderr: "" Jan 11 20:14:52.359: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: check the last-applied matches expectations annotations Jan 11 20:14:52.359: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config apply view-last-applied -f - --namespace=kubectl-4063 -o json' Jan 11 20:14:52.777: INFO: stderr: "" Jan 11 20:14:52.777: INFO: stdout: "{\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"metadata\": {\n \"annotations\": {},\n \"name\": \"httpd-deployment\",\n \"namespace\": \"kubectl-4063\"\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"httpd\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"app\": \"httpd\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.39-alpine\",\n \"name\": \"httpd\",\n \"ports\": [\n {\n \"containerPort\": 80\n }\n ]\n }\n ]\n }\n }\n }\n}\n" STEP: apply file doesn't have replicas Jan 11 20:14:52.777: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config apply set-last-applied -f - 
--namespace=kubectl-4063' Jan 11 20:14:53.285: INFO: stderr: "" Jan 11 20:14:53.285: INFO: stdout: "deployment.apps/httpd-deployment configured\n" STEP: check last-applied has been updated, annotations doesn't have replicas Jan 11 20:14:53.285: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config apply view-last-applied -f - --namespace=kubectl-4063 -o json' Jan 11 20:14:53.706: INFO: stderr: "" Jan 11 20:14:53.706: INFO: stdout: "{\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"metadata\": {\n \"name\": \"httpd-deployment\",\n \"namespace\": \"kubectl-4063\"\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"httpd\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"app\": \"httpd\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.39-alpine\",\n \"name\": \"httpd\",\n \"ports\": [\n {\n \"containerPort\": 80\n }\n ]\n }\n ]\n }\n }\n }\n}\n" STEP: scale set replicas to 3 Jan 11 20:14:53.710: INFO: scanned /root for discovery docs: Jan 11 20:14:53.710: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config scale deployment httpd-deployment --replicas=3 --namespace=kubectl-4063' Jan 11 20:14:54.224: INFO: stderr: "" Jan 11 20:14:54.224: INFO: stdout: "deployment.apps/httpd-deployment scaled\n" STEP: apply file doesn't have replicas but image changed Jan 11 20:14:54.224: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config apply -f - --namespace=kubectl-4063' Jan 11 20:14:55.263: INFO: stderr: "" Jan 11 20:14:55.263: INFO: stdout: "deployment.apps/httpd-deployment configured\n" STEP: verify replicas still is 3 and image has been updated Jan 11 20:14:55.263: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get -f - --namespace=kubectl-4063 -o json' Jan 11 20:14:55.691: INFO: stderr: "" Jan 11 20:14:55.691: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"items\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"metadata\": {\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"2\",\n \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"httpd-deployment\\\",\\\"namespace\\\":\\\"kubectl-4063\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"httpd\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"httpd\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"image\\\":\\\"docker.io/library/httpd:2.4.38-alpine\\\",\\\"name\\\":\\\"httpd\\\",\\\"ports\\\":[{\\\"containerPort\\\":80}]}]}}}}\\n\"\n },\n \"creationTimestamp\": \"2020-01-11T20:14:52Z\",\n \"generation\": 4,\n \"name\": \"httpd-deployment\",\n \"namespace\": \"kubectl-4063\",\n \"resourceVersion\": \"74560\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kubectl-4063/deployments/httpd-deployment\",\n \"uid\": \"38d9e921-2c70-4b00-8459-2dcc9c2e8637\"\n },\n \"spec\": {\n \"progressDeadlineSeconds\": 600,\n \"replicas\": 3,\n 
\"revisionHistoryLimit\": 10,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"httpd\"\n }\n },\n \"strategy\": {\n \"rollingUpdate\": {\n \"maxSurge\": \"25%\",\n \"maxUnavailable\": \"25%\"\n },\n \"type\": \"RollingUpdate\"\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"httpd\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"httpd\",\n \"ports\": [\n {\n \"containerPort\": 80,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\"\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"terminationGracePeriodSeconds\": 30\n }\n }\n },\n \"status\": {\n \"availableReplicas\": 2,\n \"conditions\": [\n {\n \"lastTransitionTime\": \"2020-01-11T20:14:54Z\",\n \"lastUpdateTime\": \"2020-01-11T20:14:54Z\",\n \"message\": \"Deployment does not have minimum availability.\",\n \"reason\": \"MinimumReplicasUnavailable\",\n \"status\": \"False\",\n \"type\": \"Available\"\n },\n {\n \"lastTransitionTime\": \"2020-01-11T20:14:52Z\",\n \"lastUpdateTime\": \"2020-01-11T20:14:55Z\",\n \"message\": \"ReplicaSet \\\"httpd-deployment-5744c88cf4\\\" is progressing.\",\n \"reason\": \"ReplicaSetUpdated\",\n \"status\": \"True\",\n \"type\": \"Progressing\"\n }\n ],\n \"observedGeneration\": 4,\n \"readyReplicas\": 2,\n \"replicas\": 4,\n \"unavailableReplicas\": 2,\n \"updatedReplicas\": 1\n }\n }\n ],\n \"kind\": \"List\",\n \"metadata\": {\n \"resourceVersion\": \"\",\n \"selfLink\": \"\"\n }\n}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:14:55.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4063" for this suite. 
Jan 11 20:15:24.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:27.367: INFO: namespace kubectl-4063 deletion completed in 31.58503555s • [SLOW TEST:36.795 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl apply /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:766 apply set/view last-applied /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:803 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:14.993: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8757 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl run job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1595 [It] should create a job from an image when restart is OnFailure [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 11 20:15:15.652: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8757' Jan 11 20:15:16.078: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 11 20:15:16.078: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1600 Jan 11 20:15:16.167: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete jobs e2e-test-httpd-job --namespace=kubectl-8757' Jan 11 20:15:16.679: INFO: stderr: "" Jan 11 20:15:16.679: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:16.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8757" for this suite. 
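The run-job spec above still uses the deprecated --generator=job/v1 form, and kubectl itself prints the deprecation warning pointing at kubectl create. The equivalent with kubectl create job, same image and namespace as above (note that the generated Job's pod template defaults restartPolicy to Never, so a manifest is needed if OnFailure is specifically required):

kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine -n kubectl-8757
kubectl get job e2e-test-httpd-job -n kubectl-8757
kubectl delete job e2e-test-httpd-job -n kubectl-8757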
Jan 11 20:15:29.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:32.493: INFO: namespace kubectl-8757 deletion completed in 15.723048067s • [SLOW TEST:17.500 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 should create a job from an image when restart is OnFailure [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:06.958: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1400 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 20:15:09.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370508, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370508, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370508, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714370508, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 20:15:12.432: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:15:12.522: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5497-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:13.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1400" for this suite. Jan 11 20:15:20.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:23.397: INFO: namespace webhook-1400 deletion completed in 9.563271688s STEP: Destroying namespace "webhook-1400-markers" for this suite. Jan 11 20:15:29.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:33.027: INFO: namespace webhook-1400-markers deletion completed in 9.630657242s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:26.428 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:58.003: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9426 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating Redis RC Jan 11 20:14:58.656: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-9426' Jan 11 20:14:59.592: INFO: stderr: "" Jan 11 20:14:59.592: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 11 20:15:00.682: INFO: Selector matched 1 pods for map[app:redis] Jan 11 20:15:00.682: INFO: Found 0 / 1 Jan 11 20:15:01.683: INFO: Selector matched 1 pods for map[app:redis] Jan 11 20:15:01.683: INFO: Found 1 / 1 Jan 11 20:15:01.683: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 11 20:15:01.774: INFO: Selector matched 1 pods for map[app:redis] Jan 11 20:15:01.774: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 11 20:15:01.774: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config patch pod redis-master-jrg7t --namespace=kubectl-9426 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 11 20:15:02.323: INFO: stderr: "" Jan 11 20:15:02.323: INFO: stdout: "pod/redis-master-jrg7t patched\n" STEP: checking annotations Jan 11 20:15:02.414: INFO: Selector matched 1 pods for map[app:redis] Jan 11 20:15:02.414: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:02.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9426" for this suite. Jan 11 20:15:30.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:34.199: INFO: namespace kubectl-9426 deletion completed in 31.690706365s • [SLOW TEST:36.196 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1346 should add annotations for pods in rc [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:24.419: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename runtimeclass STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-4475 STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a non-existent RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:43 [AfterEach] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:25.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-4475" for this suite. 
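The RuntimeClass spec above submits a pod whose runtimeClassName references a class that was never created and expects the pod to be rejected. A minimal sketch of such a pod, with hypothetical names; depending on version and admission configuration, the rejection happens at admission time or when the kubelet cannot resolve the handler:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: runtimeclass-reject-demo
spec:
  runtimeClassName: does-not-exist     # no RuntimeClass object with this name exists
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF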
Jan 11 20:15:31.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:35.024: INFO: namespace runtimeclass-4475 deletion completed in 9.591784262s • [SLOW TEST:10.605 seconds] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:40 should reject a Pod requesting a non-existent RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:43 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:27.376: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3229 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:58 STEP: Creating configMap with name projected-configmap-test-volume-ed7b0c30-820e-4849-89f7-0f6a6d066671 STEP: Creating a pod to test consume configMaps Jan 11 20:15:28.198: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4719e64a-fe48-48ef-9c57-a03c65887f12" in namespace "projected-3229" to be "success or failure" Jan 11 20:15:28.288: INFO: Pod "pod-projected-configmaps-4719e64a-fe48-48ef-9c57-a03c65887f12": Phase="Pending", Reason="", readiness=false. Elapsed: 89.934964ms Jan 11 20:15:30.377: INFO: Pod "pod-projected-configmaps-4719e64a-fe48-48ef-9c57-a03c65887f12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179462295s STEP: Saw pod success Jan 11 20:15:30.377: INFO: Pod "pod-projected-configmaps-4719e64a-fe48-48ef-9c57-a03c65887f12" satisfied condition "success or failure" Jan 11 20:15:30.467: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-projected-configmaps-4719e64a-fe48-48ef-9c57-a03c65887f12 container projected-configmap-volume-test: STEP: delete the pod Jan 11 20:15:30.815: INFO: Waiting for pod pod-projected-configmaps-4719e64a-fe48-48ef-9c57-a03c65887f12 to disappear Jan 11 20:15:30.905: INFO: Pod pod-projected-configmaps-4719e64a-fe48-48ef-9c57-a03c65887f12 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:30.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3229" for this suite. 
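The Projected configMap spec above checks that a projected volume honours defaultMode and the pod-level fsGroup while the container runs as non-root. A minimal sketch of that shape, with a hypothetical ConfigMap name and key and a busybox image assumed:

kubectl create configmap projected-demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root
    fsGroup: 1000              # group ownership applied to the projected files
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0440        # r--r----- on the projected files; readable via the fsGroup
      sources:
      - configMap:
          name: projected-demo-cm
EOF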
Jan 11 20:15:39.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:42.594: INFO: namespace projected-3229 deletion completed in 11.598499811s • [SLOW TEST:15.218 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:58 ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:32.517: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2843 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 20:15:33.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce650180-49b8-430c-987d-0a14e8955090" in namespace "projected-2843" to be "success or failure" Jan 11 20:15:33.338: INFO: Pod "downwardapi-volume-ce650180-49b8-430c-987d-0a14e8955090": Phase="Pending", Reason="", readiness=false. Elapsed: 89.714873ms Jan 11 20:15:35.428: INFO: Pod "downwardapi-volume-ce650180-49b8-430c-987d-0a14e8955090": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180053129s STEP: Saw pod success Jan 11 20:15:35.428: INFO: Pod "downwardapi-volume-ce650180-49b8-430c-987d-0a14e8955090" satisfied condition "success or failure" Jan 11 20:15:35.518: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod downwardapi-volume-ce650180-49b8-430c-987d-0a14e8955090 container client-container: STEP: delete the pod Jan 11 20:15:35.707: INFO: Waiting for pod downwardapi-volume-ce650180-49b8-430c-987d-0a14e8955090 to disappear Jan 11 20:15:35.796: INFO: Pod downwardapi-volume-ce650180-49b8-430c-987d-0a14e8955090 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:35.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2843" for this suite. 
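The Projected downwardAPI spec above exposes the container's CPU request as a file through a resourceFieldRef. A minimal sketch with hypothetical names; the divisor of 1m reports the request in millicores, so the container below prints 250:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF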
Jan 11 20:15:42.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:45.465: INFO: namespace projected-2843 deletion completed in 9.577951427s • [SLOW TEST:12.948 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:35.029: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-6231 STEP: Waiting for a default service account to be provisioned in namespace [It] should support readOnly directory specified in the volumeMount /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:347 Jan 11 20:15:35.849: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 20:15:36.033: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6231" in namespace "provisioning-6231" to be "success or failure" Jan 11 20:15:36.123: INFO: Pod "hostpath-symlink-prep-provisioning-6231": Phase="Pending", Reason="", readiness=false. Elapsed: 89.608227ms Jan 11 20:15:38.213: INFO: Pod "hostpath-symlink-prep-provisioning-6231": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179591793s STEP: Saw pod success Jan 11 20:15:38.213: INFO: Pod "hostpath-symlink-prep-provisioning-6231" satisfied condition "success or failure" Jan 11 20:15:38.213: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6231" in namespace "provisioning-6231" Jan 11 20:15:38.307: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6231" to be fully deleted Jan 11 20:15:38.396: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-swpl STEP: Creating a pod to test subpath Jan 11 20:15:38.488: INFO: Waiting up to 5m0s for pod "pod-subpath-test-hostpathsymlink-swpl" in namespace "provisioning-6231" to be "success or failure" Jan 11 20:15:38.578: INFO: Pod "pod-subpath-test-hostpathsymlink-swpl": Phase="Pending", Reason="", readiness=false. Elapsed: 90.049647ms Jan 11 20:15:40.668: INFO: Pod "pod-subpath-test-hostpathsymlink-swpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180489872s Jan 11 20:15:42.759: INFO: Pod "pod-subpath-test-hostpathsymlink-swpl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.271115024s STEP: Saw pod success Jan 11 20:15:42.759: INFO: Pod "pod-subpath-test-hostpathsymlink-swpl" satisfied condition "success or failure" Jan 11 20:15:42.849: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-hostpathsymlink-swpl container test-container-subpath-hostpathsymlink-swpl: STEP: delete the pod Jan 11 20:15:43.042: INFO: Waiting for pod pod-subpath-test-hostpathsymlink-swpl to disappear Jan 11 20:15:43.131: INFO: Pod pod-subpath-test-hostpathsymlink-swpl no longer exists STEP: Deleting pod pod-subpath-test-hostpathsymlink-swpl Jan 11 20:15:43.131: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-swpl" in namespace "provisioning-6231" STEP: Deleting pod Jan 11 20:15:43.222: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-swpl" in namespace "provisioning-6231" Jan 11 20:15:43.403: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6231" in namespace "provisioning-6231" to be "success or failure" Jan 11 20:15:43.493: INFO: Pod "hostpath-symlink-prep-provisioning-6231": Phase="Pending", Reason="", readiness=false. Elapsed: 90.258729ms Jan 11 20:15:45.583: INFO: Pod "hostpath-symlink-prep-provisioning-6231": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180125708s STEP: Saw pod success Jan 11 20:15:45.583: INFO: Pod "hostpath-symlink-prep-provisioning-6231" satisfied condition "success or failure" Jan 11 20:15:45.583: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6231" in namespace "provisioning-6231" Jan 11 20:15:45.675: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6231" to be fully deleted Jan 11 20:15:45.764: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:45.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-6231" for this suite. 
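The subPath spec above mounts a directory from an existing volume read-only via volumeMount.subPath. A minimal sketch of that mount pattern; it swaps the hostPathSymlink driver for an emptyDir plus an init container purely for self-containedness, and all names, the image, and paths are illustrative assumptions:

```go
// Sketch: a read-only subPath mount; an init container seeds the sub-directory first.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-readonly-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			// The init container populates a sub-directory inside the volume.
			InitContainers: []corev1.Container{{
				Name:         "init-volume",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "mkdir -p /vol/provisioning && echo hello > /vol/provisioning/data"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/vol"}},
			}},
			// The test container sees only that sub-directory, mounted read-only.
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /subpath/data && ! touch /subpath/should-fail"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/subpath",
					SubPath:   "provisioning",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```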
Jan 11 20:15:54.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:57.449: INFO: namespace provisioning-6231 deletion completed in 11.593919615s • [SLOW TEST:22.420 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support readOnly directory specified in the volumeMount /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:347 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:42.596: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5353 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name projected-secret-test-b62d6c0c-3744-4977-b904-1a305955e614 STEP: Creating a pod to test consume secrets Jan 11 20:15:44.230: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0990c73-cf23-49c0-886d-5b01c166b6d6" in namespace "projected-5353" to be "success or failure" Jan 11 20:15:44.319: INFO: Pod "pod-projected-secrets-a0990c73-cf23-49c0-886d-5b01c166b6d6": Phase="Pending", Reason="", readiness=false. Elapsed: 89.525218ms Jan 11 20:15:46.409: INFO: Pod "pod-projected-secrets-a0990c73-cf23-49c0-886d-5b01c166b6d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179280159s STEP: Saw pod success Jan 11 20:15:46.409: INFO: Pod "pod-projected-secrets-a0990c73-cf23-49c0-886d-5b01c166b6d6" satisfied condition "success or failure" Jan 11 20:15:46.499: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-projected-secrets-a0990c73-cf23-49c0-886d-5b01c166b6d6 container secret-volume-test: STEP: delete the pod Jan 11 20:15:46.691: INFO: Waiting for pod pod-projected-secrets-a0990c73-cf23-49c0-886d-5b01c166b6d6 to disappear Jan 11 20:15:46.781: INFO: Pod pod-projected-secrets-a0990c73-cf23-49c0-886d-5b01c166b6d6 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:46.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5353" for this suite. 
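The projected-secret spec above consumes the same Secret in multiple volumes of one Pod. A minimal sketch of that shape, with an assumed Secret name, image, and mount paths rather than the real fixture:

```go
// Sketch: one Secret projected into two independent volumes in the same Pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secretName := "projected-secret-test" // illustrative Secret name

	// Helper building a projected volume that references the shared Secret.
	secretVolume := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{secretVolume("secret-volume-1"), secretVolume("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```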
Jan 11 20:15:55.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:58.455: INFO: namespace projected-5353 deletion completed in 11.582719752s • [SLOW TEST:15.859 seconds] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:45.488: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-797 STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 20:15:48.008: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:48.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-797" for this suite. 
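The container-runtime spec above verifies that the termination message comes from the termination-message file when the container exits successfully and TerminationMessagePolicy is FallbackToLogsOnError. A minimal sketch of such a container, with an assumed name and image:

```go
// Sketch: termination message written to a file, policy FallbackToLogsOnError.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "busybox",
				// Writes the message to the termination-message file and exits 0.
				Command:                []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				TerminationMessagePath: "/dev/termination-log",
				// With FallbackToLogsOnError the kubelet takes the file content when it is
				// non-empty and only falls back to the container log on a failed exit.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

After the Pod terminates, the message surfaces in status.containerStatuses[0].state.terminated.message, which is what the assertion in the log ("Expected: &{OK} to match ...") checks.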
Jan 11 20:15:56.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:15:59.850: INFO: namespace container-runtime-797 deletion completed in 11.570149691s • [SLOW TEST:14.363 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 on terminated container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:58.458: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4041 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with secret that has name projected-secret-test-map-2a1e4776-26db-40f4-82a9-14019ab8566d STEP: Creating a pod to test consume secrets Jan 11 20:15:59.631: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d7883f3e-ebfe-40ee-95e8-7f1391b33db0" in namespace "projected-4041" to be "success or failure" Jan 11 20:15:59.721: INFO: Pod "pod-projected-secrets-d7883f3e-ebfe-40ee-95e8-7f1391b33db0": Phase="Pending", Reason="", readiness=false. Elapsed: 89.6093ms Jan 11 20:16:01.810: INFO: Pod "pod-projected-secrets-d7883f3e-ebfe-40ee-95e8-7f1391b33db0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179252211s STEP: Saw pod success Jan 11 20:16:01.810: INFO: Pod "pod-projected-secrets-d7883f3e-ebfe-40ee-95e8-7f1391b33db0" satisfied condition "success or failure" Jan 11 20:16:01.900: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-projected-secrets-d7883f3e-ebfe-40ee-95e8-7f1391b33db0 container projected-secret-volume-test: STEP: delete the pod Jan 11 20:16:02.222: INFO: Waiting for pod pod-projected-secrets-d7883f3e-ebfe-40ee-95e8-7f1391b33db0 to disappear Jan 11 20:16:02.311: INFO: Pod pod-projected-secrets-d7883f3e-ebfe-40ee-95e8-7f1391b33db0 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:02.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4041" for this suite. 
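The projected-secret "with mappings" spec above remaps individual Secret keys to chosen file paths via items. A minimal sketch of that volume, with assumed key, path, and mode values:

```go
// Sketch: projected Secret with a key-to-path mapping and a per-file mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-file mode for the mapped key (illustrative)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secret-mappings-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								// Items remap individual keys to chosen paths inside the mount.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume", ReadOnly: true}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```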
Jan 11 20:16:08.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:11.986: INFO: namespace projected-4041 deletion completed in 9.583942031s • [SLOW TEST:13.529 seconds] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:34.203: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-3621 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should function for client IP based session affinity: http /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:205 STEP: Performing setup for networking test in namespace nettest-3621 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 20:15:35.096: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: Getting node addresses Jan 11 20:15:54.541: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 20:15:54.722: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:54.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-3621" for this suite. 
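The networking spec above was skipped on this run because the framework required at least two schedulable nodes. For context, a minimal sketch of the kind of Service shape it would have exercised (client-IP session affinity); selector, ports, and timeout are assumptions:

```go
// Sketch: a Service with SessionAffinity: ClientIP, so one client IP sticks to one backend.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	timeout := int32(10800) // default ClientIP affinity timeout, in seconds

	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clientip"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "netserver"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
				Protocol:   corev1.ProtocolTCP,
			}},
			// All requests from one client IP keep hitting the same backend pod.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```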
Jan 11 20:16:09.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:12.402: INFO: namespace nettest-3621 deletion completed in 17.587759206s S [SKIPPING] [38.199 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should function for client IP based session affinity: http [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:205 Requires at least 2 nodes (not -1) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:57.467: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename svcaccounts STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-7953 STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: getting the auto-created API token Jan 11 20:15:58.931: INFO: mount-test service account has no secret references STEP: getting the auto-created API token STEP: reading a file in the container Jan 11 20:16:01.792: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-7953 pod-service-account-19c860d9-2c1b-4f8e-b723-88eeb4745c97 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 11 20:16:03.108: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-7953 pod-service-account-19c860d9-2c1b-4f8e-b723-88eeb4745c97 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 11 20:16:04.426: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl exec --namespace=svcaccounts-7953 pod-service-account-19c860d9-2c1b-4f8e-b723-88eeb4745c97 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:05.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7953" for this suite. 
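The ServiceAccounts spec above checks that the auto-created API token, CA certificate, and namespace are mounted into the Pod at the well-known path, which is what the kubectl exec commands in the log read back. A minimal sketch of a Pod relying on that automount, with an assumed image:

```go
// Sketch: a Pod whose ServiceAccount credentials are automounted by the kubelet.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	automount := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:                corev1.RestartPolicyNever,
			ServiceAccountName:           "default",
			AutomountServiceAccountToken: &automount,
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "busybox",
				// The credentials are mounted at this well-known path.
				Command: []string{"sh", "-c",
					"cat /var/run/secrets/kubernetes.io/serviceaccount/token " +
						"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt " +
						"/var/run/secrets/kubernetes.io/serviceaccount/namespace"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```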
Jan 11 20:16:16.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:19.490: INFO: namespace svcaccounts-7953 deletion completed in 13.600654795s • [SLOW TEST:22.023 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:11.988: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7360 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-volume-8c24ee29-ecba-46bd-8906-3004261ec259 STEP: Creating a pod to test consume configMaps Jan 11 20:16:13.830: INFO: Waiting up to 5m0s for pod "pod-configmaps-35f34d8d-c3c0-4c9e-baa1-0afd8f1220a9" in namespace "configmap-7360" to be "success or failure" Jan 11 20:16:13.919: INFO: Pod "pod-configmaps-35f34d8d-c3c0-4c9e-baa1-0afd8f1220a9": Phase="Pending", Reason="", readiness=false. Elapsed: 89.319661ms Jan 11 20:16:16.010: INFO: Pod "pod-configmaps-35f34d8d-c3c0-4c9e-baa1-0afd8f1220a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179967404s STEP: Saw pod success Jan 11 20:16:16.010: INFO: Pod "pod-configmaps-35f34d8d-c3c0-4c9e-baa1-0afd8f1220a9" satisfied condition "success or failure" Jan 11 20:16:16.099: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-configmaps-35f34d8d-c3c0-4c9e-baa1-0afd8f1220a9 container configmap-volume-test: STEP: delete the pod Jan 11 20:16:16.290: INFO: Waiting for pod pod-configmaps-35f34d8d-c3c0-4c9e-baa1-0afd8f1220a9 to disappear Jan 11 20:16:16.380: INFO: Pod pod-configmaps-35f34d8d-c3c0-4c9e-baa1-0afd8f1220a9 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:16.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7360" for this suite. 
Jan 11 20:16:22.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:26.067: INFO: namespace configmap-7360 deletion completed in 9.595441813s • [SLOW TEST:14.079 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:59.858: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4029 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 20:16:02.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 20:16:05.494: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 11 20:16:05.902: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:06.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4029" for this suite. Jan 11 20:16:16.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:19.810: INFO: namespace webhook-4029 deletion completed in 13.587070307s STEP: Destroying namespace "webhook-4029-markers" for this suite. 
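The admission-webhook spec above registers a validating webhook that intercepts CustomResourceDefinition creation and rejects it. A rough sketch of the shape of that registration object using the admissionregistration.k8s.io/v1 types; the webhook name, service reference, path, and CA bundle are placeholders, not the e2e framework's actual configuration:

```go
// Sketch: a ValidatingWebhookConfiguration that denies CREATE of CustomResourceDefinitions.
package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/crd"
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone

	cfg := admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-creation"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com", // placeholder webhook name
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-ns",         // placeholder
					Name:      "e2e-test-webhook",   // placeholder service name
					Path:      &path,
				},
				CABundle: []byte("<PEM-encoded CA bundle>"), // placeholder
			},
			// Intercept CREATE requests for CustomResourceDefinitions and reject them.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```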
Jan 11 20:16:26.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:29.388: INFO: namespace webhook-4029-markers deletion completed in 9.578011852s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:29.893 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:108 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:10.561: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-1550 STEP: Waiting for a default service account to be provisioned in namespace [It] should provision storage with pvc data source /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:207 STEP: deploying csi-hostpath driver Jan 11 20:15:11.642: INFO: creating *v1.ServiceAccount: provisioning-1550/csi-attacher Jan 11 20:15:11.733: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-1550 Jan 11 20:15:11.733: INFO: Define cluster role external-attacher-runner-provisioning-1550 Jan 11 20:15:11.823: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-1550 Jan 11 20:15:11.912: INFO: creating *v1.Role: provisioning-1550/external-attacher-cfg-provisioning-1550 Jan 11 20:15:12.002: INFO: creating *v1.RoleBinding: provisioning-1550/csi-attacher-role-cfg Jan 11 20:15:12.092: INFO: creating *v1.ServiceAccount: provisioning-1550/csi-provisioner Jan 11 20:15:12.181: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-1550 Jan 11 20:15:12.181: INFO: Define cluster role external-provisioner-runner-provisioning-1550 Jan 11 20:15:12.271: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-1550 Jan 11 20:15:12.360: INFO: creating *v1.Role: provisioning-1550/external-provisioner-cfg-provisioning-1550 Jan 11 20:15:12.450: INFO: creating *v1.RoleBinding: provisioning-1550/csi-provisioner-role-cfg Jan 11 20:15:12.540: INFO: creating *v1.ServiceAccount: provisioning-1550/csi-snapshotter Jan 11 20:15:12.629: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-provisioning-1550 Jan 11 20:15:12.629: INFO: Define cluster role external-snapshotter-runner-provisioning-1550 Jan 11 20:15:12.719: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-1550 Jan 11 20:15:12.809: INFO: creating *v1.Role: provisioning-1550/external-snapshotter-leaderelection-provisioning-1550 Jan 11 20:15:12.898: INFO: creating *v1.RoleBinding: provisioning-1550/external-snapshotter-leaderelection Jan 11 20:15:12.988: INFO: creating *v1.ServiceAccount: provisioning-1550/csi-resizer Jan 11 20:15:13.078: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-1550 Jan 11 20:15:13.078: INFO: Define cluster role external-resizer-runner-provisioning-1550 Jan 11 20:15:13.168: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-1550 Jan 11 20:15:13.258: INFO: creating *v1.Role: provisioning-1550/external-resizer-cfg-provisioning-1550 Jan 11 20:15:13.347: INFO: creating *v1.RoleBinding: provisioning-1550/csi-resizer-role-cfg Jan 11 20:15:13.437: INFO: creating *v1.Service: provisioning-1550/csi-hostpath-attacher Jan 11 20:15:13.531: INFO: creating *v1.StatefulSet: provisioning-1550/csi-hostpath-attacher Jan 11 20:15:13.623: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-1550 Jan 11 20:15:13.713: INFO: creating *v1.Service: provisioning-1550/csi-hostpathplugin Jan 11 20:15:13.806: INFO: creating *v1.StatefulSet: provisioning-1550/csi-hostpathplugin Jan 11 20:15:13.896: INFO: creating *v1.Service: provisioning-1550/csi-hostpath-provisioner Jan 11 20:15:13.991: INFO: creating *v1.StatefulSet: provisioning-1550/csi-hostpath-provisioner Jan 11 20:15:14.081: INFO: creating *v1.Service: provisioning-1550/csi-hostpath-resizer Jan 11 20:15:14.174: INFO: creating *v1.StatefulSet: provisioning-1550/csi-hostpath-resizer Jan 11 20:15:14.263: INFO: creating *v1.Service: provisioning-1550/csi-snapshotter Jan 11 20:15:14.357: INFO: creating *v1.StatefulSet: provisioning-1550/csi-snapshotter Jan 11 20:15:14.447: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-1550 Jan 11 20:15:14.537: INFO: Test running for native CSI Driver, not checking metrics Jan 11 20:15:14.537: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-1550-csi-hostpath-provisioning-1550-scl5nwl 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Provisioner:csi-hostpath-provisioning-1550,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1550 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{5368709120 0} {} 5Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1550-csi-hostpath-provisioning-1550-scl5nwl,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1550 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{5368709120 0} {} 5Gi 
BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1550-csi-hostpath-provisioning-1550-scl5nwl,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} STEP: [Initialize dataSource]creating a StorageClass provisioning-1550-csi-hostpath-provisioning-1550-scl5nwl STEP: [Initialize dataSource]creating a source PVC STEP: [Initialize dataSource]write data to volume Jan 11 20:15:14.808: INFO: Waiting up to 15m0s for pod "pvc-datasource-writer-z9wwd" in namespace "provisioning-1550" to be "success or failure" Jan 11 20:15:14.897: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 88.98882ms Jan 11 20:15:16.986: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178062401s Jan 11 20:15:19.075: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267711268s Jan 11 20:15:21.165: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357574261s Jan 11 20:15:23.255: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447158995s Jan 11 20:15:25.344: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.536659771s Jan 11 20:15:27.436: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.627973309s Jan 11 20:15:29.525: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.717289817s Jan 11 20:15:31.615: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.807033176s Jan 11 20:15:33.704: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.896660492s Jan 11 20:15:35.794: INFO: Pod "pvc-datasource-writer-z9wwd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.985966887s STEP: Saw pod success Jan 11 20:15:35.794: INFO: Pod "pvc-datasource-writer-z9wwd" satisfied condition "success or failure" Jan 11 20:15:35.890: INFO: Pod pvc-datasource-writer-z9wwd has the following logs: STEP: Deleting pod pvc-datasource-writer-z9wwd in namespace provisioning-1550 STEP: creating a StorageClass provisioning-1550-csi-hostpath-provisioning-1550-scl5nwl STEP: creating a claim STEP: checking whether the created volume has the pre-populated data Jan 11 20:15:36.429: INFO: Waiting up to 15m0s for pod "pvc-datasource-tester-rj8s7" in namespace "provisioning-1550" to be "success or failure" Jan 11 20:15:36.518: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Pending", Reason="", readiness=false. Elapsed: 89.241879ms Jan 11 20:15:38.608: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178553649s Jan 11 20:15:40.697: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268289169s Jan 11 20:15:42.787: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357603216s Jan 11 20:15:44.876: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446866163s Jan 11 20:15:46.965: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.536134843s Jan 11 20:15:49.055: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.62592558s Jan 11 20:15:51.144: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.715256104s Jan 11 20:15:53.234: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.804554324s Jan 11 20:15:55.323: INFO: Pod "pvc-datasource-tester-rj8s7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.893668732s STEP: Saw pod success Jan 11 20:15:55.323: INFO: Pod "pvc-datasource-tester-rj8s7" satisfied condition "success or failure" Jan 11 20:15:55.418: INFO: Pod pvc-datasource-tester-rj8s7 has the following logs: provisioning-1550 STEP: Deleting pod pvc-datasource-tester-rj8s7 in namespace provisioning-1550 Jan 11 20:15:55.599: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-2qhjc] to have phase Bound Jan 11 20:15:55.688: INFO: PersistentVolumeClaim pvc-2qhjc found and phase=Bound (88.563296ms) STEP: checking the claim STEP: checking the PV STEP: deleting claim "provisioning-1550"/"pvc-2qhjc" STEP: deleting the claim's PV "pvc-667e9aef-6758-4b85-8c99-39ca7b84574d" Jan 11 20:15:55.956: INFO: Waiting up to 20m0s for PersistentVolume pvc-667e9aef-6758-4b85-8c99-39ca7b84574d to get deleted Jan 11 20:15:56.045: INFO: PersistentVolume pvc-667e9aef-6758-4b85-8c99-39ca7b84574d was removed Jan 11 20:15:56.045: INFO: deleting claim "provisioning-1550"/"pvc-2qhjc" Jan 11 20:15:56.134: INFO: deleting storage class provisioning-1550-csi-hostpath-provisioning-1550-scl5nwl Jan 11 20:15:56.226: INFO: deleting source PVC "provisioning-1550"/"pvc-k7hwr" STEP: uninstalling csi-hostpath driver Jan 11 20:15:56.316: INFO: deleting *v1.ServiceAccount: provisioning-1550/csi-attacher Jan 11 20:15:56.406: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-1550 Jan 11 20:15:56.498: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-1550 Jan 11 20:15:56.589: INFO: deleting *v1.Role: provisioning-1550/external-attacher-cfg-provisioning-1550 Jan 11 20:15:56.680: INFO: deleting *v1.RoleBinding: provisioning-1550/csi-attacher-role-cfg Jan 11 20:15:56.771: INFO: deleting *v1.ServiceAccount: provisioning-1550/csi-provisioner Jan 11 20:15:56.862: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-1550 Jan 11 20:15:56.953: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-1550 Jan 11 20:15:57.044: INFO: deleting *v1.Role: provisioning-1550/external-provisioner-cfg-provisioning-1550 Jan 11 20:15:57.134: INFO: deleting *v1.RoleBinding: provisioning-1550/csi-provisioner-role-cfg Jan 11 20:15:57.225: INFO: deleting *v1.ServiceAccount: provisioning-1550/csi-snapshotter Jan 11 20:15:57.315: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-1550 Jan 11 20:15:57.405: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-1550 Jan 11 20:15:57.496: INFO: deleting *v1.Role: provisioning-1550/external-snapshotter-leaderelection-provisioning-1550 Jan 11 20:15:57.588: INFO: deleting *v1.RoleBinding: provisioning-1550/external-snapshotter-leaderelection Jan 11 20:15:57.679: INFO: deleting *v1.ServiceAccount: provisioning-1550/csi-resizer Jan 11 20:15:57.770: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-1550 Jan 11 20:15:57.860: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-1550 Jan 11 20:15:57.951: INFO: deleting *v1.Role: provisioning-1550/external-resizer-cfg-provisioning-1550 Jan 11 20:15:58.042: INFO: deleting *v1.RoleBinding: 
provisioning-1550/csi-resizer-role-cfg Jan 11 20:15:58.132: INFO: deleting *v1.Service: provisioning-1550/csi-hostpath-attacher Jan 11 20:15:58.228: INFO: deleting *v1.StatefulSet: provisioning-1550/csi-hostpath-attacher Jan 11 20:15:58.319: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-1550 Jan 11 20:15:58.410: INFO: deleting *v1.Service: provisioning-1550/csi-hostpathplugin Jan 11 20:15:58.505: INFO: deleting *v1.StatefulSet: provisioning-1550/csi-hostpathplugin Jan 11 20:15:58.596: INFO: deleting *v1.Service: provisioning-1550/csi-hostpath-provisioner Jan 11 20:15:58.690: INFO: deleting *v1.StatefulSet: provisioning-1550/csi-hostpath-provisioner Jan 11 20:15:58.781: INFO: deleting *v1.Service: provisioning-1550/csi-hostpath-resizer Jan 11 20:15:58.876: INFO: deleting *v1.StatefulSet: provisioning-1550/csi-hostpath-resizer Jan 11 20:15:58.966: INFO: deleting *v1.Service: provisioning-1550/csi-snapshotter Jan 11 20:15:59.062: INFO: deleting *v1.StatefulSet: provisioning-1550/csi-snapshotter Jan 11 20:15:59.152: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-1550 [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:15:59.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-1550" for this suite. Jan 11 20:16:27.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:30.904: INFO: namespace provisioning-1550 deletion completed in 31.569984961s • [SLOW TEST:80.343 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] provisioning /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should provision storage with pvc data source /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:207 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:12.414: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-6156 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating the pod STEP: setting up selector STEP: submitting the pod 
to kubernetes STEP: verifying the pod is in kubernetes Jan 11 20:16:16.595: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 11 20:16:22.224: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:22.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6156" for this suite. Jan 11 20:16:28.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:32.118: INFO: namespace pods-6156 deletion completed in 9.713165213s • [SLOW TEST:19.705 seconds] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 [k8s.io] Delete Grace Period /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should be submitted and removed [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:19.498: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-8361 STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating multiple subpath from same volumes [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:277 Jan 11 20:16:20.154: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:16:20.154: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-9j2l STEP: Creating a pod to test multi_subpath Jan 11 20:16:20.246: INFO: Waiting up to 5m0s for pod "pod-subpath-test-emptydir-9j2l" in namespace "provisioning-8361" to be "success or failure" Jan 11 20:16:20.336: INFO: Pod "pod-subpath-test-emptydir-9j2l": Phase="Pending", Reason="", readiness=false. Elapsed: 89.87273ms Jan 11 20:16:22.426: INFO: Pod "pod-subpath-test-emptydir-9j2l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179930734s Jan 11 20:16:24.517: INFO: Pod "pod-subpath-test-emptydir-9j2l": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.270742494s STEP: Saw pod success Jan 11 20:16:24.517: INFO: Pod "pod-subpath-test-emptydir-9j2l" satisfied condition "success or failure" Jan 11 20:16:24.607: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-emptydir-9j2l container test-container-subpath-emptydir-9j2l: STEP: delete the pod Jan 11 20:16:24.797: INFO: Waiting for pod pod-subpath-test-emptydir-9j2l to disappear Jan 11 20:16:24.887: INFO: Pod pod-subpath-test-emptydir-9j2l no longer exists STEP: Deleting pod Jan 11 20:16:24.887: INFO: Deleting pod "pod-subpath-test-emptydir-9j2l" in namespace "provisioning-8361" Jan 11 20:16:24.977: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:24.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-8361" for this suite. Jan 11 20:16:31.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:34.664: INFO: namespace provisioning-8361 deletion completed in 9.594792756s • [SLOW TEST:15.166 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support creating multiple subpath from same volumes [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:277 ------------------------------ SSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:15:33.393: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volumemode STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volumemode-2239 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:278 STEP: deploying csi-hostpath driver Jan 11 20:15:34.235: INFO: creating *v1.ServiceAccount: volumemode-2239/csi-attacher Jan 11 20:15:34.325: INFO: creating *v1.ClusterRole: external-attacher-runner-volumemode-2239 Jan 11 20:15:34.325: INFO: Define cluster role external-attacher-runner-volumemode-2239 Jan 11 20:15:34.414: INFO: creating 
*v1.ClusterRoleBinding: csi-attacher-role-volumemode-2239 Jan 11 20:15:34.504: INFO: creating *v1.Role: volumemode-2239/external-attacher-cfg-volumemode-2239 Jan 11 20:15:34.593: INFO: creating *v1.RoleBinding: volumemode-2239/csi-attacher-role-cfg Jan 11 20:15:34.682: INFO: creating *v1.ServiceAccount: volumemode-2239/csi-provisioner Jan 11 20:15:34.771: INFO: creating *v1.ClusterRole: external-provisioner-runner-volumemode-2239 Jan 11 20:15:34.771: INFO: Define cluster role external-provisioner-runner-volumemode-2239 Jan 11 20:15:34.861: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volumemode-2239 Jan 11 20:15:34.950: INFO: creating *v1.Role: volumemode-2239/external-provisioner-cfg-volumemode-2239 Jan 11 20:15:35.039: INFO: creating *v1.RoleBinding: volumemode-2239/csi-provisioner-role-cfg Jan 11 20:15:35.129: INFO: creating *v1.ServiceAccount: volumemode-2239/csi-snapshotter Jan 11 20:15:35.218: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volumemode-2239 Jan 11 20:15:35.218: INFO: Define cluster role external-snapshotter-runner-volumemode-2239 Jan 11 20:15:35.307: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volumemode-2239 Jan 11 20:15:35.396: INFO: creating *v1.Role: volumemode-2239/external-snapshotter-leaderelection-volumemode-2239 Jan 11 20:15:35.485: INFO: creating *v1.RoleBinding: volumemode-2239/external-snapshotter-leaderelection Jan 11 20:15:35.574: INFO: creating *v1.ServiceAccount: volumemode-2239/csi-resizer Jan 11 20:15:35.663: INFO: creating *v1.ClusterRole: external-resizer-runner-volumemode-2239 Jan 11 20:15:35.663: INFO: Define cluster role external-resizer-runner-volumemode-2239 Jan 11 20:15:35.752: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volumemode-2239 Jan 11 20:15:35.842: INFO: creating *v1.Role: volumemode-2239/external-resizer-cfg-volumemode-2239 Jan 11 20:15:35.931: INFO: creating *v1.RoleBinding: volumemode-2239/csi-resizer-role-cfg Jan 11 20:15:36.021: INFO: creating *v1.Service: volumemode-2239/csi-hostpath-attacher Jan 11 20:15:36.114: INFO: creating *v1.StatefulSet: volumemode-2239/csi-hostpath-attacher Jan 11 20:15:36.204: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volumemode-2239 Jan 11 20:15:36.294: INFO: creating *v1.Service: volumemode-2239/csi-hostpathplugin Jan 11 20:15:36.387: INFO: creating *v1.StatefulSet: volumemode-2239/csi-hostpathplugin Jan 11 20:15:36.476: INFO: creating *v1.Service: volumemode-2239/csi-hostpath-provisioner Jan 11 20:15:36.570: INFO: creating *v1.StatefulSet: volumemode-2239/csi-hostpath-provisioner Jan 11 20:15:36.660: INFO: creating *v1.Service: volumemode-2239/csi-hostpath-resizer Jan 11 20:15:36.753: INFO: creating *v1.StatefulSet: volumemode-2239/csi-hostpath-resizer Jan 11 20:15:36.843: INFO: creating *v1.Service: volumemode-2239/csi-snapshotter Jan 11 20:15:36.937: INFO: creating *v1.StatefulSet: volumemode-2239/csi-snapshotter Jan 11 20:15:37.047: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumemode-2239 Jan 11 20:15:37.136: INFO: Test running for native CSI Driver, not checking metrics Jan 11 20:15:37.136: INFO: Creating resource for dynamic PV STEP: creating a StorageClass volumemode-2239-csi-hostpath-volumemode-2239-scdl8lb STEP: creating a claim Jan 11 20:15:37.316: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathpj4bv] to have phase Bound Jan 11 20:15:37.405: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. 
Jan 11 20:15:39.494: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:15:41.583: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:15:43.673: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:15:45.762: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:15:47.851: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:15:49.940: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:15:52.029: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:15:54.118: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:15:56.207: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:15:58.296: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:16:00.385: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:16:02.475: INFO: PersistentVolumeClaim csi-hostpathpj4bv found but phase is Pending instead of Bound. Jan 11 20:16:04.565: INFO: PersistentVolumeClaim csi-hostpathpj4bv found and phase=Bound (27.248473114s) STEP: Creating pod STEP: Waiting for the pod to fail Jan 11 20:16:07.107: INFO: Deleting pod "security-context-e8334d35-3e34-4ed5-ba3d-ba8d1e898140" in namespace "volumemode-2239" Jan 11 20:16:07.197: INFO: Wait up to 5m0s for pod "security-context-e8334d35-3e34-4ed5-ba3d-ba8d1e898140" to be fully deleted WARNING: pod log: security-context-e8334d35-3e34-4ed5-ba3d-ba8d1e898140/write-pod: pods "security-context-e8334d35-3e34-4ed5-ba3d-ba8d1e898140" not found STEP: Deleting pvc Jan 11 20:16:19.375: INFO: Deleting PersistentVolumeClaim "csi-hostpathpj4bv" Jan 11 20:16:19.465: INFO: Waiting up to 5m0s for PersistentVolume pvc-5baca660-ceba-47a6-a7de-fcd829ece141 to get deleted Jan 11 20:16:19.555: INFO: PersistentVolume pvc-5baca660-ceba-47a6-a7de-fcd829ece141 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 20:16:19.646: INFO: deleting *v1.ServiceAccount: volumemode-2239/csi-attacher Jan 11 20:16:19.736: INFO: deleting *v1.ClusterRole: external-attacher-runner-volumemode-2239 Jan 11 20:16:19.827: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volumemode-2239 Jan 11 20:16:19.918: INFO: deleting *v1.Role: volumemode-2239/external-attacher-cfg-volumemode-2239 Jan 11 20:16:20.009: INFO: deleting *v1.RoleBinding: volumemode-2239/csi-attacher-role-cfg Jan 11 20:16:20.100: INFO: deleting *v1.ServiceAccount: volumemode-2239/csi-provisioner Jan 11 20:16:20.191: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volumemode-2239 Jan 11 20:16:20.281: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volumemode-2239 Jan 11 20:16:20.372: INFO: deleting *v1.Role: volumemode-2239/external-provisioner-cfg-volumemode-2239 Jan 11 20:16:20.463: INFO: deleting *v1.RoleBinding: volumemode-2239/csi-provisioner-role-cfg Jan 11 20:16:20.554: INFO: deleting *v1.ServiceAccount: volumemode-2239/csi-snapshotter Jan 11 20:16:20.645: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volumemode-2239 Jan 11 20:16:20.736: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volumemode-2239 Jan 11 20:16:20.827: INFO: deleting *v1.Role: 
volumemode-2239/external-snapshotter-leaderelection-volumemode-2239 Jan 11 20:16:20.918: INFO: deleting *v1.RoleBinding: volumemode-2239/external-snapshotter-leaderelection Jan 11 20:16:21.009: INFO: deleting *v1.ServiceAccount: volumemode-2239/csi-resizer Jan 11 20:16:21.099: INFO: deleting *v1.ClusterRole: external-resizer-runner-volumemode-2239 Jan 11 20:16:21.190: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volumemode-2239 Jan 11 20:16:21.281: INFO: deleting *v1.Role: volumemode-2239/external-resizer-cfg-volumemode-2239 Jan 11 20:16:21.372: INFO: deleting *v1.RoleBinding: volumemode-2239/csi-resizer-role-cfg Jan 11 20:16:21.462: INFO: deleting *v1.Service: volumemode-2239/csi-hostpath-attacher Jan 11 20:16:21.558: INFO: deleting *v1.StatefulSet: volumemode-2239/csi-hostpath-attacher Jan 11 20:16:21.649: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volumemode-2239 Jan 11 20:16:21.739: INFO: deleting *v1.Service: volumemode-2239/csi-hostpathplugin Jan 11 20:16:21.834: INFO: deleting *v1.StatefulSet: volumemode-2239/csi-hostpathplugin Jan 11 20:16:21.925: INFO: deleting *v1.Service: volumemode-2239/csi-hostpath-provisioner Jan 11 20:16:22.029: INFO: deleting *v1.StatefulSet: volumemode-2239/csi-hostpath-provisioner Jan 11 20:16:22.120: INFO: deleting *v1.Service: volumemode-2239/csi-hostpath-resizer Jan 11 20:16:22.215: INFO: deleting *v1.StatefulSet: volumemode-2239/csi-hostpath-resizer Jan 11 20:16:22.306: INFO: deleting *v1.Service: volumemode-2239/csi-snapshotter Jan 11 20:16:22.401: INFO: deleting *v1.StatefulSet: volumemode-2239/csi-snapshotter Jan 11 20:16:22.492: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumemode-2239 [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:22.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volumemode-2239" for this suite. 
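The volumeMode case torn down above provisions a PVC with volumeMode: Block and then expects a pod that consumes it as a filesystem volume to fail. A minimal hand-run sketch of the same mismatch, assuming a StorageClass named block-sc that supports raw block volumes; the claim and pod names are illustrative, not the generated ones from the log:

# Block-mode claim plus a pod that mounts it filesystem-style; kubelet should refuse to run it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-claim
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block
  storageClassName: block-sc
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mismatched-mode-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:                 # filesystem mount of a Block-mode claim -> volume mode mismatch
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-claim
EOF
# The pod should never reach Running; the mismatch shows up in its events.
kubectl describe pod mismatched-mode-pod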
WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled WARNING: pod log: csi-hostpath-provisioner-0/csi-provisioner: context canceled WARNING: pod log: csi-hostpath-resizer-0/csi-resizer: context canceled WARNING: pod log: csi-hostpathplugin-0/node-driver-registrar: context canceled WARNING: pod log: csi-hostpathplugin-0/hostpath: context canceled WARNING: pod log: csi-hostpathplugin-0/liveness-probe: context canceled WARNING: pod log: csi-snapshotter-0/csi-snapshotter: context canceled Jan 11 20:16:34.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:38.237: INFO: namespace volumemode-2239 deletion completed in 15.563878252s • [SLOW TEST:64.845 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (block volmode)] volumeMode /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail to use a volume in a pod with mismatched mode [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:278 ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:34 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:32.135: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sysctl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-1386 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:188 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:34.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1386" for this suite. 
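The Sysctls case above creates a pod that requests an unsafe ("greylisted") sysctl and checks that the kubelet rejects it because the sysctl was never explicitly allowed on the node. A rough equivalent by hand, using kernel.msgmax as an example unsafe sysctl (the log does not show which sysctl the test actually used):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: greylisted-sysctl-pod
spec:
  securityContext:
    sysctls:
    - name: kernel.msgmax     # unsafe sysctl; needs kubelet --allowed-unsafe-sysctls to be permitted
      value: "65536"
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF
# Expect the kubelet to reject the pod rather than start it
# (typically phase Failed with reason SysctlForbidden).
kubectl get pod greylisted-sysctl-pod -o jsonpath='{.status.phase}{"/"}{.status.reason}{"\n"}'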
Jan 11 20:16:41.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:44.647: INFO: namespace sysctl-1386 deletion completed in 9.597672364s • [SLOW TEST:12.512 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:188 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:38.239: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-8287 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:358 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:41.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8287" for this suite. 
Jan 11 20:16:47.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:51.156: INFO: namespace container-runtime-8287 deletion completed in 9.567330744s • [SLOW TEST:12.916 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 when running a container with a new image /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:252 should not be able to pull image from invalid registry [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:358 ------------------------------ SSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:29.762: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-4000 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to unmount after the subpath directory is deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:425 Jan 11 20:16:30.402: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:16:30.402: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-nfvb Jan 11 20:16:32.674: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-4000 pod-subpath-test-emptydir-nfvb --container test-container-volume-emptydir-nfvb -- /bin/sh -c rm -r /test-volume/provisioning-4000' Jan 11 20:16:34.025: INFO: stderr: "" Jan 11 20:16:34.025: INFO: stdout: "" STEP: Deleting pod pod-subpath-test-emptydir-nfvb Jan 11 20:16:34.025: INFO: Deleting pod "pod-subpath-test-emptydir-nfvb" in namespace "provisioning-4000" Jan 11 20:16:34.116: INFO: Wait up to 5m0s for pod "pod-subpath-test-emptydir-nfvb" to be fully deleted STEP: Deleting pod Jan 11 20:16:44.296: INFO: Deleting pod "pod-subpath-test-emptydir-nfvb" in namespace "provisioning-4000" Jan 11 20:16:44.386: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:44.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"provisioning-4000" for this suite. Jan 11 20:16:50.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:54.054: INFO: namespace provisioning-4000 deletion completed in 9.577306684s • [SLOW TEST:24.292 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should be able to unmount after the subpath directory is deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:425 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:44.661: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-23 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 20:16:45.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94d9ed48-7bbc-4cd0-bf32-adacd1ffae23" in namespace "projected-23" to be "success or failure" Jan 11 20:16:45.535: INFO: Pod "downwardapi-volume-94d9ed48-7bbc-4cd0-bf32-adacd1ffae23": Phase="Pending", Reason="", readiness=false. Elapsed: 90.589973ms Jan 11 20:16:47.626: INFO: Pod "downwardapi-volume-94d9ed48-7bbc-4cd0-bf32-adacd1ffae23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181056516s STEP: Saw pod success Jan 11 20:16:47.626: INFO: Pod "downwardapi-volume-94d9ed48-7bbc-4cd0-bf32-adacd1ffae23" satisfied condition "success or failure" Jan 11 20:16:47.716: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-94d9ed48-7bbc-4cd0-bf32-adacd1ffae23 container client-container: STEP: delete the pod Jan 11 20:16:47.906: INFO: Waiting for pod downwardapi-volume-94d9ed48-7bbc-4cd0-bf32-adacd1ffae23 to disappear Jan 11 20:16:47.996: INFO: Pod downwardapi-volume-94d9ed48-7bbc-4cd0-bf32-adacd1ffae23 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:47.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-23" for this suite. 
Jan 11 20:16:56.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:59.680: INFO: namespace projected-23 deletion completed in 11.593143277s • [SLOW TEST:15.019 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:34.672: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2494 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-7244 STEP: Creating secret with name secret-test-2844a0d4-8fdb-46fb-bc51-df263f378ce3 STEP: Creating a pod to test consume secrets Jan 11 20:16:36.140: INFO: Waiting up to 5m0s for pod "pod-secrets-0f0a3c01-d32e-47a6-b7d2-c40c922533c9" in namespace "secrets-2494" to be "success or failure" Jan 11 20:16:36.230: INFO: Pod "pod-secrets-0f0a3c01-d32e-47a6-b7d2-c40c922533c9": Phase="Pending", Reason="", readiness=false. Elapsed: 89.872609ms Jan 11 20:16:38.320: INFO: Pod "pod-secrets-0f0a3c01-d32e-47a6-b7d2-c40c922533c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180024249s STEP: Saw pod success Jan 11 20:16:38.320: INFO: Pod "pod-secrets-0f0a3c01-d32e-47a6-b7d2-c40c922533c9" satisfied condition "success or failure" Jan 11 20:16:38.410: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-secrets-0f0a3c01-d32e-47a6-b7d2-c40c922533c9 container secret-volume-test: STEP: delete the pod Jan 11 20:16:38.735: INFO: Waiting for pod pod-secrets-0f0a3c01-d32e-47a6-b7d2-c40c922533c9 to disappear Jan 11 20:16:38.824: INFO: Pod pod-secrets-0f0a3c01-d32e-47a6-b7d2-c40c922533c9 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:38.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2494" for this suite. Jan 11 20:16:45.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:16:48.546: INFO: namespace secrets-2494 deletion completed in 9.630473439s STEP: Destroying namespace "secret-namespace-7244" for this suite. 
Jan 11 20:16:56.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:17:00.151: INFO: namespace secret-namespace-7244 deletion completed in 11.605441431s • [SLOW TEST:25.480 seconds] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:54.057: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9965 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:46 [It] new files should be created with FSGroup ownership when container is non-root /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:55 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 11 20:16:55.339: INFO: Waiting up to 5m0s for pod "pod-6772fd4e-cdf0-47ec-8559-b51032cfe8f8" in namespace "emptydir-9965" to be "success or failure" Jan 11 20:16:55.429: INFO: Pod "pod-6772fd4e-cdf0-47ec-8559-b51032cfe8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 89.588722ms Jan 11 20:16:57.519: INFO: Pod "pod-6772fd4e-cdf0-47ec-8559-b51032cfe8f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179635307s STEP: Saw pod success Jan 11 20:16:57.519: INFO: Pod "pod-6772fd4e-cdf0-47ec-8559-b51032cfe8f8" satisfied condition "success or failure" Jan 11 20:16:57.609: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-6772fd4e-cdf0-47ec-8559-b51032cfe8f8 container test-container: STEP: delete the pod Jan 11 20:16:57.798: INFO: Waiting for pod pod-6772fd4e-cdf0-47ec-8559-b51032cfe8f8 to disappear Jan 11 20:16:57.889: INFO: Pod pod-6772fd4e-cdf0-47ec-8559-b51032cfe8f8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:57.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9965" for this suite. 
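The EmptyDir FSGroup case above writes a file into a tmpfs-backed emptyDir from a non-root container and checks that the new file picks up the pod-level fsGroup ownership. A minimal sketch of that setup; the names and IDs are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 123               # group ownership applied to the emptyDir volume
  containers:
  - name: test-container
    image: busybox
    securityContext:
      runAsUser: 1000          # non-root writer
    command: ["sh", "-c", "touch /data/new-file && ls -ln /data/new-file"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir:
      medium: Memory           # tmpfs, matching the "emptydir 0644 on tmpfs" step in the log
EOF
# The listed numeric group of /data/new-file should be 123.
kubectl logs emptydir-fsgroup-demo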
Jan 11 20:17:04.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:17:07.562: INFO: namespace emptydir-9965 deletion completed in 9.581877617s • [SLOW TEST:13.504 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44 new files should be created with FSGroup ownership when container is non-root /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:55 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:51.164: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename containers STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-162 STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:54.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-162" for this suite. 
Jan 11 20:17:04.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:17:07.869: INFO: namespace containers-162 deletion completed in 13.56236231s • [SLOW TEST:16.705 seconds] [k8s.io] Docker Containers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:30.908: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-1791 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should update endpoints: udp /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:170 STEP: Performing setup for networking test in namespace nettest-1791 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 20:16:31.616: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: Getting node addresses Jan 11 20:16:55.099: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 20:16:55.279: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:16:55.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-1791" for this suite. 
Jan 11 20:17:07.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:17:10.937: INFO: namespace nettest-1791 deletion completed in 15.566574653s S [SKIPPING] [40.029 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should update endpoints: udp [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:170 Requires at least 2 nodes (not -1) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:17:07.600: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9670 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should release NodePorts on delete /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1319 STEP: creating service nodeport-reuse with type NodePort in namespace services-9670 STEP: deleting original service nodeport-reuse Jan 11 20:17:10.715: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-9670 hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :30304' | tail -n +2 | grep LISTEN' Jan 11 20:17:12.111: INFO: stderr: "+ tail -n +2\n+ ss -ant46 sport = :30304\n+ grep LISTEN\n" Jan 11 20:17:12.111: INFO: stdout: "" STEP: creating service nodeport-reuse with same NodePort 30304 STEP: deleting service nodeport-reuse in namespace services-9670 [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:17:12.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9670" for this suite. 
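The Services case above deletes a NodePort service and then uses a host-network "hostexec" helper pod to assert nothing is still listening on the freed port before recreating a service pinned to the same nodePort. A hand-run sketch of the same check, reusing port 30304 from the log and assuming a hostexec pod like the one the framework creates:

# Nothing should be listening on the released node port any more
# (the command exits non-zero if a LISTEN socket is still present).
kubectl exec -n <namespace> hostexec -- \
  /bin/sh -x -c "! ss -ant46 'sport = :30304' | tail -n +2 | grep LISTEN"
# A new Service of type NodePort with spec.ports[0].nodePort: 30304 should now be accepted.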
Jan 11 20:17:18.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:17:21.975: INFO: namespace services-9670 deletion completed in 9.579755083s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:14.375 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should release NodePorts on delete /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1319 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:17:00.173: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pv STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-4483 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:110 [BeforeEach] NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:127 STEP: creating nfs-server pod STEP: locating the "nfs-server" server pod Jan 11 20:17:03.514: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs nfs-server nfs-server --namespace=pv-4483' Jan 11 20:17:04.052: INFO: stderr: "" Jan 11 20:17:04.053: INFO: stdout: "Serving /exports\nrpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused\nStarting rpcbind\nNFS started\n" Jan 11 20:17:04.053: INFO: nfs server pod IP address: 100.64.1.5 [It] should create 3 PVs and 3 PVCs: test write access /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:241 Jan 11 20:17:04.053: INFO: Creating a PV followed by a PVC Jan 11 20:17:04.233: INFO: Creating a PV followed by a PVC Jan 11 20:17:04.413: INFO: Creating a PV followed by a PVC Jan 11 20:17:04.594: INFO: Waiting up to 3m0s for PersistentVolume nfs-5cmzz to have phase Bound Jan 11 20:17:04.684: INFO: PersistentVolume nfs-5cmzz found and phase=Bound (90.140298ms) Jan 11 20:17:04.774: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-scvkl] to have phase Bound Jan 11 20:17:04.863: INFO: PersistentVolumeClaim pvc-scvkl found and phase=Bound (89.670277ms) Jan 11 20:17:04.863: INFO: Waiting up to 3m0s for PersistentVolume nfs-ql67r to have phase Bound Jan 11 20:17:04.953: INFO: PersistentVolume nfs-ql67r found and phase=Bound (89.793422ms) Jan 11 20:17:05.043: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-5v4s2] to have phase Bound Jan 11 20:17:05.133: INFO: PersistentVolumeClaim pvc-5v4s2 found and phase=Bound (89.663704ms) Jan 11 20:17:05.133: INFO: Waiting 
up to 3m0s for PersistentVolume nfs-rq8tj to have phase Bound Jan 11 20:17:05.223: INFO: PersistentVolume nfs-rq8tj found and phase=Bound (89.763271ms) Jan 11 20:17:05.313: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-zd5bm] to have phase Bound Jan 11 20:17:05.404: INFO: PersistentVolumeClaim pvc-zd5bm found and phase=Bound (90.868673ms) STEP: Checking pod has write access to PersistentVolumes Jan 11 20:17:05.493: INFO: Creating nfs test pod STEP: Pod should terminate with exitcode 0 (success) Jan 11 20:17:05.584: INFO: Waiting up to 5m0s for pod "pvc-tester-4jdll" in namespace "pv-4483" to be "success or failure" Jan 11 20:17:05.674: INFO: Pod "pvc-tester-4jdll": Phase="Pending", Reason="", readiness=false. Elapsed: 89.67052ms Jan 11 20:17:07.764: INFO: Pod "pvc-tester-4jdll": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179949273s STEP: Saw pod success Jan 11 20:17:07.764: INFO: Pod "pvc-tester-4jdll" satisfied condition "success or failure" Jan 11 20:17:07.764: INFO: Pod pvc-tester-4jdll succeeded Jan 11 20:17:07.764: INFO: Deleting pod "pvc-tester-4jdll" in namespace "pv-4483" Jan 11 20:17:07.858: INFO: Wait up to 5m0s for pod "pvc-tester-4jdll" to be fully deleted Jan 11 20:17:08.038: INFO: Creating nfs test pod STEP: Pod should terminate with exitcode 0 (success) Jan 11 20:17:08.128: INFO: Waiting up to 5m0s for pod "pvc-tester-llw56" in namespace "pv-4483" to be "success or failure" Jan 11 20:17:08.218: INFO: Pod "pvc-tester-llw56": Phase="Pending", Reason="", readiness=false. Elapsed: 89.528669ms Jan 11 20:17:10.308: INFO: Pod "pvc-tester-llw56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.17975664s STEP: Saw pod success Jan 11 20:17:10.308: INFO: Pod "pvc-tester-llw56" satisfied condition "success or failure" Jan 11 20:17:10.308: INFO: Pod pvc-tester-llw56 succeeded Jan 11 20:17:10.308: INFO: Deleting pod "pvc-tester-llw56" in namespace "pv-4483" Jan 11 20:17:10.402: INFO: Wait up to 5m0s for pod "pvc-tester-llw56" to be fully deleted Jan 11 20:17:10.581: INFO: Creating nfs test pod STEP: Pod should terminate with exitcode 0 (success) Jan 11 20:17:10.672: INFO: Waiting up to 5m0s for pod "pvc-tester-dx7tm" in namespace "pv-4483" to be "success or failure" Jan 11 20:17:10.762: INFO: Pod "pvc-tester-dx7tm": Phase="Pending", Reason="", readiness=false. Elapsed: 89.676068ms Jan 11 20:17:12.852: INFO: Pod "pvc-tester-dx7tm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179782302s STEP: Saw pod success Jan 11 20:17:12.852: INFO: Pod "pvc-tester-dx7tm" satisfied condition "success or failure" Jan 11 20:17:12.852: INFO: Pod pvc-tester-dx7tm succeeded Jan 11 20:17:12.852: INFO: Deleting pod "pvc-tester-dx7tm" in namespace "pv-4483" Jan 11 20:17:12.944: INFO: Wait up to 5m0s for pod "pvc-tester-dx7tm" to be fully deleted STEP: Deleting PVCs to invoke reclaim policy Jan 11 20:17:13.213: INFO: Deleting PVC pvc-scvkl to trigger reclamation of PV nfs-5cmzz Jan 11 20:17:13.213: INFO: Deleting PersistentVolumeClaim "pvc-scvkl" Jan 11 20:17:13.303: INFO: Waiting for reclaim process to complete. 
Jan 11 20:17:13.303: INFO: Waiting up to 3m0s for PersistentVolume nfs-5cmzz to have phase Released Jan 11 20:17:13.393: INFO: PersistentVolume nfs-5cmzz found and phase=Released (89.542822ms) Jan 11 20:17:13.483: INFO: PV nfs-5cmzz now in "Released" phase Jan 11 20:17:13.662: INFO: Deleting PVC pvc-5v4s2 to trigger reclamation of PV nfs-ql67r Jan 11 20:17:13.662: INFO: Deleting PersistentVolumeClaim "pvc-5v4s2" Jan 11 20:17:13.752: INFO: Waiting for reclaim process to complete. Jan 11 20:17:13.752: INFO: Waiting up to 3m0s for PersistentVolume nfs-ql67r to have phase Released Jan 11 20:17:13.842: INFO: PersistentVolume nfs-ql67r found but phase is Bound instead of Released. Jan 11 20:17:15.932: INFO: PersistentVolume nfs-ql67r found and phase=Released (2.179588259s) Jan 11 20:17:16.022: INFO: PV nfs-ql67r now in "Released" phase Jan 11 20:17:16.201: INFO: Deleting PVC pvc-zd5bm to trigger reclamation of PV nfs-rq8tj Jan 11 20:17:16.201: INFO: Deleting PersistentVolumeClaim "pvc-zd5bm" Jan 11 20:17:16.291: INFO: Waiting for reclaim process to complete. Jan 11 20:17:16.291: INFO: Waiting up to 3m0s for PersistentVolume nfs-rq8tj to have phase Released Jan 11 20:17:16.381: INFO: PersistentVolume nfs-rq8tj found and phase=Released (89.507905ms) Jan 11 20:17:16.471: INFO: PV nfs-rq8tj now in "Released" phase [AfterEach] with multiple PVs and PVCs all in same ns /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:217 Jan 11 20:17:16.471: INFO: AfterEach: deleting 0 PVCs and 3 PVs... Jan 11 20:17:16.471: INFO: Deleting PersistentVolume "nfs-5cmzz" Jan 11 20:17:16.561: INFO: Deleting PersistentVolume "nfs-ql67r" Jan 11 20:17:16.652: INFO: Deleting PersistentVolume "nfs-rq8tj" [AfterEach] NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:147 Jan 11 20:17:16.743: INFO: Deleting pod "nfs-server" in namespace "pv-4483" Jan 11 20:17:16.834: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted [AfterEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:17:25.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4483" for this suite. 
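The NFS PersistentVolumes case above binds three pre-created PVs to three PVCs, verifies write access from pods, then deletes the PVCs and waits for each PV to move to the Released phase; the PVs are cleaned up manually afterwards, which is consistent with a Retain reclaim policy. A small sketch of watching that phase transition by hand, with illustrative PV/PVC names:

# Delete the claim, then watch its PV leave Bound and settle in Released.
kubectl delete pvc pvc-example
kubectl get pv nfs-example -o jsonpath='{.status.phase}{"\n"}'   # Bound -> Released
# With reclaim policy Retain the Released PV keeps its data and must be removed explicitly:
kubectl delete pv nfs-example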
Jan 11 20:17:31.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:17:34.692: INFO: namespace pv-4483 deletion completed in 9.586828038s • [SLOW TEST:34.520 seconds] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:120 with multiple PVs and PVCs all in same ns /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:210 should create 3 PVs and 3 PVCs: test write access /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:241 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:17:21.983: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-50 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:111 STEP: Creating configMap with name projected-configmap-test-volume-map-14f0213c-60f2-4099-a089-e37beb755089 STEP: Creating a pod to test consume configMaps Jan 11 20:17:22.805: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-08041897-b337-4833-8a11-dfaf4ffcd91e" in namespace "projected-50" to be "success or failure" Jan 11 20:17:22.895: INFO: Pod "pod-projected-configmaps-08041897-b337-4833-8a11-dfaf4ffcd91e": Phase="Pending", Reason="", readiness=false. Elapsed: 89.897218ms Jan 11 20:17:24.986: INFO: Pod "pod-projected-configmaps-08041897-b337-4833-8a11-dfaf4ffcd91e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180237263s STEP: Saw pod success Jan 11 20:17:24.986: INFO: Pod "pod-projected-configmaps-08041897-b337-4833-8a11-dfaf4ffcd91e" satisfied condition "success or failure" Jan 11 20:17:25.075: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-configmaps-08041897-b337-4833-8a11-dfaf4ffcd91e container projected-configmap-volume-test: STEP: delete the pod Jan 11 20:17:25.267: INFO: Waiting for pod pod-projected-configmaps-08041897-b337-4833-8a11-dfaf4ffcd91e to disappear Jan 11 20:17:25.356: INFO: Pod pod-projected-configmaps-08041897-b337-4833-8a11-dfaf4ffcd91e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:17:25.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-50" for this suite. 
Jan 11 20:17:31.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:17:35.036: INFO: namespace projected-50 deletion completed in 9.58813917s • [SLOW TEST:13.052 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:111 ------------------------------ SSSSS ------------------------------ [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:59.702: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-privileged-pod-4009 STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:48 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container Jan 11 20:17:04.817: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-4009 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:17:04.817: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:17:05.668: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-4009 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:17:05.668: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Executing in the non-privileged container Jan 11 20:17:06.524: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-4009 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:17:06.525: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:17:07.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-4009" for this suite. 
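The PrivilegedPod case above execs `ip link add dummy1 type dummy` in a privileged container (where it succeeds) and in a non-privileged container of the same pod (where it fails). A minimal sketch of the two containers, assuming the cluster allows privileged pods as the bound PSP in the log does; names and image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod-demo
spec:
  containers:
  - name: privileged-container
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true         # may create and delete network interfaces
  - name: not-privileged-container
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: false
EOF
# Succeeds in the privileged container:
kubectl exec privileged-pod-demo -c privileged-container -- ip link add dummy1 type dummy
kubectl exec privileged-pod-demo -c privileged-container -- ip link del dummy1
# Fails (typically "Operation not permitted") in the non-privileged one:
kubectl exec privileged-pod-demo -c not-privileged-container -- ip link add dummy1 type dummy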
Jan 11 20:17:55.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:17:59.044: INFO: namespace e2e-privileged-pod-4009 deletion completed in 51.593283041s • [SLOW TEST:59.342 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should enable privileged commands [LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:48 ------------------------------ S ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:17:35.043: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-7429 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to unmount after the subpath directory is deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:425 Jan 11 20:17:35.682: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 20:17:35.864: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7429" in namespace "provisioning-7429" to be "success or failure" Jan 11 20:17:35.954: INFO: Pod "hostpath-symlink-prep-provisioning-7429": Phase="Pending", Reason="", readiness=false. Elapsed: 89.909092ms Jan 11 20:17:38.045: INFO: Pod "hostpath-symlink-prep-provisioning-7429": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180058314s STEP: Saw pod success Jan 11 20:17:38.045: INFO: Pod "hostpath-symlink-prep-provisioning-7429" satisfied condition "success or failure" Jan 11 20:17:38.045: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7429" in namespace "provisioning-7429" Jan 11 20:17:38.149: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7429" to be fully deleted Jan 11 20:17:38.238: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpathsymlink-48nv Jan 11 20:17:40.509: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-7429 pod-subpath-test-hostpathsymlink-48nv --container test-container-volume-hostpathsymlink-48nv -- /bin/sh -c rm -r /test-volume/provisioning-7429' Jan 11 20:17:41.858: INFO: stderr: "" Jan 11 20:17:41.858: INFO: stdout: "" STEP: Deleting pod pod-subpath-test-hostpathsymlink-48nv Jan 11 20:17:41.858: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-48nv" in namespace "provisioning-7429" Jan 11 20:17:41.950: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpathsymlink-48nv" to be fully deleted STEP: Deleting pod Jan 11 20:17:54.130: INFO: Deleting pod "pod-subpath-test-hostpathsymlink-48nv" in namespace "provisioning-7429" Jan 11 20:17:54.309: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7429" in namespace "provisioning-7429" to be "success or failure" Jan 11 20:17:54.400: INFO: Pod "hostpath-symlink-prep-provisioning-7429": Phase="Pending", Reason="", readiness=false. Elapsed: 90.812367ms Jan 11 20:17:56.491: INFO: Pod "hostpath-symlink-prep-provisioning-7429": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181299583s STEP: Saw pod success Jan 11 20:17:56.491: INFO: Pod "hostpath-symlink-prep-provisioning-7429" satisfied condition "success or failure" Jan 11 20:17:56.491: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7429" in namespace "provisioning-7429" Jan 11 20:17:56.584: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7429" to be fully deleted Jan 11 20:17:56.673: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:17:56.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-7429" for this suite. 
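The subPath cases above (emptydir earlier, hostPathSymlink here) mount one directory of a volume into a container via subPath, delete that directory from a second container that sees the whole volume, and then verify the pod still unmounts and deletes cleanly. A minimal sketch of the subPath shape, using an emptyDir for simplicity and illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: full-volume            # sees the whole volume; used to delete the subdirectory
    image: busybox
    command: ["sh", "-c", "mkdir -p /test-volume/subdir && sleep 3600"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  - name: subpath-only           # sees only the subdirectory
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vol
      mountPath: /data
      subPath: subdir
  volumes:
  - name: vol
    emptyDir: {}
EOF
# Remove the directory backing the subPath, then delete the pod; it should
# terminate and unmount without getting stuck.
kubectl exec subpath-demo -c full-volume -- rm -r /test-volume/subdir
kubectl delete pod subpath-demo --wait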
Jan 11 20:18:03.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:06.360: INFO: namespace provisioning-7429 deletion completed in 9.595858175s • [SLOW TEST:31.317 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPathSymlink] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should be able to unmount after the subpath directory is deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:425 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:17:34.727: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-8897 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 11 20:17:40.184: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 20:17:40.274: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 20:17:42.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 20:17:42.365: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 20:17:44.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 20:17:44.365: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 20:17:46.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 20:17:46.367: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 20:17:48.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 20:17:48.365: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 20:17:50.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 20:17:50.365: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 20:17:52.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 20:17:52.366: INFO: Pod pod-with-prestop-http-hook still exists Jan 11 20:17:54.275: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 11 20:17:54.365: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:17:54.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8897" for this suite. 
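The lifecycle hook case above starts a handler pod, then a pod whose container declares a preStop httpGet hook pointing at that handler, deletes the hooked pod, and finally checks that the handler received the request. A minimal sketch of the hook declaration, assuming a reachable handler at 10.0.0.10:8080 (illustrative address):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        httpGet:                  # fired by the kubelet before the container is stopped
          host: 10.0.0.10         # illustrative handler address
          port: 8080
          path: /echo?msg=prestop
EOF
# Deleting the pod triggers the preStop GET during graceful termination.
kubectl delete pod pod-with-prestop-http-hook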
Jan 11 20:18:06.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:10.144: INFO: namespace container-lifecycle-hook-8897 deletion completed in 15.590151704s • [SLOW TEST:35.417 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when create a pod with lifecycle hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:79 [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:17:10.946: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename ephemeral STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ephemeral-1155 STEP: Waiting for a default service account to be provisioned in namespace [It] should create read/write inline ephemeral volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:128 STEP: deploying csi-hostpath driver Jan 11 20:17:11.770: INFO: creating *v1.ServiceAccount: ephemeral-1155/csi-attacher Jan 11 20:17:11.860: INFO: creating *v1.ClusterRole: external-attacher-runner-ephemeral-1155 Jan 11 20:17:11.860: INFO: Define cluster role external-attacher-runner-ephemeral-1155 Jan 11 20:17:11.949: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-1155 Jan 11 20:17:12.039: INFO: creating *v1.Role: ephemeral-1155/external-attacher-cfg-ephemeral-1155 Jan 11 20:17:12.128: INFO: creating *v1.RoleBinding: ephemeral-1155/csi-attacher-role-cfg Jan 11 20:17:12.218: INFO: creating *v1.ServiceAccount: ephemeral-1155/csi-provisioner Jan 11 20:17:12.307: INFO: creating *v1.ClusterRole: external-provisioner-runner-ephemeral-1155 Jan 11 20:17:12.307: INFO: Define cluster role external-provisioner-runner-ephemeral-1155 Jan 11 20:17:12.397: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-1155 Jan 11 20:17:12.486: INFO: creating *v1.Role: ephemeral-1155/external-provisioner-cfg-ephemeral-1155 Jan 11 20:17:12.575: INFO: creating *v1.RoleBinding: ephemeral-1155/csi-provisioner-role-cfg Jan 11 20:17:12.665: INFO: creating *v1.ServiceAccount: ephemeral-1155/csi-snapshotter Jan 11 20:17:12.754: INFO: creating *v1.ClusterRole: external-snapshotter-runner-ephemeral-1155 Jan 11 20:17:12.754: INFO: Define cluster role 
external-snapshotter-runner-ephemeral-1155 Jan 11 20:17:12.844: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-1155 Jan 11 20:17:12.933: INFO: creating *v1.Role: ephemeral-1155/external-snapshotter-leaderelection-ephemeral-1155 Jan 11 20:17:13.023: INFO: creating *v1.RoleBinding: ephemeral-1155/external-snapshotter-leaderelection Jan 11 20:17:13.112: INFO: creating *v1.ServiceAccount: ephemeral-1155/csi-resizer Jan 11 20:17:13.202: INFO: creating *v1.ClusterRole: external-resizer-runner-ephemeral-1155 Jan 11 20:17:13.202: INFO: Define cluster role external-resizer-runner-ephemeral-1155 Jan 11 20:17:13.292: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-1155 Jan 11 20:17:13.381: INFO: creating *v1.Role: ephemeral-1155/external-resizer-cfg-ephemeral-1155 Jan 11 20:17:13.471: INFO: creating *v1.RoleBinding: ephemeral-1155/csi-resizer-role-cfg Jan 11 20:17:13.560: INFO: creating *v1.Service: ephemeral-1155/csi-hostpath-attacher Jan 11 20:17:13.654: INFO: creating *v1.StatefulSet: ephemeral-1155/csi-hostpath-attacher Jan 11 20:17:13.744: INFO: creating *v1beta1.CSIDriver: csi-hostpath-ephemeral-1155 Jan 11 20:17:13.834: INFO: creating *v1.Service: ephemeral-1155/csi-hostpathplugin Jan 11 20:17:13.928: INFO: creating *v1.StatefulSet: ephemeral-1155/csi-hostpathplugin Jan 11 20:17:14.018: INFO: creating *v1.Service: ephemeral-1155/csi-hostpath-provisioner Jan 11 20:17:14.110: INFO: creating *v1.StatefulSet: ephemeral-1155/csi-hostpath-provisioner Jan 11 20:17:14.199: INFO: creating *v1.Service: ephemeral-1155/csi-hostpath-resizer Jan 11 20:17:14.292: INFO: creating *v1.StatefulSet: ephemeral-1155/csi-hostpath-resizer Jan 11 20:17:14.382: INFO: creating *v1.Service: ephemeral-1155/csi-snapshotter Jan 11 20:17:14.474: INFO: creating *v1.StatefulSet: ephemeral-1155/csi-snapshotter Jan 11 20:17:14.564: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-1155 STEP: checking the requested inline volume exists in the pod running on node {Name:ip-10-250-7-77.ec2.internal Selector:map[] Affinity:nil} Jan 11 20:17:21.013: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=ephemeral-1155 inline-volume-tester-95slf -- /bin/sh -c mount | grep /mnt/test | grep rw,' Jan 11 20:17:22.321: INFO: stderr: "" Jan 11 20:17:22.321: INFO: stdout: "/dev/nvme0n1p9 on /mnt/test-0 type ext4 (rw,seclabel,relatime)\n" Jan 11 20:17:22.550: INFO: Pod inline-volume-tester-95slf has the following logs: /dev/nvme0n1p9 on /mnt/test-0 type ext4 (rw,seclabel,relatime) STEP: Deleting pod inline-volume-tester-95slf in namespace ephemeral-1155 STEP: uninstalling csi-hostpath driver Jan 11 20:17:54.819: INFO: deleting *v1.ServiceAccount: ephemeral-1155/csi-attacher Jan 11 20:17:54.911: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-1155 Jan 11 20:17:55.002: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-1155 Jan 11 20:17:55.093: INFO: deleting *v1.Role: ephemeral-1155/external-attacher-cfg-ephemeral-1155 Jan 11 20:17:55.184: INFO: deleting *v1.RoleBinding: ephemeral-1155/csi-attacher-role-cfg Jan 11 20:17:55.274: INFO: deleting *v1.ServiceAccount: ephemeral-1155/csi-provisioner Jan 11 20:17:55.365: INFO: deleting *v1.ClusterRole: external-provisioner-runner-ephemeral-1155 Jan 11 20:17:55.456: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-1155 Jan 11 20:17:55.546: 
INFO: deleting *v1.Role: ephemeral-1155/external-provisioner-cfg-ephemeral-1155 Jan 11 20:17:55.637: INFO: deleting *v1.RoleBinding: ephemeral-1155/csi-provisioner-role-cfg Jan 11 20:17:55.732: INFO: deleting *v1.ServiceAccount: ephemeral-1155/csi-snapshotter Jan 11 20:17:55.822: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-ephemeral-1155 Jan 11 20:17:55.913: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-1155 Jan 11 20:17:56.004: INFO: deleting *v1.Role: ephemeral-1155/external-snapshotter-leaderelection-ephemeral-1155 Jan 11 20:17:56.095: INFO: deleting *v1.RoleBinding: ephemeral-1155/external-snapshotter-leaderelection Jan 11 20:17:56.187: INFO: deleting *v1.ServiceAccount: ephemeral-1155/csi-resizer Jan 11 20:17:56.278: INFO: deleting *v1.ClusterRole: external-resizer-runner-ephemeral-1155 Jan 11 20:17:56.369: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-1155 Jan 11 20:17:56.459: INFO: deleting *v1.Role: ephemeral-1155/external-resizer-cfg-ephemeral-1155 Jan 11 20:17:56.550: INFO: deleting *v1.RoleBinding: ephemeral-1155/csi-resizer-role-cfg Jan 11 20:17:56.645: INFO: deleting *v1.Service: ephemeral-1155/csi-hostpath-attacher Jan 11 20:17:56.744: INFO: deleting *v1.StatefulSet: ephemeral-1155/csi-hostpath-attacher Jan 11 20:17:56.835: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-ephemeral-1155 Jan 11 20:17:56.926: INFO: deleting *v1.Service: ephemeral-1155/csi-hostpathplugin Jan 11 20:17:57.021: INFO: deleting *v1.StatefulSet: ephemeral-1155/csi-hostpathplugin Jan 11 20:17:57.112: INFO: deleting *v1.Service: ephemeral-1155/csi-hostpath-provisioner Jan 11 20:17:57.208: INFO: deleting *v1.StatefulSet: ephemeral-1155/csi-hostpath-provisioner Jan 11 20:17:57.299: INFO: deleting *v1.Service: ephemeral-1155/csi-hostpath-resizer Jan 11 20:17:57.394: INFO: deleting *v1.StatefulSet: ephemeral-1155/csi-hostpath-resizer Jan 11 20:17:57.485: INFO: deleting *v1.Service: ephemeral-1155/csi-snapshotter Jan 11 20:17:57.582: INFO: deleting *v1.StatefulSet: ephemeral-1155/csi-snapshotter Jan 11 20:17:57.673: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-1155 [AfterEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:17:57.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ephemeral-1155" for this suite. 
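For anyone reproducing the inline CSI ephemeral volume check above by hand, a minimal pod sketch follows. It assumes a CSI driver that supports inline ephemeral volumes is still deployed; the driver name below reuses the per-namespace csi-hostpath deployment the test created, and the pod name, container name, and image are purely illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: inline-volume-demo
spec:
  containers:
  - name: tester
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: vol-0
      mountPath: /mnt/test-0
  volumes:
  - name: vol-0
    csi:
      driver: csi-hostpath-ephemeral-1155   # illustrative; use whichever inline-capable driver is installed
EOF
# Same read/write check the test ran against its pod:
kubectl exec inline-volume-demo -- sh -c 'mount | grep /mnt/test | grep rw,'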
Jan 11 20:18:10.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:13.424: INFO: namespace ephemeral-1155 deletion completed in 15.569230398s • [SLOW TEST:62.478 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should create read/write inline ephemeral volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:128 ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:17:07.892: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename watch STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-6281 STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 11 20:17:08.904: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-a e7bc2793-4cf6-4e38-9f32-1c40469f22fd 76649 0 2020-01-11 20:17:08 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 11 20:17:08.904: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-a e7bc2793-4cf6-4e38-9f32-1c40469f22fd 76649 0 2020-01-11 20:17:08 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 11 20:17:19.083: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-a e7bc2793-4cf6-4e38-9f32-1c40469f22fd 76822 0 2020-01-11 20:17:08 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 11 20:17:19.083: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-a e7bc2793-4cf6-4e38-9f32-1c40469f22fd 76822 0 2020-01-11 20:17:08 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 11 20:17:29.262: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-a e7bc2793-4cf6-4e38-9f32-1c40469f22fd 76864 0 2020-01-11 20:17:08 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 11 20:17:29.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-a e7bc2793-4cf6-4e38-9f32-1c40469f22fd 76864 0 2020-01-11 20:17:08 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 11 20:17:39.354: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-a e7bc2793-4cf6-4e38-9f32-1c40469f22fd 76927 0 2020-01-11 20:17:08 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 11 20:17:39.354: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-a e7bc2793-4cf6-4e38-9f32-1c40469f22fd 76927 0 2020-01-11 20:17:08 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 11 20:17:49.445: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-b 95f9d22e-371e-4cef-96d7-d42933c8b862 76956 0 2020-01-11 20:17:49 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 11 20:17:49.445: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-b 95f9d22e-371e-4cef-96d7-d42933c8b862 76956 0 2020-01-11 20:17:49 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 11 20:17:59.538: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-b 95f9d22e-371e-4cef-96d7-d42933c8b862 77064 0 2020-01-11 20:17:49 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 11 20:17:59.538: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6281 /api/v1/namespaces/watch-6281/configmaps/e2e-watch-test-configmap-b 95f9d22e-371e-4cef-96d7-d42933c8b862 77064 0 2020-01-11 20:17:49 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:18:09.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6281" for this suite. Jan 11 20:18:15.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:19.201: INFO: namespace watch-6281 deletion completed in 9.572414891s • [SLOW TEST:71.309 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:10.151: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-5607 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward api env vars Jan 11 20:18:10.884: INFO: Waiting up to 5m0s for pod "downward-api-f55399be-bfb5-4fbf-a546-3320c976d81f" in namespace "downward-api-5607" to be "success or failure" Jan 11 20:18:10.974: INFO: Pod "downward-api-f55399be-bfb5-4fbf-a546-3320c976d81f": Phase="Pending", Reason="", readiness=false. Elapsed: 89.837118ms Jan 11 20:18:13.064: INFO: Pod "downward-api-f55399be-bfb5-4fbf-a546-3320c976d81f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179787481s STEP: Saw pod success Jan 11 20:18:13.064: INFO: Pod "downward-api-f55399be-bfb5-4fbf-a546-3320c976d81f" satisfied condition "success or failure" Jan 11 20:18:13.154: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downward-api-f55399be-bfb5-4fbf-a546-3320c976d81f container dapi-container: STEP: delete the pod Jan 11 20:18:13.345: INFO: Waiting for pod downward-api-f55399be-bfb5-4fbf-a546-3320c976d81f to disappear Jan 11 20:18:13.435: INFO: Pod downward-api-f55399be-bfb5-4fbf-a546-3320c976d81f no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:18:13.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5607" for this suite. 
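The Downward API spec above relies on the documented fallback that limits.cpu and limits.memory resolve to the node's allocatable values when the container declares no limits. A minimal hand-run sketch, with illustrative pod and container names and a busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
# With no resource limits set on the container, both values fall back to node allocatable:
kubectl logs downward-api-defaults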
Jan 11 20:18:19.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:23.116: INFO: namespace downward-api-5607 deletion completed in 9.589069475s • [SLOW TEST:12.964 seconds] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:13.425: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename job STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-4498 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to exceed backoffLimit /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:226 STEP: Creating a job STEP: Ensuring job exceed backofflimit STEP: Checking that 2 pod created and status is failed [AfterEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:18:26.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4498" for this suite. 
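The Job spec above creates pods that always fail and, per the log, checks for exactly two failed pods, which corresponds to backoffLimit: 1 (the original attempt plus one retry). A minimal sketch that exercises the same behaviour outside the framework; the Job name and image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: backofflimit-demo
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: fail
        image: busybox:1.29
        command: ["sh", "-c", "exit 1"]
EOF
# Once the limit is exceeded the Job carries a Failed condition with reason BackoffLimitExceeded:
kubectl get job backofflimit-demo -o jsonpath='{.status.conditions[?(@.type=="Failed")].reason}'
kubectl get pods -l job-name=backofflimit-demo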
Jan 11 20:18:32.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:35.985: INFO: namespace job-4498 deletion completed in 9.562040254s • [SLOW TEST:22.559 seconds] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should fail to exceed backoffLimit /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:226 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:14:20.244: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-370 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [It] should fail due to non-existent path /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307 STEP: Creating local PVC and PV Jan 11 20:14:20.973: INFO: Creating a PV followed by a PVC Jan 11 20:14:21.154: INFO: Waiting for PV local-pvzfwp5 to bind to PVC pvc-qzcwb Jan 11 20:14:21.154: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qzcwb] to have phase Bound Jan 11 20:14:21.243: INFO: PersistentVolumeClaim pvc-qzcwb found and phase=Bound (89.073855ms) Jan 11 20:14:21.243: INFO: Waiting up to 3m0s for PersistentVolume local-pvzfwp5 to have phase Bound Jan 11 20:14:21.332: INFO: PersistentVolume local-pvzfwp5 found and phase=Bound (89.341716ms) STEP: Creating a pod STEP: Cleaning up PVC and PV Jan 11 20:18:22.137: INFO: Deleting PersistentVolumeClaim "pvc-qzcwb" Jan 11 20:18:22.227: INFO: Deleting PersistentVolume "local-pvzfwp5" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:18:22.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-370" for this suite. 
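Note that in the run above the PVC bound immediately even though the backing path does not exist: binding only matches PV and PVC objects, and the failure surfaces later when the kubelet tries to mount the volume into a pod. A hand-run sketch of the same setup; PV, PVC, and storage class names are illustrative, and <node-name> must be replaced with a real node:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-missing
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-demo
  local:
    path: /tmp/does-not-exist        # deliberately absent on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["<node-name>"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-missing
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-demo
  resources:
    requests:
      storage: 1Gi
EOF
# A pod that mounts local-pvc-missing stays in ContainerCreating; the mount error shows up as events:
kubectl describe pvc local-pvc-missing
kubectl get events --field-selector reason=FailedMount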
Jan 11 20:18:34.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:37.949: INFO: namespace persistent-local-volumes-test-370 deletion completed in 15.541728381s • [SLOW TEST:257.705 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Local volume that cannot be mounted [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304 should fail due to non-existent path /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:35.991: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename custom-resource-definition STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-1650 STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:18:36.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1650" for this suite. 
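The discovery walk in this spec can be reproduced directly with kubectl's raw API access; the three paths below are the same discovery documents the test fetched:

kubectl get --raw /apis                          # group list; should include apiextensions.k8s.io
kubectl get --raw /apis/apiextensions.k8s.io     # group document; v1 is the preferred version
kubectl get --raw /apis/apiextensions.k8s.io/v1  # version document; lists the customresourcedefinitions resource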
Jan 11 20:18:43.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:46.578: INFO: namespace custom-resource-definition-1650 deletion completed in 9.569073965s • [SLOW TEST:10.587 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:19.206: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-8830 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=nil /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347 STEP: deploying csi mock driver Jan 11 20:18:20.036: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8830/csi-attacher Jan 11 20:18:20.126: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8830 Jan 11 20:18:20.126: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8830 Jan 11 20:18:20.215: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8830 Jan 11 20:18:20.304: INFO: creating *v1.Role: csi-mock-volumes-8830/external-attacher-cfg-csi-mock-volumes-8830 Jan 11 20:18:20.394: INFO: creating *v1.RoleBinding: csi-mock-volumes-8830/csi-attacher-role-cfg Jan 11 20:18:20.483: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8830/csi-provisioner Jan 11 20:18:20.576: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8830 Jan 11 20:18:20.576: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8830 Jan 11 20:18:20.665: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8830 Jan 11 20:18:20.756: INFO: creating *v1.Role: csi-mock-volumes-8830/external-provisioner-cfg-csi-mock-volumes-8830 Jan 11 20:18:20.845: INFO: creating *v1.RoleBinding: csi-mock-volumes-8830/csi-provisioner-role-cfg Jan 11 20:18:20.934: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8830/csi-resizer Jan 11 20:18:21.025: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8830 Jan 11 20:18:21.025: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8830 Jan 11 20:18:21.114: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8830 Jan 11 20:18:21.203: INFO: creating *v1.Role: csi-mock-volumes-8830/external-resizer-cfg-csi-mock-volumes-8830 Jan 11 20:18:21.293: INFO: creating *v1.RoleBinding: csi-mock-volumes-8830/csi-resizer-role-cfg Jan 11 20:18:21.382: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8830/csi-mock Jan 11 20:18:21.472: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-attacher-role-csi-mock-volumes-8830 Jan 11 20:18:21.561: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8830 Jan 11 20:18:21.650: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8830 Jan 11 20:18:21.740: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8830 Jan 11 20:18:21.829: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8830 Jan 11 20:18:21.919: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8830 Jan 11 20:18:22.008: INFO: creating *v1.StatefulSet: csi-mock-volumes-8830/csi-mockplugin Jan 11 20:18:22.098: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-8830 Jan 11 20:18:22.188: INFO: creating *v1.StatefulSet: csi-mock-volumes-8830/csi-mockplugin-attacher Jan 11 20:18:22.278: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8830" STEP: Creating pod Jan 11 20:18:22.545: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:18:22.637: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-7hlj6] to have phase Bound Jan 11 20:18:22.727: INFO: PersistentVolumeClaim pvc-7hlj6 found but phase is Pending instead of Bound. Jan 11 20:18:24.817: INFO: PersistentVolumeClaim pvc-7hlj6 found and phase=Bound (2.179846199s) STEP: Deleting the previously created pod Jan 11 20:18:29.264: INFO: Deleting pod "pvc-volume-tester-rrjqb" in namespace "csi-mock-volumes-8830" Jan 11 20:18:29.355: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rrjqb" to be fully deleted STEP: Checking CSI driver logs Jan 11 20:18:45.631: INFO: CSI driver logs: mock driver started gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8830","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8830","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""} gRPCCall: 
{"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b"}}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8830","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8830","max_volumes_per_node":2},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-8830","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b","storage.kubernetes.io/csiProvisionerIdentity":"1578773903964-8081-csi-mock-csi-mock-volumes-8830"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b","storage.kubernetes.io/csiProvisionerIdentity":"1578773903964-8081-csi-mock-csi-mock-volumes-8830"}},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b/globalmount","target_path":"/var/lib/kubelet/pods/6280fa81-ee81-4c8b-83a5-0c608f2bafc8/volumes/kubernetes.io~csi/pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b","storage.kubernetes.io/csiProvisionerIdentity":"1578773903964-8081-csi-mock-csi-mock-volumes-8830"}},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/6280fa81-ee81-4c8b-83a5-0c608f2bafc8/volumes/kubernetes.io~csi/pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b/mount"},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b/globalmount"},"Response":{},"Error":""} gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-8830"},"Response":{},"Error":""} Jan 11 20:18:45.631: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}} STEP: Deleting pod pvc-volume-tester-rrjqb Jan 11 20:18:45.631: INFO: Deleting pod "pvc-volume-tester-rrjqb" in namespace "csi-mock-volumes-8830" STEP: Deleting claim pvc-7hlj6 Jan 11 20:18:45.899: INFO: Waiting up to 2m0s for PersistentVolume pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b to get deleted Jan 11 20:18:45.989: INFO: PersistentVolume pvc-3a67e8a3-f021-4be7-b091-9deab4365f9b was removed STEP: Deleting storageclass csi-mock-volumes-8830-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 20:18:46.079: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8830/csi-attacher Jan 11 20:18:46.170: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8830 Jan 11 20:18:46.261: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8830 Jan 11 20:18:46.351: INFO: deleting *v1.Role: csi-mock-volumes-8830/external-attacher-cfg-csi-mock-volumes-8830 Jan 11 20:18:46.442: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8830/csi-attacher-role-cfg Jan 11 20:18:46.532: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8830/csi-provisioner Jan 11 20:18:46.623: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8830 Jan 11 20:18:46.714: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8830 Jan 11 20:18:46.805: INFO: deleting *v1.Role: csi-mock-volumes-8830/external-provisioner-cfg-csi-mock-volumes-8830 Jan 11 20:18:46.896: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8830/csi-provisioner-role-cfg Jan 11 20:18:46.987: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8830/csi-resizer Jan 11 20:18:47.077: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8830 Jan 11 20:18:47.168: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8830 Jan 11 20:18:47.259: INFO: deleting *v1.Role: csi-mock-volumes-8830/external-resizer-cfg-csi-mock-volumes-8830 Jan 11 20:18:47.349: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8830/csi-resizer-role-cfg Jan 11 20:18:47.440: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8830/csi-mock Jan 11 20:18:47.530: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8830 Jan 11 20:18:47.621: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8830 Jan 11 20:18:47.712: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8830 Jan 11 20:18:47.803: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8830 Jan 11 20:18:47.893: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8830 Jan 11 20:18:47.984: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8830 Jan 11 20:18:48.077: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8830/csi-mockplugin Jan 11 20:18:48.168: INFO: deleting *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-8830 Jan 11 20:18:48.258: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8830/csi-mockplugin-attacher [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:18:48.439: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-8830" for this suite. Jan 11 20:18:54.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:58.098: INFO: namespace csi-mock-volumes-8830 deletion completed in 9.567746529s • [SLOW TEST:38.892 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:297 should not be passed when podInfoOnMount=nil /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347 ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:23.127: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename replicaset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-5764 STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 11 20:18:26.401: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:18:26.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5764" for this suite. 
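The adopt-and-release sequence above can be reproduced by hand: create a bare pod carrying a 'name' label, create a ReplicaSet whose selector matches it (the pod is adopted and gains an ownerReference), then change the label so it no longer matches (the ownerReference is removed and the ReplicaSet spins up a replacement). A sketch with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: nginx:1.17
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: nginx:1.17
EOF
# Adoption: the bare pod now has the ReplicaSet as its controller.
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'
# Release: relabel the pod out of the selector; its ownerReference is dropped and a replacement pod appears.
kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite
kubectl get pods -l name=pod-adoption-release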
Jan 11 20:18:55.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:18:58.352: INFO: namespace replicaset-5764 deletion completed in 31.588722594s • [SLOW TEST:35.225 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:58.101: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9248 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should create a quota without scopes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946 STEP: calling kubectl quota Jan 11 20:18:58.751: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create quota million --hard=pods=1000000,services=1000000 --namespace=kubectl-9248' Jan 11 20:18:59.174: INFO: stderr: "" Jan 11 20:18:59.174: INFO: stdout: "resourcequota/million created\n" STEP: verifying that the quota was created [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:18:59.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9248" for this suite. 
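The quota command from the run above translates to ordinary kubectl usage; "without scopes" simply means no --scopes flag is passed (a scoped variant would add e.g. --scopes=BestEffort). The verification the test performs through the API corresponds to reading the ResourceQuota back:

kubectl create quota million --hard=pods=1000000,services=1000000 --namespace=<namespace>
kubectl get resourcequota million --namespace=<namespace> -o yaml   # spec.hard should show the two limits
kubectl describe resourcequota million --namespace=<namespace>      # hard vs. used, per resource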
Jan 11 20:19:05.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:19:08.920: INFO: namespace kubectl-9248 deletion completed in 9.566988445s • [SLOW TEST:10.819 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl create quota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1945 should create a quota without scopes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:58.359: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename dns STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9695 STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:397 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jan 11 20:18:59.092: INFO: Created pod &Pod{ObjectMeta:{dns-9695 dns-9695 /api/v1/namespaces/dns-9695/pods/dns-9695 789bd6e0-fa51-473e-9725-fa373aa9a600 77589 0 2020-01-11 20:18:59 +0000 UTC map[] map[kubernetes.io/psp:e2e-test-privileged-psp] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wlj59,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wlj59,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.6,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wlj59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Jan 11 20:19:01.272: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9695 PodName:dns-9695 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:19:01.272: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Verifying customized DNS server is configured on pod... 
Jan 11 20:19:02.242: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9695 PodName:dns-9695 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:19:02.242: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:19:03.119: INFO: Deleting pod dns-9695... [AfterEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:19:03.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9695" for this suite. Jan 11 20:19:09.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:19:12.890: INFO: namespace dns-9695 deletion completed in 9.585453387s • [SLOW TEST:14.531 seconds] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:397 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:46.604: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5002 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 20:18:49.704: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5002 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ba0aee70-78b8-4251-ba89-1d77ce53042e && mount --bind /tmp/local-volume-test-ba0aee70-78b8-4251-ba89-1d77ce53042e /tmp/local-volume-test-ba0aee70-78b8-4251-ba89-1d77ce53042e' Jan 11 20:18:50.964: INFO: stderr: "" Jan 11 20:18:50.964: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:18:50.964: INFO: Creating a PV followed by a PVC Jan 11 20:18:51.143: INFO: Waiting for PV local-pv69hvd to bind to PVC pvc-b9ps4 Jan 11 20:18:51.143: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-b9ps4] to have phase Bound Jan 11 20:18:51.232: INFO: PersistentVolumeClaim pvc-b9ps4 found and phase=Bound (89.06421ms) Jan 11 20:18:51.232: INFO: Waiting up to 3m0s for PersistentVolume local-pv69hvd to have phase Bound Jan 11 20:18:51.321: INFO: PersistentVolume local-pv69hvd found and phase=Bound (89.020071ms) [It] 
should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jan 11 20:18:53.945: INFO: pod "security-context-bd3b7241-e64c-427d-b7fd-bfec022dbed9" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 20:18:53.945: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5002 security-context-bd3b7241-e64c-427d-b7fd-bfec022dbed9 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 20:18:55.257: INFO: stderr: "" Jan 11 20:18:55.257: INFO: stdout: "" Jan 11 20:18:55.257: INFO: podRWCmdExec out: "" err: Jan 11 20:18:55.257: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5002 security-context-bd3b7241-e64c-427d-b7fd-bfec022dbed9 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:18:56.529: INFO: stderr: "" Jan 11 20:18:56.529: INFO: stdout: "test-file-content\n" Jan 11 20:18:56.529: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-bd3b7241-e64c-427d-b7fd-bfec022dbed9 in namespace persistent-local-volumes-test-5002 STEP: Creating pod2 STEP: Creating a pod Jan 11 20:18:59.067: INFO: pod "security-context-b7638975-2619-4820-b579-4e1a0ff8fb8a" created on Node "ip-10-250-27-25.ec2.internal" STEP: Reading in pod2 Jan 11 20:18:59.067: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5002 security-context-b7638975-2619-4820-b579-4e1a0ff8fb8a -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:19:00.340: INFO: stderr: "" Jan 11 20:19:00.340: INFO: stdout: "test-file-content\n" Jan 11 20:19:00.340: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod2 STEP: Deleting pod security-context-b7638975-2619-4820-b579-4e1a0ff8fb8a in namespace persistent-local-volumes-test-5002 [AfterEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:19:00.431: INFO: Deleting PersistentVolumeClaim "pvc-b9ps4" Jan 11 20:19:00.521: INFO: Deleting PersistentVolume "local-pv69hvd" STEP: Removing the test directory Jan 11 20:19:00.612: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5002 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-ba0aee70-78b8-4251-ba89-1d77ce53042e && rm -r /tmp/local-volume-test-ba0aee70-78b8-4251-ba89-1d77ce53042e' Jan 11 20:19:01.974: INFO: stderr: "" Jan 11 20:19:01.976: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:19:02.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5002" for this suite. Jan 11 20:19:14.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:19:17.791: INFO: namespace persistent-local-volumes-test-5002 deletion completed in 15.568636065s • [SLOW TEST:31.187 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:19:08.934: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5490 STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing directory /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:188 Jan 11 20:19:09.849: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:19:09.849: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-vc26 STEP: Creating a pod to test subpath Jan 11 20:19:09.940: INFO: Waiting up to 5m0s for pod "pod-subpath-test-emptydir-vc26" in namespace "provisioning-5490" to be "success or failure" Jan 11 20:19:10.030: INFO: Pod "pod-subpath-test-emptydir-vc26": Phase="Pending", Reason="", readiness=false. Elapsed: 89.306177ms Jan 11 20:19:12.120: INFO: Pod "pod-subpath-test-emptydir-vc26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17926908s Jan 11 20:19:14.209: INFO: Pod "pod-subpath-test-emptydir-vc26": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.268997942s STEP: Saw pod success Jan 11 20:19:14.209: INFO: Pod "pod-subpath-test-emptydir-vc26" satisfied condition "success or failure" Jan 11 20:19:14.298: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-emptydir-vc26 container test-container-volume-emptydir-vc26: STEP: delete the pod Jan 11 20:19:14.488: INFO: Waiting for pod pod-subpath-test-emptydir-vc26 to disappear Jan 11 20:19:14.577: INFO: Pod pod-subpath-test-emptydir-vc26 no longer exists STEP: Deleting pod pod-subpath-test-emptydir-vc26 Jan 11 20:19:14.577: INFO: Deleting pod "pod-subpath-test-emptydir-vc26" in namespace "provisioning-5490" STEP: Deleting pod Jan 11 20:19:14.666: INFO: Deleting pod "pod-subpath-test-emptydir-vc26" in namespace "provisioning-5490" Jan 11 20:19:14.755: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:19:14.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-5490" for this suite. Jan 11 20:19:23.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:19:26.421: INFO: namespace provisioning-5490 deletion completed in 11.575562388s • [SLOW TEST:17.487 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support existing directory /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:188 ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:37.955: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volumeio STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volumeio-8041 STEP: Waiting for a default service account to be provisioned in namespace [It] should write files of various sizes, verify size, validate content [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:137 Jan 11 20:18:38.592: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 20:18:38.683: INFO: Creating resource for inline volume STEP: starting hostpath-io-client STEP: writing 1048576 bytes to test file 
/opt/hostPath_io_test_volumeio-8041-1048576 Jan 11 20:18:42.954: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-8041 hostpath-io-client -- /bin/sh -c i=0; while [ $i -lt 1 ]; do dd if=/opt/hostpath-volumeio-8041-dd_if bs=1048576 >>/opt/hostPath_io_test_volumeio-8041-1048576 2>/dev/null; let i+=1; done' Jan 11 20:18:44.229: INFO: stderr: "" Jan 11 20:18:44.229: INFO: stdout: "" STEP: verifying file size Jan 11 20:18:44.229: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-8041 hostpath-io-client -- /bin/sh -c stat -c %s /opt/hostPath_io_test_volumeio-8041-1048576' Jan 11 20:18:45.497: INFO: stderr: "" Jan 11 20:18:45.497: INFO: stdout: "1048576\n" STEP: verifying file hash Jan 11 20:18:45.497: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-8041 hostpath-io-client -- /bin/sh -c md5sum /opt/hostPath_io_test_volumeio-8041-1048576 | cut -d' ' -f1' Jan 11 20:18:46.785: INFO: stderr: "" Jan 11 20:18:46.785: INFO: stdout: "5c34c2813223a7ca05a3c2f38c0d1710\n" STEP: writing 104857600 bytes to test file /opt/hostPath_io_test_volumeio-8041-104857600 Jan 11 20:18:46.785: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-8041 hostpath-io-client -- /bin/sh -c i=0; while [ $i -lt 100 ]; do dd if=/opt/hostpath-volumeio-8041-dd_if bs=1048576 >>/opt/hostPath_io_test_volumeio-8041-104857600 2>/dev/null; let i+=1; done' Jan 11 20:18:48.395: INFO: stderr: "" Jan 11 20:18:48.395: INFO: stdout: "" STEP: verifying file size Jan 11 20:18:48.395: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-8041 hostpath-io-client -- /bin/sh -c stat -c %s /opt/hostPath_io_test_volumeio-8041-104857600' Jan 11 20:18:49.649: INFO: stderr: "" Jan 11 20:18:49.649: INFO: stdout: "104857600\n" STEP: verifying file hash Jan 11 20:18:49.649: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-8041 hostpath-io-client -- /bin/sh -c md5sum /opt/hostPath_io_test_volumeio-8041-104857600 | cut -d' ' -f1' Jan 11 20:18:51.188: INFO: stderr: "" Jan 11 20:18:51.188: INFO: stdout: "f2fa202b1ffeedda5f3a58bd1ae81104\n" STEP: deleting test file /opt/hostPath_io_test_volumeio-8041-104857600... Jan 11 20:18:51.188: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-8041 hostpath-io-client -- /bin/sh -c rm -f /opt/hostPath_io_test_volumeio-8041-104857600' Jan 11 20:18:52.517: INFO: stderr: "" Jan 11 20:18:52.517: INFO: stdout: "" STEP: deleting test file /opt/hostPath_io_test_volumeio-8041-1048576... 
Jan 11 20:18:52.517: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-8041 hostpath-io-client -- /bin/sh -c rm -f /opt/hostPath_io_test_volumeio-8041-1048576' Jan 11 20:18:53.885: INFO: stderr: "" Jan 11 20:18:53.885: INFO: stdout: "" STEP: deleting test file /opt/hostpath-volumeio-8041-dd_if... Jan 11 20:18:53.885: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-8041 hostpath-io-client -- /bin/sh -c rm -f /opt/hostpath-volumeio-8041-dd_if' Jan 11 20:18:55.157: INFO: stderr: "" Jan 11 20:18:55.157: INFO: stdout: "" STEP: deleting client pod "hostpath-io-client"... Jan 11 20:18:55.157: INFO: Deleting pod "hostpath-io-client" in namespace "volumeio-8041" Jan 11 20:18:55.247: INFO: Wait up to 5m0s for pod "hostpath-io-client" to be fully deleted Jan 11 20:19:05.426: INFO: sleeping a bit so kubelet can unmount and detach the volume Jan 11 20:19:25.426: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:19:25.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volumeio-8041" for this suite. Jan 11 20:19:31.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:19:35.023: INFO: namespace volumeio-8041 deletion completed in 9.50513544s • [SLOW TEST:57.068 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should write files of various sizes, verify size, validate content [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:137 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:19:35.055: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pv-protection STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-protection-9486 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Jan 11 20:19:35.748: INFO: Waiting up to 30m0s for all (but 0) 
nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Jan 11 20:19:35.927: INFO: Waiting up to 30s for PersistentVolume hostpath-f4fdk to have phase Available Jan 11 20:19:36.017: INFO: PersistentVolume hostpath-f4fdk found and phase=Available (89.114559ms) STEP: Checking that PV Protection finalizer is set [It] Verify that PV bound to a PVC is not removed immediately /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:106 STEP: Creating a PVC STEP: Waiting for PVC to become Bound Jan 11 20:19:36.197: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-m4jt2] to have phase Bound Jan 11 20:19:36.286: INFO: PersistentVolumeClaim pvc-m4jt2 found and phase=Bound (88.854394ms) STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC STEP: Checking that the PV status is Terminating STEP: Deleting the PVC that is bound to the PV STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC Jan 11 20:19:36.555: INFO: Waiting up to 3m0s for PersistentVolume hostpath-f4fdk to get deleted Jan 11 20:19:36.644: INFO: PersistentVolume hostpath-f4fdk found and phase=Bound (89.393863ms) Jan 11 20:19:38.734: INFO: PersistentVolume hostpath-f4fdk was removed [AfterEach] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:19:38.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-9486" for this suite. Jan 11 20:19:45.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:19:48.306: INFO: namespace pv-protection-9486 deletion completed in 9.480883841s [AfterEach] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Jan 11 20:19:48.306: INFO: AfterEach: Cleaning up test resources. 
Jan 11 20:19:48.306: INFO: Deleting PersistentVolumeClaim "pvc-m4jt2" Jan 11 20:19:48.395: INFO: Deleting PersistentVolume "hostpath-f4fdk" • [SLOW TEST:13.430 seconds] [sig-storage] PV Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PV bound to a PVC is not removed immediately /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:106 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:18:06.370: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename statefulset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9525 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-9525 [It] should not deadlock when a pod's predecessor fails /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:222 STEP: Creating statefulset ss in namespace statefulset-9525 Jan 11 20:18:07.221: INFO: Default storage class: "default" Jan 11 20:18:07.401: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:18:17.491: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:18:27.491: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false STEP: Resuming stateful pod at index 0. Jan 11 20:18:27.581: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-9525 ss-0 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Jan 11 20:18:28.915: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Jan 11 20:18:28.915: INFO: stdout: "" Jan 11 20:18:28.915: INFO: Resumed pod ss-0 STEP: Waiting for stateful pod at index 1 to enter running. 
Jan 11 20:18:29.005: INFO: Found 1 stateful pods, waiting for 2 Jan 11 20:18:39.096: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:18:39.096: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:18:49.096: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:18:49.096: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Pending - Ready=false Jan 11 20:18:59.095: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:18:59.095: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false STEP: Deleting healthy stateful pod at index 0. STEP: Confirming stateful pod at index 0 is recreated. Jan 11 20:18:59.277: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 11 20:19:09.368: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:19:09.369: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false STEP: Resuming stateful pod at index 1. Jan 11 20:19:09.459: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-9525 ss-1 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Jan 11 20:19:10.741: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Jan 11 20:19:10.741: INFO: stdout: "" Jan 11 20:19:10.741: INFO: Resumed pod ss-1 STEP: Confirming all stateful pods in statefulset are created. 
Jan 11 20:19:10.831: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:19:10.831: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false Jan 11 20:19:20.922: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:19:20.922: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Jan 11 20:19:20.922: INFO: Deleting all statefulset in ns statefulset-9525 Jan 11 20:19:21.011: INFO: Scaling statefulset ss to 0 Jan 11 20:19:31.371: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 20:19:31.461: INFO: Deleting statefulset ss Jan 11 20:19:31.643: INFO: Deleting pvc: datadir-ss-0 with volume pvc-7a273daf-0b68-49e5-b4fc-bb24164bc112 Jan 11 20:19:31.733: INFO: Deleting pvc: datadir-ss-1 with volume pvc-13f9e99f-ec34-4234-912c-e91f2c45ca65 Jan 11 20:19:31.914: INFO: Still waiting for pvs of statefulset to disappear: pvc-13f9e99f-ec34-4234-912c-e91f2c45ca65: {Phase:Bound Message: Reason:} pvc-7a273daf-0b68-49e5-b4fc-bb24164bc112: {Phase:Bound Message: Reason:} Jan 11 20:19:42.005: INFO: Still waiting for pvs of statefulset to disappear: pvc-7a273daf-0b68-49e5-b4fc-bb24164bc112: {Phase:Failed Message:Error deleting EBS volume "vol-017fc9057b555bf8f" since volume is currently attached to "i-0a8c404292a3c92e9" Reason:} [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:19:52.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9525" for this suite. 
Jan 11 20:19:58.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:01.678: INFO: namespace statefulset-9525 deletion completed in 9.582944957s • [SLOW TEST:115.309 seconds] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should not deadlock when a pod's predecessor fails /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:222 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:01.682: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5492 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 20:20:02.515: INFO: Waiting up to 5m0s for pod "downwardapi-volume-937f91c9-2e26-4b4a-95f0-77e29bfe15bb" in namespace "projected-5492" to be "success or failure" Jan 11 20:20:02.605: INFO: Pod "downwardapi-volume-937f91c9-2e26-4b4a-95f0-77e29bfe15bb": Phase="Pending", Reason="", readiness=false. Elapsed: 89.950898ms Jan 11 20:20:04.696: INFO: Pod "downwardapi-volume-937f91c9-2e26-4b4a-95f0-77e29bfe15bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180817832s STEP: Saw pod success Jan 11 20:20:04.696: INFO: Pod "downwardapi-volume-937f91c9-2e26-4b4a-95f0-77e29bfe15bb" satisfied condition "success or failure" Jan 11 20:20:04.786: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-937f91c9-2e26-4b4a-95f0-77e29bfe15bb container client-container: STEP: delete the pod Jan 11 20:20:04.977: INFO: Waiting for pod downwardapi-volume-937f91c9-2e26-4b4a-95f0-77e29bfe15bb to disappear Jan 11 20:20:05.067: INFO: Pod downwardapi-volume-937f91c9-2e26-4b4a-95f0-77e29bfe15bb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:05.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5492" for this suite. 
Jan 11 20:20:13.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:16.745: INFO: namespace projected-5492 deletion completed in 11.587013944s • [SLOW TEST:15.063 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:19:48.511: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8329 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:371 STEP: creating the pod from Jan 11 20:19:49.151: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-8329' Jan 11 20:19:50.118: INFO: stderr: "" Jan 11 20:19:50.118: INFO: stdout: "pod/httpd created\n" Jan 11 20:19:50.118: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Jan 11 20:19:50.118: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-8329" to be "running and ready" Jan 11 20:19:50.208: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.507639ms Jan 11 20:19:52.297: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.179232s Jan 11 20:19:54.387: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.268788521s Jan 11 20:19:56.477: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.35881342s Jan 11 20:19:58.567: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.448660763s Jan 11 20:20:00.657: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.539062622s Jan 11 20:20:02.747: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.628664075s Jan 11 20:20:02.747: INFO: Pod "httpd" satisfied condition "running and ready" Jan 11 20:20:02.747: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [httpd] [It] should support exec /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381 STEP: executing a command in the container Jan 11 20:20:02.747: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-8329 httpd echo running in container' Jan 11 20:20:04.085: INFO: stderr: "" Jan 11 20:20:04.086: INFO: stdout: "running in container\n" STEP: executing a very long command in the container Jan 11 20:20:04.086: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-8329 httpd echo aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa [... the echoed 'a' argument continues unbroken for several thousand more characters ...] aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' Jan 11 20:20:05.567: INFO: stderr: "" Jan 11 20:20:05.567: INFO: stdout: 
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
[... remainder of the repeated-'a' exec stdout payload truncated ...]\n" STEP: executing a command in the container with noninteractive stdin Jan 11 20:20:05.568: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-8329 -i httpd cat' Jan 11 20:20:07.216: INFO: stderr: "" Jan 11 20:20:07.216: INFO: stdout: "abcd1234" STEP: executing a command in the container with pseudo-interactive stdin Jan 11 20:20:07.216: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-8329 -i httpd sh' Jan 11 20:20:08.777: INFO: stderr: "" Jan 11 20:20:08.777: INFO: stdout: "hi\n" [AfterEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:377 STEP: using delete to clean up resources Jan 11 20:20:08.777: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-8329' Jan 11 20:20:09.307: INFO: stderr:
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 20:20:09.307: INFO: stdout: "pod \"httpd\" force deleted\n" Jan 11 20:20:09.307: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=httpd --no-headers --namespace=kubectl-8329' Jan 11 20:20:09.827: INFO: stderr: "No resources found in kubectl-8329 namespace.\n" Jan 11 20:20:09.828: INFO: stdout: "" Jan 11 20:20:09.828: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=httpd --namespace=kubectl-8329 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 20:20:10.261: INFO: stderr: "" Jan 11 20:20:10.261: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:10.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8329" for this suite. Jan 11 20:20:16.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:19.881: INFO: namespace kubectl-8329 deletion completed in 9.529027983s • [SLOW TEST:31.370 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369 should support exec /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381 ------------------------------ [BeforeEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:19:26.423: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename job STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3813 STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 11 20:19:30.104: INFO: Successfully updated pod "adopt-release-7pkqb" STEP: Checking that the Job readopts the Pod Jan 11 20:19:30.104: INFO: Waiting up to 15m0s for pod "adopt-release-7pkqb" in namespace "job-3813" to be "adopted" Jan 11 20:19:30.193: INFO: Pod "adopt-release-7pkqb": Phase="Running", Reason="", readiness=true. 
Elapsed: 89.27745ms Jan 11 20:19:30.193: INFO: Pod "adopt-release-7pkqb" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 11 20:19:30.874: INFO: Successfully updated pod "adopt-release-7pkqb" STEP: Checking that the Job releases the Pod Jan 11 20:19:30.874: INFO: Waiting up to 15m0s for pod "adopt-release-7pkqb" in namespace "job-3813" to be "released" Jan 11 20:19:30.964: INFO: Pod "adopt-release-7pkqb": Phase="Running", Reason="", readiness=true. Elapsed: 89.931062ms Jan 11 20:19:30.964: INFO: Pod "adopt-release-7pkqb" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:19:30.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3813" for this suite. Jan 11 20:20:19.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:22.628: INFO: namespace job-3813 deletion completed in 51.572726535s • [SLOW TEST:56.205 seconds] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:16.755: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename custom-resource-definition STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-5790 STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:20:18.153: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:22.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5790" for this suite. 
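The adopt-and-release sequence recorded just above (orphan a Job pod, watch the controller re-own it, then strip the matching labels and watch it be released) can be approximated by hand with plain kubectl. This is only a sketch: the namespace, job, and pod names are the ones generated by this run and assume a Job whose pods carry the usual job-name and controller-uid labels.
# Show who currently owns one of the Job's pods (expects kind: Job once adopted).
kubectl get pod -n job-3813 -l job-name=adopt-release \
  -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}{"\n"}'
# Remove the matching labels so the Job controller releases the pod again.
kubectl label pod -n job-3813 adopt-release-7pkqb job-name- controller-uid-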
Jan 11 20:20:28.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:32.317: INFO: namespace custom-resource-definition-5790 deletion completed in 9.637194125s • [SLOW TEST:15.562 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42 listing custom resource definition objects works [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:79 [BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:19:17.802: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename ephemeral STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in ephemeral-3918 STEP: Waiting for a default service account to be provisioned in namespace [It] should create read-only inline ephemeral volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:116 STEP: deploying csi-hostpath driver Jan 11 20:19:18.639: INFO: creating *v1.ServiceAccount: ephemeral-3918/csi-attacher Jan 11 20:19:18.729: INFO: creating *v1.ClusterRole: external-attacher-runner-ephemeral-3918 Jan 11 20:19:18.729: INFO: Define cluster role external-attacher-runner-ephemeral-3918 Jan 11 20:19:18.818: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-3918 Jan 11 20:19:18.908: INFO: creating *v1.Role: ephemeral-3918/external-attacher-cfg-ephemeral-3918 Jan 11 20:19:18.998: INFO: creating *v1.RoleBinding: ephemeral-3918/csi-attacher-role-cfg Jan 11 20:19:19.087: INFO: creating *v1.ServiceAccount: ephemeral-3918/csi-provisioner Jan 11 20:19:19.177: INFO: creating *v1.ClusterRole: external-provisioner-runner-ephemeral-3918 Jan 11 20:19:19.177: INFO: Define cluster role external-provisioner-runner-ephemeral-3918 Jan 11 20:19:19.267: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-3918 Jan 11 20:19:19.357: INFO: creating *v1.Role: ephemeral-3918/external-provisioner-cfg-ephemeral-3918 Jan 11 20:19:19.446: INFO: creating *v1.RoleBinding: ephemeral-3918/csi-provisioner-role-cfg Jan 11 20:19:19.536: INFO: creating *v1.ServiceAccount: ephemeral-3918/csi-snapshotter Jan 11 20:19:19.625: INFO: creating *v1.ClusterRole: external-snapshotter-runner-ephemeral-3918 Jan 11 20:19:19.625: INFO: 
Define cluster role external-snapshotter-runner-ephemeral-3918 Jan 11 20:19:19.715: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-3918 Jan 11 20:19:19.804: INFO: creating *v1.Role: ephemeral-3918/external-snapshotter-leaderelection-ephemeral-3918 Jan 11 20:19:19.894: INFO: creating *v1.RoleBinding: ephemeral-3918/external-snapshotter-leaderelection Jan 11 20:19:19.984: INFO: creating *v1.ServiceAccount: ephemeral-3918/csi-resizer Jan 11 20:19:20.073: INFO: creating *v1.ClusterRole: external-resizer-runner-ephemeral-3918 Jan 11 20:19:20.073: INFO: Define cluster role external-resizer-runner-ephemeral-3918 Jan 11 20:19:20.163: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-3918 Jan 11 20:19:20.252: INFO: creating *v1.Role: ephemeral-3918/external-resizer-cfg-ephemeral-3918 Jan 11 20:19:20.342: INFO: creating *v1.RoleBinding: ephemeral-3918/csi-resizer-role-cfg Jan 11 20:19:20.431: INFO: creating *v1.Service: ephemeral-3918/csi-hostpath-attacher Jan 11 20:19:20.525: INFO: creating *v1.StatefulSet: ephemeral-3918/csi-hostpath-attacher Jan 11 20:19:20.615: INFO: creating *v1beta1.CSIDriver: csi-hostpath-ephemeral-3918 Jan 11 20:19:20.704: INFO: creating *v1.Service: ephemeral-3918/csi-hostpathplugin Jan 11 20:19:20.801: INFO: creating *v1.StatefulSet: ephemeral-3918/csi-hostpathplugin Jan 11 20:19:20.891: INFO: creating *v1.Service: ephemeral-3918/csi-hostpath-provisioner Jan 11 20:19:20.986: INFO: creating *v1.StatefulSet: ephemeral-3918/csi-hostpath-provisioner Jan 11 20:19:21.076: INFO: creating *v1.Service: ephemeral-3918/csi-hostpath-resizer Jan 11 20:19:21.169: INFO: creating *v1.StatefulSet: ephemeral-3918/csi-hostpath-resizer Jan 11 20:19:21.259: INFO: creating *v1.Service: ephemeral-3918/csi-snapshotter Jan 11 20:19:21.352: INFO: creating *v1.StatefulSet: ephemeral-3918/csi-snapshotter Jan 11 20:19:21.442: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-3918 STEP: checking the requested inline volume exists in the pod running on node {Name:ip-10-250-27-25.ec2.internal Selector:map[] Affinity:nil} Jan 11 20:19:25.890: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=ephemeral-3918 inline-volume-tester-v5ktm -- /bin/sh -c mount | grep /mnt/test | grep ro,' Jan 11 20:19:27.192: INFO: stderr: "" Jan 11 20:19:27.192: INFO: stdout: "/dev/nvme0n1p9 on /mnt/test-0 type ext4 (ro,seclabel,relatime)\n" Jan 11 20:19:27.422: INFO: Pod inline-volume-tester-v5ktm has the following logs: /dev/nvme0n1p9 on /mnt/test-0 type ext4 (ro,seclabel,relatime) STEP: Deleting pod inline-volume-tester-v5ktm in namespace ephemeral-3918 STEP: uninstalling csi-hostpath driver Jan 11 20:19:59.693: INFO: deleting *v1.ServiceAccount: ephemeral-3918/csi-attacher Jan 11 20:19:59.786: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-3918 Jan 11 20:19:59.877: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-3918 Jan 11 20:19:59.967: INFO: deleting *v1.Role: ephemeral-3918/external-attacher-cfg-ephemeral-3918 Jan 11 20:20:00.059: INFO: deleting *v1.RoleBinding: ephemeral-3918/csi-attacher-role-cfg Jan 11 20:20:00.150: INFO: deleting *v1.ServiceAccount: ephemeral-3918/csi-provisioner Jan 11 20:20:00.241: INFO: deleting *v1.ClusterRole: external-provisioner-runner-ephemeral-3918 Jan 11 20:20:00.332: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-3918 Jan 
11 20:20:00.423: INFO: deleting *v1.Role: ephemeral-3918/external-provisioner-cfg-ephemeral-3918 Jan 11 20:20:00.514: INFO: deleting *v1.RoleBinding: ephemeral-3918/csi-provisioner-role-cfg Jan 11 20:20:00.604: INFO: deleting *v1.ServiceAccount: ephemeral-3918/csi-snapshotter Jan 11 20:20:00.695: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-ephemeral-3918 Jan 11 20:20:00.786: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-3918 Jan 11 20:20:00.876: INFO: deleting *v1.Role: ephemeral-3918/external-snapshotter-leaderelection-ephemeral-3918 Jan 11 20:20:00.967: INFO: deleting *v1.RoleBinding: ephemeral-3918/external-snapshotter-leaderelection Jan 11 20:20:01.058: INFO: deleting *v1.ServiceAccount: ephemeral-3918/csi-resizer Jan 11 20:20:01.149: INFO: deleting *v1.ClusterRole: external-resizer-runner-ephemeral-3918 Jan 11 20:20:01.239: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-3918 Jan 11 20:20:01.333: INFO: deleting *v1.Role: ephemeral-3918/external-resizer-cfg-ephemeral-3918 Jan 11 20:20:01.424: INFO: deleting *v1.RoleBinding: ephemeral-3918/csi-resizer-role-cfg Jan 11 20:20:01.516: INFO: deleting *v1.Service: ephemeral-3918/csi-hostpath-attacher Jan 11 20:20:01.613: INFO: deleting *v1.StatefulSet: ephemeral-3918/csi-hostpath-attacher Jan 11 20:20:01.717: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-ephemeral-3918 Jan 11 20:20:01.808: INFO: deleting *v1.Service: ephemeral-3918/csi-hostpathplugin Jan 11 20:20:01.904: INFO: deleting *v1.StatefulSet: ephemeral-3918/csi-hostpathplugin Jan 11 20:20:01.995: INFO: deleting *v1.Service: ephemeral-3918/csi-hostpath-provisioner Jan 11 20:20:02.144: INFO: deleting *v1.StatefulSet: ephemeral-3918/csi-hostpath-provisioner Jan 11 20:20:02.235: INFO: deleting *v1.Service: ephemeral-3918/csi-hostpath-resizer Jan 11 20:20:02.331: INFO: deleting *v1.StatefulSet: ephemeral-3918/csi-hostpath-resizer Jan 11 20:20:02.421: INFO: deleting *v1.Service: ephemeral-3918/csi-snapshotter Jan 11 20:20:02.518: INFO: deleting *v1.StatefulSet: ephemeral-3918/csi-snapshotter Jan 11 20:20:02.609: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-3918 [AfterEach] [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:02.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready WARNING: pod log: inline-volume-tester-v5ktm/csi-volume-tester: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/namespaces/ephemeral-3918/pods/inline-volume-tester-v5ktm/log?container=csi-volume-tester&follow=true: context canceled STEP: Destroying namespace "ephemeral-3918" for this suite. 
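The read-only verification above reduces to grepping the mount table inside the consuming pod; the namespace and pod names below are the ones from this run and will differ on any other cluster.
# A non-empty match confirms the inline ephemeral volume is mounted with the ro option.
kubectl exec -n ephemeral-3918 inline-volume-tester-v5ktm -- \
  sh -c 'mount | grep /mnt/test | grep ro,'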
Jan 11 20:20:31.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:34.458: INFO: namespace ephemeral-3918 deletion completed in 31.666676345s • [SLOW TEST:76.656 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: inline ephemeral CSI volume] ephemeral /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should create read-only inline ephemeral volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:116 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:22.643: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-4462 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65 STEP: Creating a pod to test hostPath r/w Jan 11 20:20:23.373: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4462" to be "success or failure" Jan 11 20:20:23.463: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.216395ms Jan 11 20:20:25.553: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179233218s STEP: Saw pod success Jan 11 20:20:25.553: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 20:20:25.642: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-2: STEP: delete the pod Jan 11 20:20:25.832: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 20:20:25.921: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:25.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4462" for this suite. 
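A rough standalone equivalent of the hostPath r/w case is a throwaway pod that writes and then reads a file under a host directory. The image, host path, and names below are stand-ins, not the suite's mounttest pod.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-rw-check
spec:
  restartPolicy: Never
  containers:
  - name: rw
    image: busybox:1.31            # illustrative image
    command: ["sh", "-c", "echo hello > /test-volume/f && cat /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-rw-check
      type: DirectoryOrCreate
EOF
# kubectl logs hostpath-rw-check should then print the written content.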
Jan 11 20:20:32.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:35.576: INFO: namespace hostpath-4462 deletion completed in 9.564208521s • [SLOW TEST:12.933 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should support r/w [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:16:26.069: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9287 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod busybox-dc28624d-15e3-4e5a-8bec-3b38a398087f in namespace container-probe-9287 Jan 11 20:16:28.983: INFO: Started pod busybox-dc28624d-15e3-4e5a-8bec-3b38a398087f in namespace container-probe-9287 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 20:16:29.073: INFO: Initial restart count of pod busybox-dc28624d-15e3-4e5a-8bec-3b38a398087f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:29.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9287" for this suite. 
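The probe case above only has to confirm that restartCount stays at 0 for the pod's lifetime; the namespace and pod names are the generated ones from this run.
# Should keep printing 0 while the "cat /tmp/health" exec probe succeeds.
kubectl get pod -n container-probe-9287 busybox-dc28624d-15e3-4e5a-8bec-3b38a398087f \
  -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'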
Jan 11 20:20:35.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:39.240: INFO: namespace container-probe-9287 deletion completed in 9.586053639s • [SLOW TEST:253.171 seconds] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:19.883: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename port-forwarding STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-6840 STEP: Waiting for a default service account to be provisioned in namespace [It] should support forwarding over websockets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479 Jan 11 20:20:20.550: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Creating the pod STEP: Sending the expected data to the local port STEP: Reading data from the local port STEP: Verifying logs [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:27.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-6840" for this suite. 
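The same data path the websocket test exercises can be checked interactively with kubectl port-forward; the pod name and ports below are placeholders, not the suite's.
# <pod-name>: any pod in the namespace serving on container port 80.
kubectl port-forward -n port-forwarding-6840 pod/<pod-name> 8080:80 &
PF_PID=$!
curl -s http://127.0.0.1:8080/
kill "$PF_PID"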
Jan 11 20:20:40.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:43.240: INFO: namespace port-forwarding-6840 deletion completed in 15.498208279s • [SLOW TEST:23.357 seconds] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on localhost /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463 should support forwarding over websockets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:34.464: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9476 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 11 20:20:35.240: INFO: Waiting up to 5m0s for pod "pod-8fab9b00-1893-425e-a862-b3f80609fc42" in namespace "emptydir-9476" to be "success or failure" Jan 11 20:20:35.332: INFO: Pod "pod-8fab9b00-1893-425e-a862-b3f80609fc42": Phase="Pending", Reason="", readiness=false. Elapsed: 91.203201ms Jan 11 20:20:37.422: INFO: Pod "pod-8fab9b00-1893-425e-a862-b3f80609fc42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.181092139s STEP: Saw pod success Jan 11 20:20:37.422: INFO: Pod "pod-8fab9b00-1893-425e-a862-b3f80609fc42" satisfied condition "success or failure" Jan 11 20:20:37.511: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-8fab9b00-1893-425e-a862-b3f80609fc42 container test-container: STEP: delete the pod Jan 11 20:20:37.700: INFO: Waiting for pod pod-8fab9b00-1893-425e-a862-b3f80609fc42 to disappear Jan 11 20:20:37.789: INFO: Pod pod-8fab9b00-1893-425e-a862-b3f80609fc42 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:37.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9476" for this suite. 
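For the (non-root,0644,default) case above, the evidence the suite collects is simply the test container's log output (the mounttest container prints the mode and content of the file it created on the emptyDir mount); the pod name below is the generated one from this run.
# Inspect what the test container reported about the emptyDir-backed file.
kubectl logs -n emptydir-9476 pod-8fab9b00-1893-425e-a862-b3f80609fc42 -c test-container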
Jan 11 20:20:44.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:47.450: INFO: namespace emptydir-9476 deletion completed in 9.569703605s • [SLOW TEST:12.986 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:43.261: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename node-lease-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-lease-test-8450 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [k8s.io] NodeLease /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:44.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-8450" for this suite. 
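The Lease the kubelet maintains can be inspected directly; the node name below is the worker from this run, and renewTime should keep advancing within the lease duration.
kubectl get lease -n kube-node-lease ip-10-250-27-25.ec2.internal \
  -o jsonpath='{.spec.renewTime}{"\n"}'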
Jan 11 20:20:50.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:53.732: INFO: namespace node-lease-test-8450 deletion completed in 9.510512477s • [SLOW TEST:10.471 seconds] [k8s.io] NodeLease /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when the NodeLease feature is enabled /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49 the kubelet should create and update a lease in the kube-node-lease namespace /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:34 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:39.256: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename sysctl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sysctl-8226 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:63 [It] should support unsafe sysctls which are actually whitelisted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:42.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-8226" for this suite. 
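The sysctl case comes down to a pod-level securityContext.sysctls entry. This is a minimal sketch using the same sysctl the run reports (kernel.shm_rmid_forced); the pod name, image, and command are illustrative, not the suite's pod.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-check
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: main
    image: busybox:1.31            # illustrative image
    command: ["sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"]
EOF
# kubectl logs sysctl-check should print 1 once the pod has completed.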
Jan 11 20:20:50.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:54.035: INFO: namespace sysctl-8226 deletion completed in 11.583274808s • [SLOW TEST:14.779 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should support unsafe sysctls which are actually whitelisted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:110 ------------------------------ S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:47.465: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7896 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: starting the proxy server Jan 11 20:20:48.250: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:48.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7896" for this suite. 
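The proxy test asks for an ephemeral port (-p 0) and scrapes it from the proxy's output; with a fixed port the same check is just a curl against /api/ through the proxy. The port and address below are illustrative.
kubectl proxy --port=8001 --disable-filter &
PROXY_PID=$!
sleep 1
curl -s http://127.0.0.1:8001/api/
kill "$PROXY_PID"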
Jan 11 20:20:55.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:20:58.699: INFO: namespace kubectl-7896 deletion completed in 9.624948467s • [SLOW TEST:11.233 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Proxy server /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1782 should support proxy with --port 0 [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:54.038: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename metrics-grabber STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in metrics-grabber-5102 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:36 W0111 20:20:54.843554 8609 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. [It] should grab all metrics from API server. /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:45 STEP: Connecting to /metrics endpoint [AfterEach] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:20:56.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "metrics-grabber-5102" for this suite. Jan 11 20:21:02.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:21:06.273: INFO: namespace metrics-grabber-5102 deletion completed in 9.71723306s • [SLOW TEST:12.235 seconds] [sig-instrumentation] MetricsGrabber /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23 should grab all metrics from API server. 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:45 ------------------------------ SS ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:53.735: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-9332 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 11 20:20:59.179: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 20:20:59.268: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 20:21:01.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 20:21:01.358: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 20:21:03.269: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 20:21:03.358: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:21:03.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9332" for this suite. 
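The prestop case hinges on container.lifecycle.preStop.exec. Below is a minimal sketch with an illustrative name, image, and hook command; the suite's pod instead has the hook call a separate handler pod so the invocation can be verified.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-check
spec:
  containers:
  - name: main
    image: busybox:1.31            # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Write to the main process's stdout so the message lands in the container log.
          command: ["sh", "-c", "echo prestop hook ran > /proc/1/fd/1"]
EOF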
Jan 11 20:21:15.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:21:19.034: INFO: namespace container-lifecycle-hook-9332 deletion completed in 15.488237291s • [SLOW TEST:25.299 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when create a pod with lifecycle hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:58.733: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-59 STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:20:59.370: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 11 20:21:03.780: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-59 create -f -' Jan 11 20:21:05.312: INFO: stderr: "" Jan 11 20:21:05.312: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 11 20:21:05.312: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-59 delete e2e-test-crd-publish-openapi-2358-crds test-cr' Jan 11 20:21:05.862: INFO: stderr: "" Jan 11 20:21:05.862: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 11 20:21:05.862: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-59 apply -f -' Jan 11 20:21:07.011: INFO: stderr: "" Jan 11 20:21:07.011: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 11 20:21:07.011: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=crd-publish-openapi-59 delete 
e2e-test-crd-publish-openapi-2358-crds test-cr' Jan 11 20:21:07.540: INFO: stderr: "" Jan 11 20:21:07.540: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 11 20:21:07.540: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config explain e2e-test-crd-publish-openapi-2358-crds' Jan 11 20:21:08.442: INFO: stderr: "" Jan 11 20:21:08.442: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2358-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:21:12.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-59" for this suite. 
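Once a structural CRD is published to OpenAPI, kubectl explain works on it like on any built-in type; the plural name below is the generated one from this run and will not exist on another cluster.
kubectl explain e2e-test-crd-publish-openapi-2358-crds
# Drill into a field; the preserved-unknown-fields spec/status show up as untyped objects.
kubectl explain e2e-test-crd-publish-openapi-2358-crds.spec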
Jan 11 20:21:19.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:21:22.389: INFO: namespace crd-publish-openapi-59 deletion completed in 9.571391095s • [SLOW TEST:23.657 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:21:19.046: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7176 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: validating api versions Jan 11 20:21:19.689: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config api-versions' Jan 11 20:21:20.224: INFO: stderr: "" Jan 11 20:21:20.224: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncert.gardener.cloud/v1alpha1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\ndns.gardener.cloud/v1alpha1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nsnapshot.storage.k8s.io/v1alpha1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:21:20.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7176" for this suite. 
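The api-versions check is a plain discovery call; either of the following confirms the core v1 group is served.
kubectl api-versions | grep -x v1
# Or hit the discovery endpoint directly.
kubectl get --raw /api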
Jan 11 20:21:28.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:21:31.802: INFO: namespace kubectl-7176 deletion completed in 11.486931826s • [SLOW TEST:12.756 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl api-versions /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:738 should check if v1 is in available api versions [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:21:22.399: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2777 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:21:23.052: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-2777' Jan 11 20:21:24.008: INFO: stderr: "" Jan 11 20:21:24.008: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 11 20:21:24.008: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-2777' Jan 11 20:21:24.953: INFO: stderr: "" Jan 11 20:21:24.953: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jan 11 20:21:26.043: INFO: Selector matched 1 pods for map[app:redis] Jan 11 20:21:26.043: INFO: Found 1 / 1 Jan 11 20:21:26.043: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 11 20:21:26.133: INFO: Selector matched 1 pods for map[app:redis] Jan 11 20:21:26.133: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 11 20:21:26.133: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe pod redis-master-kgk5m --namespace=kubectl-2777' Jan 11 20:21:26.747: INFO: stderr: "" Jan 11 20:21:26.747: INFO: stdout: "Name: redis-master-kgk5m\nNamespace: kubectl-2777\nPriority: 0\nNode: ip-10-250-27-25.ec2.internal/10.250.27.25\nStart Time: Sat, 11 Jan 2020 20:21:23 +0000\nLabels: app=redis\n role=master\nAnnotations: cni.projectcalico.org/podIP: 100.64.1.48/32\n kubernetes.io/psp: e2e-test-privileged-psp\nStatus: Running\nIP: 100.64.1.48\nIPs:\n IP: 100.64.1.48\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://9fc6dc900eda30e52ac01e5c734ac9da875b6d1a6b5d6e06adf07f6c90572010\n Image: docker.io/library/redis:5.0.5-alpine\n Image ID: docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 11 Jan 2020 20:21:24 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qs48z (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-qs48z:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qs48z\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-2777/redis-master-kgk5m to ip-10-250-27-25.ec2.internal\n Normal Pulled 2s kubelet, ip-10-250-27-25.ec2.internal Container image \"docker.io/library/redis:5.0.5-alpine\" already present on machine\n Normal Created 2s kubelet, ip-10-250-27-25.ec2.internal Created container redis-master\n Normal Started 2s kubelet, ip-10-250-27-25.ec2.internal Started container redis-master\n" Jan 11 20:21:26.747: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe rc redis-master --namespace=kubectl-2777' Jan 11 20:21:27.449: INFO: stderr: "" Jan 11 20:21:27.449: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2777\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: docker.io/library/redis:5.0.5-alpine\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-kgk5m\n" Jan 11 20:21:27.449: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe service redis-master --namespace=kubectl-2777' Jan 11 20:21:28.255: INFO: stderr: "" Jan 11 20:21:28.255: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2777\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 
100.107.0.220\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 100.64.1.48:6379\nSession Affinity: None\nEvents: \n" Jan 11 20:21:28.346: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe node ip-10-250-27-25.ec2.internal' Jan 11 20:21:29.223: INFO: stderr: "" Jan 11 20:21:29.223: INFO: stdout: "Name: ip-10-250-27-25.ec2.internal\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=m5.large\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=us-east-1\n failure-domain.beta.kubernetes.io/zone=us-east-1c\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=ip-10-250-27-25.ec2.internal\n kubernetes.io/os=linux\n node.kubernetes.io/role=node\n worker.garden.sapcloud.io/group=worker-1\n worker.gardener.cloud/pool=worker-1\nAnnotations: csi.volume.kubernetes.io/nodeid:\n {\"csi-hostpath-ephemeral-1641\":\"ip-10-250-27-25.ec2.internal\",\"csi-hostpath-ephemeral-3918\":\"ip-10-250-27-25.ec2.internal\",\"csi-hostpath-p...\n node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.250.27.25/19\n projectcalico.org/IPv4IPIPTunnelAddr: 100.64.1.1\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 11 Jan 2020 15:56:03 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n FrequentContainerdRestart False Sat, 11 Jan 2020 20:20:38 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentContainerdRestart containerd is functioning properly\n CorruptDockerOverlay2 False Sat, 11 Jan 2020 20:20:38 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoCorruptDockerOverlay2 docker overlay2 is functioning properly\n KernelDeadlock False Sat, 11 Jan 2020 20:20:38 +0000 Sat, 11 Jan 2020 15:56:58 +0000 KernelHasNoDeadlock kernel has no deadlock\n ReadonlyFilesystem False Sat, 11 Jan 2020 20:20:38 +0000 Sat, 11 Jan 2020 15:56:58 +0000 FilesystemIsNotReadOnly Filesystem is not read-only\n FrequentUnregisterNetDevice False Sat, 11 Jan 2020 20:20:38 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentUnregisterNetDevice node is functioning properly\n FrequentKubeletRestart False Sat, 11 Jan 2020 20:20:38 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentKubeletRestart kubelet is functioning properly\n FrequentDockerRestart False Sat, 11 Jan 2020 20:20:38 +0000 Sat, 11 Jan 2020 15:56:58 +0000 NoFrequentDockerRestart docker is functioning properly\n NetworkUnavailable False Sat, 11 Jan 2020 15:56:18 +0000 Sat, 11 Jan 2020 15:56:18 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Sat, 11 Jan 2020 20:21:26 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Jan 2020 20:21:26 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Jan 2020 20:21:26 +0000 Sat, 11 Jan 2020 15:56:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Jan 2020 20:21:26 +0000 Sat, 11 Jan 2020 15:56:13 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.250.27.25\n Hostname: ip-10-250-27-25.ec2.internal\n InternalDNS: ip-10-250-27-25.ec2.internal\nCapacity:\n attachable-volumes-aws-ebs: 25\n cpu: 2\n ephemeral-storage: 28056816Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n 
memory: 7865496Ki\n pods: 110\nAllocatable:\n attachable-volumes-aws-ebs: 25\n cpu: 1920m\n ephemeral-storage: 27293670584\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 6577812679\n pods: 110\nSystem Info:\n Machine ID: ec280dba3c1837e27848a3dec8c080a9\n System UUID: ec280dba-3c18-37e2-7848-a3dec8c080a9\n Boot ID: 89e42b89-b944-47ea-8bf6-5f2fe6d80c97\n Kernel Version: 4.19.86-coreos\n OS Image: Container Linux by CoreOS 2303.3.0 (Rhyolite)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.6.3\n Kubelet Version: v1.16.4\n Kube-Proxy Version: v1.16.4\nPodCIDR: 100.64.1.0/24\nPodCIDRs: 100.64.1.0/24\nProviderID: aws:///us-east-1c/i-0a8c404292a3c92e9\nNon-terminated Pods: (10 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n container-lifecycle-hook-825 pod-handle-http-request 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23s\n container-probe-8193 liveness-81af33dc-8925-4583-8828-cf006b5db52e 0 (0%) 0 (0%) 0 (0%) 0 (0%) 53s\n kube-system calico-node-m8r2d 100m (5%) 500m (26%) 100Mi (1%) 700Mi (11%) 4h25m\n kube-system kube-proxy-rq4kf 20m (1%) 0 (0%) 64Mi (1%) 0 (0%) 4h25m\n kube-system node-exporter-l6q84 5m (0%) 25m (1%) 10Mi (0%) 100Mi (1%) 4h25m\n kube-system node-problem-detector-9z5sq 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) 4h25m\n kubectl-2777 redis-master-kgk5m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6s\n pods-2560 pod-back-off-image 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m16s\n pods-7628 back-off-cap 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m\n pv-7435 nfs-server 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m30s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 145m (7%) 725m (37%)\n memory 194Mi (3%) 900Mi (14%)\n ephemeral-storage 0 (0%) 0 (0%)\n attachable-volumes-aws-ebs 0 0\nEvents: \n" Jan 11 20:21:29.224: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe namespace kubectl-2777' Jan 11 20:21:29.916: INFO: stderr: "" Jan 11 20:21:29.917: INFO: stdout: "Name: kubectl-2777\nLabels: e2e-framework=kubectl\n e2e-run=9686fcea-62a5-4baa-98ee-4ffc762fc4a8\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:21:29.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2777" for this suite. 
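The describe sequence above can be replayed by hand with the same kubectl invocations the suite logs; only the server flag and the generated pod name change from run to run:

kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config -n kubectl-2777 describe pod redis-master-kgk5m
kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config -n kubectl-2777 describe rc redis-master
kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config -n kubectl-2777 describe service redis-master
kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe node ip-10-250-27-25.ec2.internal
kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config describe namespace kubectl-2777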
Jan 11 20:21:42.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:21:45.584: INFO: namespace kubectl-2777 deletion completed in 15.576322649s • [SLOW TEST:23.185 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1000 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:21:06.277: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-825 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 11 20:21:11.832: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 20:21:11.922: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 20:21:13.922: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 20:21:14.012: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 20:21:15.922: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 20:21:16.012: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:21:16.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-825" for this suite. 
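For orientation, the poststart spec above points a pod's postStart hook at the HTTP handler pod its BeforeEach created (pod-handle-http-request, visible in the node describe output earlier). A minimal sketch of that shape, with an illustrative handler address and port rather than the fixture's real values:

kubectl -n container-lifecycle-hook-825 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: docker.io/library/httpd:2.4.38-alpine
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # illustrative; the handler pod records the request it receives
          port: 8080                   # illustrative handler port
          host: 100.64.1.46            # illustrative handler-pod IP
EOF

The kubelet does not mark the container Running until the postStart HTTP call has completed, which is what the "check poststart hook" step verifies on the handler side.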
Jan 11 20:21:44.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:21:47.693: INFO: namespace container-lifecycle-hook-825 deletion completed in 31.588429767s • [SLOW TEST:41.415 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when create a pod with lifecycle hook /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:21:47.703: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pv STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-5420 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:50 [It] should allow deletion of pod with invalid volume : secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:56 Jan 11 20:22:18.448: INFO: Deleting pod "pv-5420"/"pod-ephm-test-projected-krn8" Jan 11 20:22:18.448: INFO: Deleting pod "pod-ephm-test-projected-krn8" in namespace "pv-5420" Jan 11 20:22:18.539: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-krn8" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:22:24.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5420" for this suite. 
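The pv-5420 spec above checks nothing more than that a pod stuck on a volume it can never mount can still be deleted cleanly. A rough sketch of such a pod (the test's variant uses a projected secret source; a plain secret volume pointing at a secret that was never created shows the same ContainerCreating hang):

kubectl -n pv-5420 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-ephm-test
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: broken
      mountPath: /mnt/broken
  volumes:
  - name: broken
    secret:
      secretName: secret-that-does-not-exist   # intentionally missing
EOF
kubectl -n pv-5420 delete pod pod-ephm-test    # must complete, as in the 5m0s "fully deleted" wait above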
Jan 11 20:22:31.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:22:34.479: INFO: namespace pv-5420 deletion completed in 9.667827738s • [SLOW TEST:46.776 seconds] [sig-storage] Ephemeralstorage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:54 should allow deletion of pod with invalid volume : secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:56 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:21:45.591: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename disruption STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-8214 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52 [It] evictions: no PDB => should allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 STEP: locating a running pod STEP: Waiting for all pods to be running [AfterEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:21:48.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8214" for this suite. 
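The disruption-8214 spec only verifies that, with no PodDisruptionBudget selecting the pod, the eviction subresource admits the request (kubectl drain is the everyday client of that same Eviction API). A quick manual confirmation that nothing could have blocked the eviction:

kubectl -n disruption-8214 get pdb        # nothing listed in the test namespace
kubectl get pdb --all-namespaces          # with no budget covering the pod, an eviction is always allowed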
Jan 11 20:22:35.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:22:38.358: INFO: namespace disruption-8214 deletion completed in 49.565399446s • [SLOW TEST:52.767 seconds] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: no PDB => should allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:22:34.498: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3166 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:22:46.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3166" for this suite. Jan 11 20:22:55.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:22:58.470: INFO: namespace resourcequota-3166 deletion completed in 11.587490595s • [SLOW TEST:23.972 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:22:38.373: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-746 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:349 STEP: Initializing test volumes Jan 11 20:22:41.464: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-746 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-84992a88-8498-4f83-8b70-a56ce5eb0bf7' Jan 11 20:22:42.754: INFO: stderr: "" Jan 11 20:22:42.754: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:22:42.754: INFO: Creating a PV followed by a PVC Jan 11 20:22:42.934: INFO: Waiting for PV local-pv8g9f7 to bind to PVC pvc-chc8m Jan 11 20:22:42.934: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-chc8m] to have phase Bound Jan 11 20:22:43.024: INFO: PersistentVolumeClaim pvc-chc8m found and phase=Bound (89.389732ms) Jan 11 20:22:43.024: INFO: Waiting up to 3m0s for PersistentVolume local-pv8g9f7 to have phase Bound Jan 11 20:22:43.113: INFO: PersistentVolume local-pv8g9f7 found and phase=Bound (88.841276ms) [It] should fail scheduling due to different NodeAffinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:365 STEP: local-volume-type: dir STEP: Initializing test volumes Jan 11 20:22:43.291: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-746 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a16d9dc7-5cd0-489e-b058-68a388e7e5d9' Jan 11 20:22:44.573: INFO: stderr: "" Jan 11 20:22:44.573: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:22:44.573: INFO: Creating a PV followed by a PVC Jan 11 20:22:44.752: INFO: Waiting for PV local-pvvc7xl to bind to PVC pvc-zmqxw Jan 11 20:22:44.752: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-zmqxw] to have phase Bound Jan 11 20:22:44.841: INFO: PersistentVolumeClaim pvc-zmqxw found and phase=Bound (89.161742ms) Jan 11 20:22:44.841: 
INFO: Waiting up to 3m0s for PersistentVolume local-pvvc7xl to have phase Bound Jan 11 20:22:44.931: INFO: PersistentVolume local-pvvc7xl found and phase=Bound (89.247495ms) Jan 11 20:22:45.200: INFO: Waiting up to 5m0s for pod "security-context-35731c30-8c13-4c88-a82c-fc03ddfea5d8" in namespace "persistent-local-volumes-test-746" to be "Unschedulable" Jan 11 20:22:45.289: INFO: Pod "security-context-35731c30-8c13-4c88-a82c-fc03ddfea5d8": Phase="Pending", Reason="", readiness=false. Elapsed: 89.294039ms Jan 11 20:22:45.289: INFO: Pod "security-context-35731c30-8c13-4c88-a82c-fc03ddfea5d8" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:360 STEP: Cleaning up PVC and PV Jan 11 20:22:45.289: INFO: Deleting PersistentVolumeClaim "pvc-chc8m" Jan 11 20:22:45.379: INFO: Deleting PersistentVolume "local-pv8g9f7" STEP: Removing the test directory Jan 11 20:22:45.470: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-746 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-84992a88-8498-4f83-8b70-a56ce5eb0bf7' Jan 11 20:22:46.732: INFO: stderr: "" Jan 11 20:22:46.732: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:22:46.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-746" for this suite. 
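For orientation, the local PVs these persistent-local-volumes-test specs create pin themselves to one node through spec.nodeAffinity, so a pod that the test forces onto the other worker can never schedule; that is the "Unschedulable" condition waited for above. A minimal sketch of such a PV/PVC pair (storage class name and sizes are illustrative; the path and node are this run's):

kubectl -n persistent-local-volumes-test-746 create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-example
  local:
    path: /tmp/local-volume-test-84992a88-8498-4f83-8b70-a56ce5eb0bf7
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["ip-10-250-27-25.ec2.internal"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-example
  resources:
    requests:
      storage: 2Gi
EOF

A pod that mounts example-local-pvc but carries a nodeSelector pinning it to the cluster's other worker node stays Pending, which is the expected outcome here.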
Jan 11 20:22:59.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:23:02.596: INFO: namespace persistent-local-volumes-test-746 deletion completed in 15.683686605s • [SLOW TEST:24.223 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:343 should fail scheduling due to different NodeAffinity /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:365 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:35.591: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8193 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod liveness-81af33dc-8925-4583-8828-cf006b5db52e in namespace container-probe-8193 Jan 11 20:20:38.523: INFO: Started pod liveness-81af33dc-8925-4583-8828-cf006b5db52e in namespace container-probe-8193 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 20:20:38.612: INFO: Initial restart count of pod liveness-81af33dc-8925-4583-8828-cf006b5db52e is 0 Jan 11 20:20:51.239: INFO: Restart count of pod container-probe-8193/liveness-81af33dc-8925-4583-8828-cf006b5db52e is now 1 (12.626651673s elapsed) Jan 11 20:21:12.136: INFO: Restart count of pod container-probe-8193/liveness-81af33dc-8925-4583-8828-cf006b5db52e is now 2 (33.524315183s elapsed) Jan 11 20:21:30.943: INFO: Restart count of pod container-probe-8193/liveness-81af33dc-8925-4583-8828-cf006b5db52e is now 3 (52.330453468s elapsed) Jan 11 20:21:51.842: INFO: Restart count of pod container-probe-8193/liveness-81af33dc-8925-4583-8828-cf006b5db52e is now 4 (1m13.230084376s elapsed) Jan 11 20:23:02.896: INFO: Restart count of pod container-probe-8193/liveness-81af33dc-8925-4583-8828-cf006b5db52e is now 5 (2m24.284067769s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:23:02.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8193" for this suite. 
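The container-probe spec above only asserts that the RESTARTS counter increases monotonically while the kubelet restarts a failing container. Any pod whose liveness probe is guaranteed to start failing reproduces the pattern; a commonly used sketch (timings illustrative, not the fixture's):

kubectl -n container-probe-8193 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl -n container-probe-8193 get pod liveness-example -w    # watch RESTARTS climb, never reset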
Jan 11 20:23:09.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:23:12.650: INFO: namespace container-probe-8193 deletion completed in 9.571327967s • [SLOW TEST:157.059 seconds] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:23:02.614: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-4284 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62" Jan 11 20:23:05.707: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4284 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62" "/tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62"' Jan 11 20:23:07.081: INFO: stderr: "" Jan 11 20:23:07.081: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:23:07.081: INFO: Creating a PV followed by a PVC Jan 11 20:23:07.261: INFO: Waiting for PV local-pvh8r6z to bind to PVC pvc-gzfn4 Jan 11 20:23:07.261: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-gzfn4] to have phase Bound Jan 11 20:23:07.350: INFO: PersistentVolumeClaim pvc-gzfn4 found and phase=Bound (89.535143ms) Jan 11 20:23:07.350: INFO: Waiting up to 3m0s for PersistentVolume local-pvh8r6z to have phase Bound Jan 11 20:23:07.440: INFO: PersistentVolume local-pvh8r6z found and phase=Bound (89.341996ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 20:23:10.069: INFO: pod "security-context-4552ffa5-161f-4f8f-b401-a11baae994f1" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 
20:23:10.069: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4284 security-context-4552ffa5-161f-4f8f-b401-a11baae994f1 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 20:23:11.389: INFO: stderr: "" Jan 11 20:23:11.389: INFO: stdout: "" Jan 11 20:23:11.389: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jan 11 20:23:11.389: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4284 security-context-4552ffa5-161f-4f8f-b401-a11baae994f1 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:23:12.733: INFO: stderr: "" Jan 11 20:23:12.733: INFO: stdout: "test-file-content\n" Jan 11 20:23:12.733: INFO: podRWCmdExec out: "test-file-content\n" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-4552ffa5-161f-4f8f-b401-a11baae994f1 in namespace persistent-local-volumes-test-4284 [AfterEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:23:12.826: INFO: Deleting PersistentVolumeClaim "pvc-gzfn4" Jan 11 20:23:12.916: INFO: Deleting PersistentVolume "local-pvh8r6z" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62" Jan 11 20:23:13.007: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4284 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62"' Jan 11 20:23:14.382: INFO: stderr: "" Jan 11 20:23:14.382: INFO: stdout: "" STEP: Removing the test directory Jan 11 20:23:14.383: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-4284 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62' Jan 11 20:23:15.787: INFO: stderr: "" Jan 11 20:23:15.787: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:23:15.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4284" for this suite. 
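The [Volume type: tmpfs] variant differs from the plain directory variant only in how the backing path is prepared on the node. The suite drives this through its hostexec pod and nsenter, as logged above, but the node-side commands reduce to (the device name passed to mount is arbitrary for tmpfs; the path is this run's):

mkdir -p /tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62
mount -t tmpfs -o size=10m tmpfs /tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62
# ... create the local PV/PVC over that path, run the writer/reader pod ...
umount /tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62
rm -r /tmp/local-volume-test-f929cb6b-2752-42d1-b0dd-244920cffe62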
Jan 11 20:23:22.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:23:25.543: INFO: namespace persistent-local-volumes-test-4284 deletion completed in 9.574532483s • [SLOW TEST:22.930 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:22:58.511: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename port-forwarding STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-953 STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468 STEP: Creating the target pod STEP: Running 'kubectl port-forward' Jan 11 20:23:07.529: INFO: starting port-forward command and streaming output Jan 11 20:23:07.529: INFO: Asynchronously running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config port-forward --namespace=port-forwarding-953 pfpod :80' Jan 11 20:23:07.529: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Sending the expected data to the local port STEP: Closing the write half of the client's connection STEP: Reading data from the local port STEP: Waiting for the target pod to stop running Jan 11 20:23:09.783: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-953" to be "container terminated" Jan 11 20:23:09.873: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 89.982932ms Jan 11 20:23:11.964: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.18101995s Jan 11 20:23:11.964: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs STEP: Closing the connection to the local port [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:23:12.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-953" for this suite. 
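The port-forwarding spec drives kubectl port-forward programmatically and then speaks raw TCP to the forwarded port. Done by hand, the equivalent is roughly (the local port is whatever kubectl prints; nc is just one possible client):

kubectl --kubeconfig=/tmp/tm/kubeconfig/shoot.config -n port-forwarding-953 port-forward pfpod :80 &
# kubectl prints e.g. "Forwarding from 127.0.0.1:43210 -> 80"; use that port below
printf 'abcd1234' | nc 127.0.0.1 43210
# sending the data, closing the write half, and reading the response is what the spec then checks against the pod's logs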
Jan 11 20:23:24.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:23:27.889: INFO: namespace port-forwarding-953 deletion completed in 15.598622629s • [SLOW TEST:29.377 seconds] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on localhost /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463 that expects a client request /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:464 should support a client that connects, sends DATA, and disconnects /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:23:12.668: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7215 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should create endpoints for unready pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1382 STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod] STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-ad27005d-009d-42f1-942a-04329b5c7d01] STEP: Verifying pods for RC slow-terminating-unready-pod Jan 11 20:23:13.579: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: trying to dial each unique pod Jan 11 20:23:16.115: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-t2qhb]: "NOW: 2020-01-11 20:23:16.02653576 +0000 UTC m=+1.874827470", 1 of 1 required successes so far STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-7215.svc.cluster.local Jan 11 20:23:16.116: INFO: Creating new exec pod Jan 11 20:23:18.387: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-7215 execpod-94s9k -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7215.svc.cluster.local:80/' Jan 11 20:23:19.769: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7215.svc.cluster.local:80/\n" Jan 11 20:23:19.769: INFO: stdout: "NOW: 2020-01-11 20:23:19.70087862 +0000 UTC m=+5.549170322" STEP: Scaling down replication controller to zero STEP: Scaling ReplicationController 
slow-terminating-unready-pod in namespace services-7215 to 0 STEP: Update service to not tolerate unready services STEP: Check if pod is unreachable Jan 11 20:23:23.679: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-7215 execpod-94s9k -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7215.svc.cluster.local:80/; test "$?" -ne "0"' Jan 11 20:23:27.058: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7215.svc.cluster.local:80/\n+ test 28 -ne 0\n" Jan 11 20:23:27.059: INFO: stdout: "" STEP: Update service to tolerate unready services again STEP: Check if terminating pod is available through service Jan 11 20:23:27.239: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-7215 execpod-94s9k -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7215.svc.cluster.local:80/' Jan 11 20:23:28.633: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7215.svc.cluster.local:80/\n" Jan 11 20:23:28.633: INFO: stdout: "NOW: 2020-01-11 20:23:28.559584914 +0000 UTC m=+14.407876609" STEP: Remove pods immediately STEP: stopping RC slow-terminating-unready-pod in namespace services-7215 STEP: deleting service tolerate-unready in namespace services-7215 [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:23:29.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7215" for this suite. 
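The services-7215 spec flips the tolerate-unready service between publishing and not publishing endpoints for pods that are unready or terminating. It toggles this on the service object itself; on 1.16 the field form of that behaviour is spec.publishNotReadyAddresses, sketched here with illustrative ports:

kubectl -n services-7215 create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: tolerate-unready
spec:
  publishNotReadyAddresses: true       # keep endpoints for unready/terminating pods
  selector:
    name: slow-terminating-unready-pod
  ports:
  - port: 80
    targetPort: 8080
EOF

With the flag on, the curl through the service keeps succeeding even while the backing pod is slowly terminating, exactly as in the final "Check if terminating pod is available through service" step.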
Jan 11 20:23:35.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:23:38.863: INFO: namespace services-7215 deletion completed in 9.591931771s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:26.195 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should create endpoints for unready pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1382 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:23:38.867: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5749 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 20:23:39.600: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1026489a-c8b1-4c19-9296-ec4a9e50648c" in namespace "projected-5749" to be "success or failure" Jan 11 20:23:39.689: INFO: Pod "downwardapi-volume-1026489a-c8b1-4c19-9296-ec4a9e50648c": Phase="Pending", Reason="", readiness=false. Elapsed: 88.933825ms Jan 11 20:23:41.779: INFO: Pod "downwardapi-volume-1026489a-c8b1-4c19-9296-ec4a9e50648c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178623544s STEP: Saw pod success Jan 11 20:23:41.779: INFO: Pod "downwardapi-volume-1026489a-c8b1-4c19-9296-ec4a9e50648c" satisfied condition "success or failure" Jan 11 20:23:41.868: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-1026489a-c8b1-4c19-9296-ec4a9e50648c container client-container: STEP: delete the pod Jan 11 20:23:42.058: INFO: Waiting for pod downwardapi-volume-1026489a-c8b1-4c19-9296-ec4a9e50648c to disappear Jan 11 20:23:42.148: INFO: Pod downwardapi-volume-1026489a-c8b1-4c19-9296-ec4a9e50648c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:23:42.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5749" for this suite. 
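The projected-downwardAPI spec above leans on the rule that a container declaring no CPU limit sees the node's allocatable CPU through the downward API instead. A minimal sketch of the projected volume involved (pod, container and file names are illustrative):

kubectl -n projected-5749 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
# with no limit declared, the file holds the node allocatable CPU (1920m on this node, rounded up to 2 cores)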
Jan 11 20:23:48.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:23:51.814: INFO: namespace projected-5749 deletion completed in 9.574337081s • [SLOW TEST:12.947 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:23:51.828: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename containers STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-3003 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test override all Jan 11 20:23:52.560: INFO: Waiting up to 5m0s for pod "client-containers-9a6e8b58-cb42-4352-8097-8372f7af26b9" in namespace "containers-3003" to be "success or failure" Jan 11 20:23:52.650: INFO: Pod "client-containers-9a6e8b58-cb42-4352-8097-8372f7af26b9": Phase="Pending", Reason="", readiness=false. Elapsed: 90.209171ms Jan 11 20:23:54.740: INFO: Pod "client-containers-9a6e8b58-cb42-4352-8097-8372f7af26b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179596856s STEP: Saw pod success Jan 11 20:23:54.740: INFO: Pod "client-containers-9a6e8b58-cb42-4352-8097-8372f7af26b9" satisfied condition "success or failure" Jan 11 20:23:54.834: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod client-containers-9a6e8b58-cb42-4352-8097-8372f7af26b9 container test-container: STEP: delete the pod Jan 11 20:23:55.023: INFO: Waiting for pod client-containers-9a6e8b58-cb42-4352-8097-8372f7af26b9 to disappear Jan 11 20:23:55.112: INFO: Pod client-containers-9a6e8b58-cb42-4352-8097-8372f7af26b9 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:23:55.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3003" for this suite. 
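The Docker Containers spec above checks only that spec.containers[].command and args replace the image's ENTRYPOINT and CMD respectively; the shape it exercises is simply (image and echoed text are illustrative, not the test's fixture):

kubectl -n containers-3003 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh"]                # replaces the image ENTRYPOINT
    args: ["-c", "echo override all"]   # replaces the image CMD
EOF
kubectl -n containers-3003 logs client-containers-example   # shows the overridden output once the pod has succeeded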
Jan 11 20:24:01.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:24:04.909: INFO: namespace containers-3003 deletion completed in 9.706166897s • [SLOW TEST:13.081 seconds] [k8s.io] Docker Containers /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:23:25.553: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8905 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:371 STEP: creating the pod from apiVersion: v1 kind: Pod metadata: name: httpd labels: name: httpd spec: containers: - name: httpd image: docker.io/library/httpd:2.4.38-alpine ports: - containerPort: 80 readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 5 timeoutSeconds: 5 Jan 11 20:23:26.196: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-8905' Jan 11 20:23:27.325: INFO: stderr: "" Jan 11 20:23:27.325: INFO: stdout: "pod/httpd created\n" Jan 11 20:23:27.325: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Jan 11 20:23:27.325: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-8905" to be "running and ready" Jan 11 20:23:27.417: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 91.51842ms Jan 11 20:23:29.506: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.181250647s Jan 11 20:23:31.596: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.27095471s Jan 11 20:23:33.699: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.373461672s Jan 11 20:23:35.788: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.463145031s Jan 11 20:23:37.879: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 10.553884028s Jan 11 20:23:37.879: INFO: Pod "httpd" satisfied condition "running and ready" Jan 11 20:23:37.879: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [httpd] [It] should return command exit codes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:489 STEP: execing into a container with a successful command Jan 11 20:23:37.879: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8905 exec httpd -- /bin/sh -c exit 0' Jan 11 20:23:39.203: INFO: stderr: "" Jan 11 20:23:39.203: INFO: stdout: "" STEP: execing into a container with a failing command Jan 11 20:23:39.203: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8905 exec httpd -- /bin/sh -c exit 42' Jan 11 20:23:40.526: INFO: rc: 42 STEP: running a successful command Jan 11 20:23:40.526: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8905 run -i --image=docker.io/library/busybox:1.29 --restart=Never success -- /bin/sh -c exit 0' Jan 11 20:23:43.220: INFO: stderr: "" Jan 11 20:23:43.220: INFO: stdout: "" STEP: running a failing command Jan 11 20:23:43.220: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8905 run -i --image=docker.io/library/busybox:1.29 --restart=Never failure-1 -- /bin/sh -c exit 42' Jan 11 20:23:45.088: INFO: rc: 42 STEP: running a failing command without --restart=Never Jan 11 20:23:45.089: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8905 run -i --image=docker.io/library/busybox:1.29 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42' Jan 11 20:23:48.216: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Jan 11 20:23:48.216: INFO: stdout: "abcd1234" STEP: running a failing command without --restart=Never, but with --rm Jan 11 20:23:48.216: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8905 run -i --image=docker.io/library/busybox:1.29 --restart=OnFailure --rm failure-3 -- /bin/sh -c cat && exit 42' Jan 11 20:23:51.344: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Jan 11 20:23:51.344: INFO: stdout: "abcd1234job.batch \"failure-3\" deleted\n" Jan 11 20:23:51.344: INFO: Waiting for pod failure-3 to disappear Jan 11 20:23:51.438: INFO: Pod failure-3 no longer exists STEP: running a failing command with --leave-stdin-open Jan 11 20:23:51.438: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-8905 run -i --image=docker.io/library/busybox:1.29 --restart=Never failure-4 --leave-stdin-open -- /bin/sh -c exit 42' Jan 11 20:23:52.979: INFO: stderr: "" Jan 11 20:23:52.979: INFO: stdout: "" [AfterEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:377 STEP: using delete to clean up resources Jan 11 20:23:52.979: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-8905' Jan 11 20:23:53.533: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 20:23:53.533: INFO: stdout: "pod \"httpd\" force deleted\n" Jan 11 20:23:53.533: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=httpd --no-headers --namespace=kubectl-8905' Jan 11 20:23:54.090: INFO: stderr: "No resources found in kubectl-8905 namespace.\n" Jan 11 20:23:54.091: INFO: stdout: "" Jan 11 20:23:54.091: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=httpd --namespace=kubectl-8905 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 20:23:54.597: INFO: stderr: "" Jan 11 20:23:54.597: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:23:54.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8905" for this suite. 
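The block above exercises exit-code propagation through kubectl: "kubectl exec" returns the remote command's status, "kubectl run -i --restart=Never" surfaces a one-off pod's status the same way, and the deprecated job/v1 generator used for --restart=OnFailure does not. A minimal way to repeat the same checks by hand, assuming a reachable cluster, a namespace of your own in place of kubectl-8905, and the httpd pod from the manifest shown earlier (the quoting around the sh commands is only needed when typing them into a local shell):

# exec into the running container; kubectl's exit status mirrors the remote command's
kubectl --namespace=kubectl-8905 exec httpd -- /bin/sh -c 'exit 0'
echo "rc=$?"    # expected: rc=0
kubectl --namespace=kubectl-8905 exec httpd -- /bin/sh -c 'exit 42'
echo "rc=$?"    # expected: rc=42

# one-off pod: with --restart=Never the pod's exit code is surfaced the same way
kubectl --namespace=kubectl-8905 run -i --image=docker.io/library/busybox:1.29 \
  --restart=Never failure-1 -- /bin/sh -c 'exit 42'
echo "rc=$?"    # expected: rc=42

# with --restart=OnFailure, kubectl 1.16 falls back to the deprecated job/v1 generator
# (hence the warning in the log); a non-zero exit is then handled by the Job rather
# than reflected in kubectl's own exit status
echo abcd1234 | kubectl --namespace=kubectl-8905 run -i --rm \
  --image=docker.io/library/busybox:1.29 --restart=OnFailure failure-3 -- \
  /bin/sh -c 'cat && exit 42'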
Jan 11 20:24:06.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:24:10.261: INFO: namespace kubectl-8905 deletion completed in 15.573130128s • [SLOW TEST:44.707 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369 should return command exit codes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:489 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:24:04.913: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename runtimeclass STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in runtimeclass-2779 STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a deleted RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:65 STEP: Deleting RuntimeClass runtimeclass-2779-delete-me STEP: Waiting for the RuntimeClass to disappear [AfterEach] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:24:05.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-2779" for this suite. 
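The runtimeclass spec above boils down to: create a RuntimeClass, delete it, then submit a pod that still names it and expect that pod to be refused. A hand-written equivalent looks roughly like the sketch below; the object names, the handler value, and the pause image are illustrative assumptions rather than the generated objects the test uses, and in Kubernetes 1.16 RuntimeClass lives in the node.k8s.io/v1beta1 API group.

cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: delete-me-demo       # illustrative name
handler: runc                # must name a handler configured in the node's CRI runtime
EOF

kubectl delete runtimeclass delete-me-demo

# With the class gone, a pod that still references it is rejected: either at admission
# (if the RuntimeClass admission plugin is enabled) or by the kubelet when it cannot
# resolve the handler.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: rtc-demo
spec:
  runtimeClassName: delete-me-demo
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl describe pod rtc-demo    # if the object was admitted, this shows why it never starts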
Jan 11 20:24:14.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:24:17.577: INFO: namespace runtimeclass-2779 deletion completed in 11.569467256s • [SLOW TEST:12.665 seconds] [sig-node] RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:40 should reject a Pod requesting a deleted RuntimeClass /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:65 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:20:32.324: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename cronjob STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-7738 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:55 [It] should delete successful/failed finished jobs with limit of one job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:233 STEP: Creating a AllowConcurrent cronjob with custom successful-jobs-history-limit STEP: Ensuring a finished job exists STEP: Ensuring a finished job exists by listing jobs explicitly STEP: Ensuring this job and its pods does not exist anymore STEP: Ensuring there is 1 finished job by listing jobs explicitly STEP: Removing cronjob STEP: Creating a AllowConcurrent cronjob with custom failed-jobs-history-limit STEP: Ensuring a finished job exists STEP: Ensuring a finished job exists by listing jobs explicitly STEP: Ensuring this job and its pods does not exist anymore STEP: Ensuring there is 1 finished job by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:24:14.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7738" for this suite. 
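The cronjob spec above depends on the history-limit fields of the CronJob API: with successfulJobsHistoryLimit (or failedJobsHistoryLimit) set to 1, the controller prunes finished Jobs down to the single most recent one, which is what the "Ensuring there is 1 finished job" steps assert. A rough stand-in for the object it creates, with an illustrative name and a one-minute schedule (CronJob is still batch/v1beta1 in 1.16):

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: history-limit-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Allow          # the test's "AllowConcurrent" cronjob
  successfulJobsHistoryLimit: 1     # keep only the most recent successful Job
  failedJobsHistoryLimit: 1         # keep only the most recent failed Job
  jobTemplate:
    spec:
      backoffLimit: 0               # let a failing Job finish on the first pod failure
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: c
            image: docker.io/library/busybox:1.29
            command: ["sh", "-c", "exit 0"]   # switch to "exit 1" to exercise the failed-jobs limit
EOF

# after a few schedule ticks, at most one finished Job should remain
kubectl get jobs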
Jan 11 20:24:20.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:24:24.078: INFO: namespace cronjob-7738 deletion completed in 9.579893864s • [SLOW TEST:231.754 seconds] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete successful/failed finished jobs with limit of one job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:233 ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:23:27.896: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volumeio STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volumeio-5553 STEP: Waiting for a default service account to be provisioned in namespace [It] should write files of various sizes, verify size, validate content [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:137 Jan 11 20:23:28.552: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:23:28.552: INFO: Creating resource for inline volume STEP: starting emptydir-io-client STEP: writing 1048576 bytes to test file /opt/emptydir_io_test_volumeio-5553-1048576 Jan 11 20:23:30.825: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-5553 emptydir-io-client -- /bin/sh -c i=0; while [ $i -lt 1 ]; do dd if=/opt/emptydir-volumeio-5553-dd_if bs=1048576 >>/opt/emptydir_io_test_volumeio-5553-1048576 2>/dev/null; let i+=1; done' Jan 11 20:23:32.198: INFO: stderr: "" Jan 11 20:23:32.198: INFO: stdout: "" STEP: verifying file size Jan 11 20:23:32.198: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-5553 emptydir-io-client -- /bin/sh -c stat -c %s /opt/emptydir_io_test_volumeio-5553-1048576' Jan 11 20:23:33.633: INFO: stderr: "" Jan 11 20:23:33.633: INFO: stdout: "1048576\n" STEP: verifying file hash Jan 11 20:23:33.633: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-5553 emptydir-io-client -- /bin/sh -c md5sum /opt/emptydir_io_test_volumeio-5553-1048576 | cut -d' ' -f1' Jan 11 20:23:35.004: INFO: stderr: "" Jan 11 20:23:35.004: INFO: stdout: "5c34c2813223a7ca05a3c2f38c0d1710\n" STEP: writing 104857600 bytes to test file /opt/emptydir_io_test_volumeio-5553-104857600 Jan 11 20:23:35.004: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-5553 emptydir-io-client -- /bin/sh -c i=0; while [ $i -lt 100 ]; do dd if=/opt/emptydir-volumeio-5553-dd_if bs=1048576 >>/opt/emptydir_io_test_volumeio-5553-104857600 2>/dev/null; let i+=1; done' Jan 11 20:23:36.437: INFO: stderr: "" Jan 11 20:23:36.437: INFO: stdout: "" STEP: verifying file size Jan 11 20:23:36.437: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-5553 emptydir-io-client -- /bin/sh -c stat -c %s /opt/emptydir_io_test_volumeio-5553-104857600' Jan 11 20:23:37.765: INFO: stderr: "" Jan 11 20:23:37.765: INFO: stdout: "104857600\n" STEP: verifying file hash Jan 11 20:23:37.765: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-5553 emptydir-io-client -- /bin/sh -c md5sum /opt/emptydir_io_test_volumeio-5553-104857600 | cut -d' ' -f1' Jan 11 20:23:39.343: INFO: stderr: "" Jan 11 20:23:39.343: INFO: stdout: "f2fa202b1ffeedda5f3a58bd1ae81104\n" STEP: deleting test file /opt/emptydir_io_test_volumeio-5553-104857600... Jan 11 20:23:39.344: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-5553 emptydir-io-client -- /bin/sh -c rm -f /opt/emptydir_io_test_volumeio-5553-104857600' Jan 11 20:23:40.791: INFO: stderr: "" Jan 11 20:23:40.791: INFO: stdout: "" STEP: deleting test file /opt/emptydir_io_test_volumeio-5553-1048576... Jan 11 20:23:40.791: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-5553 emptydir-io-client -- /bin/sh -c rm -f /opt/emptydir_io_test_volumeio-5553-1048576' Jan 11 20:23:42.233: INFO: stderr: "" Jan 11 20:23:42.233: INFO: stdout: "" STEP: deleting test file /opt/emptydir-volumeio-5553-dd_if... Jan 11 20:23:42.233: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=volumeio-5553 emptydir-io-client -- /bin/sh -c rm -f /opt/emptydir-volumeio-5553-dd_if' Jan 11 20:23:43.505: INFO: stderr: "" Jan 11 20:23:43.505: INFO: stdout: "" STEP: deleting client pod "emptydir-io-client"... Jan 11 20:23:43.505: INFO: Deleting pod "emptydir-io-client" in namespace "volumeio-5553" Jan 11 20:23:43.596: INFO: Wait up to 5m0s for pod "emptydir-io-client" to be fully deleted Jan 11 20:23:55.776: INFO: sleeping a bit so kubelet can unmount and detach the volume Jan 11 20:24:15.776: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:24:15.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volumeio-5553" for this suite. 
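The volumeIO suite above checks data integrity on the mounted volume rather than throughput: it appends a 1 MiB seed file N times with dd, then verifies the resulting size with stat and the content with md5sum before deleting everything again. The same pattern condensed into a standalone sketch (the paths and the urandom-generated seed are assumptions; the real test copies its own seed file into the client pod and compares against precomputed hashes):

# run inside any pod or host where $DIR is the volume under test
DIR=/opt
SEED="$DIR/dd_if"                  # 1 MiB seed block
OUT="$DIR/io_test_104857600"       # target size: 100 MiB

dd if=/dev/urandom of="$SEED" bs=1048576 count=1 2>/dev/null

i=0
while [ "$i" -lt 100 ]; do         # 100 appends x 1 MiB = 104857600 bytes
  dd if="$SEED" bs=1048576 >> "$OUT" 2>/dev/null
  i=$((i + 1))
done

stat -c %s "$OUT"                  # expect 104857600
# expected hash: the seed repeated 100 times
i=0; while [ "$i" -lt 100 ]; do cat "$SEED"; i=$((i + 1)); done | md5sum | cut -d' ' -f1
md5sum "$OUT" | cut -d' ' -f1      # must print the same hash as the line above

rm -f "$OUT" "$SEED"               # the test removes its files the same way before deleting the pod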
Jan 11 20:24:22.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:24:25.465: INFO: namespace volumeio-5553 deletion completed in 9.597371981s • [SLOW TEST:57.569 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] volumeIO /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should write files of various sizes, verify size, validate content [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:137 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:17:59.048: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pv STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-7435 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:110 [BeforeEach] NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:127 STEP: creating nfs-server pod STEP: locating the "nfs-server" server pod Jan 11 20:18:02.053: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs nfs-server nfs-server --namespace=pv-7435' Jan 11 20:18:02.614: INFO: stderr: "" Jan 11 20:18:02.615: INFO: stdout: "Serving /exports\nrpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused\nStarting rpcbind\nNFS started\n" Jan 11 20:18:02.615: INFO: nfs server pod IP address: 100.64.1.15 [It] should create 4 PVs and 2 PVCs: test write access [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:251 Jan 11 20:18:02.615: INFO: Creating a PV followed by a PVC Jan 11 20:18:02.796: INFO: Creating a PV followed by a PVC Jan 11 20:18:03.157: INFO: Waiting up to 3m0s for PersistentVolume nfs-2b6dj to have phase Bound Jan 11 20:18:03.247: INFO: PersistentVolume nfs-2b6dj found and phase=Bound (89.687325ms) Jan 11 20:18:03.340: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-brzts] to have phase Bound Jan 11 20:18:03.431: INFO: PersistentVolumeClaim pvc-brzts found and phase=Bound (90.830605ms) Jan 11 20:18:03.431: INFO: Waiting up to 3m0s for PersistentVolume nfs-llz95 to have phase Bound Jan 11 20:18:03.521: INFO: PersistentVolume nfs-llz95 found and phase=Bound (89.91899ms) Jan 11 20:18:03.611: INFO: 
Waiting up to 3m0s for PersistentVolumeClaims [pvc-jpbsf] to have phase Bound Jan 11 20:18:03.701: INFO: PersistentVolumeClaim pvc-jpbsf found and phase=Bound (90.055655ms) Jan 11 20:18:03.701: INFO: Waiting up to 3m0s for PersistentVolume nfs-9ktwq to have phase Bound
Jan 11 20:18:03.791: INFO: PersistentVolume nfs-9ktwq found but phase is Available instead of Bound. (the same message repeats roughly every 2s until 20:21:03.550)
Jan 11 20:21:05.551: INFO: WARN: pv nfs-9ktwq is not bound after max wait
Jan 11 20:21:05.551: INFO: This may be ok since there are more pvs than pvcs
Jan 11 20:21:05.551: INFO: Waiting up to 3m0s for PersistentVolume nfs-wwhmn to have phase Bound
Jan 11 20:21:05.640: INFO: PersistentVolume nfs-wwhmn found but phase is Available instead of Bound. (the same message repeats roughly every 2s until 20:23:54.967)
Jan 11 20:23:57.058: INFO: PersistentVolume nfs-wwhmn found but phase is Available instead of Bound. Jan 11 20:23:59.148: INFO: PersistentVolume nfs-wwhmn found but phase is Available instead of Bound. Jan 11 20:24:01.239: INFO: PersistentVolume nfs-wwhmn found but phase is Available instead of Bound. Jan 11 20:24:03.332: INFO: PersistentVolume nfs-wwhmn found but phase is Available instead of Bound. Jan 11 20:24:05.429: INFO: PersistentVolume nfs-wwhmn found but phase is Available instead of Bound. Jan 11 20:24:07.429: INFO: WARN: pv nfs-wwhmn is not bound after max wait Jan 11 20:24:07.429: INFO: This may be ok since there are more pvs than pvcs STEP: Checking pod has write access to PersistentVolumes Jan 11 20:24:07.519: INFO: Creating nfs test pod STEP: Pod should terminate with exitcode 0 (success) Jan 11 20:24:07.613: INFO: Waiting up to 5m0s for pod "pvc-tester-wc292" in namespace "pv-7435" to be "success or failure" Jan 11 20:24:07.703: INFO: Pod "pvc-tester-wc292": Phase="Pending", Reason="", readiness=false. Elapsed: 90.578903ms Jan 11 20:24:09.796: INFO: Pod "pvc-tester-wc292": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.183569321s STEP: Saw pod success Jan 11 20:24:09.796: INFO: Pod "pvc-tester-wc292" satisfied condition "success or failure" Jan 11 20:24:09.796: INFO: Pod pvc-tester-wc292 succeeded Jan 11 20:24:09.796: INFO: Deleting pod "pvc-tester-wc292" in namespace "pv-7435" Jan 11 20:24:09.892: INFO: Wait up to 5m0s for pod "pvc-tester-wc292" to be fully deleted Jan 11 20:24:10.072: INFO: Creating nfs test pod STEP: Pod should terminate with exitcode 0 (success) Jan 11 20:24:10.163: INFO: Waiting up to 5m0s for pod "pvc-tester-zxjlt" in namespace "pv-7435" to be "success or failure" Jan 11 20:24:10.252: INFO: Pod "pvc-tester-zxjlt": Phase="Pending", Reason="", readiness=false. Elapsed: 89.373356ms Jan 11 20:24:12.343: INFO: Pod "pvc-tester-zxjlt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179837465s STEP: Saw pod success Jan 11 20:24:12.343: INFO: Pod "pvc-tester-zxjlt" satisfied condition "success or failure" Jan 11 20:24:12.343: INFO: Pod pvc-tester-zxjlt succeeded Jan 11 20:24:12.343: INFO: Deleting pod "pvc-tester-zxjlt" in namespace "pv-7435" Jan 11 20:24:12.436: INFO: Wait up to 5m0s for pod "pvc-tester-zxjlt" to be fully deleted STEP: Deleting PVCs to invoke reclaim policy Jan 11 20:24:12.706: INFO: Deleting PVC pvc-brzts to trigger reclamation of PV nfs-2b6dj Jan 11 20:24:12.706: INFO: Deleting PersistentVolumeClaim "pvc-brzts" Jan 11 20:24:12.797: INFO: Waiting for reclaim process to complete. Jan 11 20:24:12.797: INFO: Waiting up to 3m0s for PersistentVolume nfs-2b6dj to have phase Released Jan 11 20:24:12.887: INFO: PersistentVolume nfs-2b6dj found and phase=Released (90.283316ms) Jan 11 20:24:12.977: INFO: PV nfs-2b6dj now in "Released" phase Jan 11 20:24:13.157: INFO: Deleting PVC pvc-jpbsf to trigger reclamation of PV nfs-llz95 Jan 11 20:24:13.157: INFO: Deleting PersistentVolumeClaim "pvc-jpbsf" Jan 11 20:24:13.247: INFO: Waiting for reclaim process to complete. 
Jan 11 20:24:13.248: INFO: Waiting up to 3m0s for PersistentVolume nfs-llz95 to have phase Released Jan 11 20:24:13.337: INFO: PersistentVolume nfs-llz95 found and phase=Released (89.827442ms) Jan 11 20:24:13.428: INFO: PV nfs-llz95 now in "Released" phase [AfterEach] with multiple PVs and PVCs all in same ns /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:217 Jan 11 20:24:13.608: INFO: AfterEach: deleting 0 PVCs and 4 PVs... Jan 11 20:24:13.608: INFO: Deleting PersistentVolume "nfs-9ktwq" Jan 11 20:24:13.700: INFO: Deleting PersistentVolume "nfs-wwhmn" Jan 11 20:24:13.790: INFO: Deleting PersistentVolume "nfs-2b6dj" Jan 11 20:24:13.882: INFO: Deleting PersistentVolume "nfs-llz95" [AfterEach] NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:147 Jan 11 20:24:13.973: INFO: Deleting pod "nfs-server" in namespace "pv-7435" Jan 11 20:24:14.064: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted [AfterEach] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:24:24.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7435" for this suite. Jan 11 20:24:30.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:24:34.040: INFO: namespace pv-7435 deletion completed in 9.703056503s • [SLOW TEST:394.992 seconds] [sig-storage] PersistentVolumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 NFS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:120 with multiple PVs and PVCs all in same ns /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:210 should create 4 PVs and 2 PVCs: test write access [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:251 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:24:24.080: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename dns STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-5429 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5429.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5429.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 20:24:28.424: INFO: DNS probes using dns-5429/dns-test-c045beda-368d-4863-8fb6-bd6ebc3ffe8a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:24:28.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5429" for this suite. 
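The one-line probe commands logged above are hard to read mainly because of escaping: the doubled $$ gets a literal $ past Kubernetes' $(VAR) substitution in container args, since the whole loop is injected as the argument of the wheezy and jessie prober containers, whose /results marker files the test later reads back. Written as a plain shell script for one prober, the loop reduces to roughly the sketch below (the marker-file names and the dns-5429 namespace are taken from the log; run it inside a pod that uses the cluster DNS and has dig installed):

mkdir -p /results                  # in the test this directory is a shared volume in the prober pod

for i in $(seq 1 600); do
  # cluster service lookup over UDP and TCP
  check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local
  check="$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local

  # the prober pod's own A record, <dashed-pod-ip>.dns-5429.pod.cluster.local
  podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5429.pod.cluster.local"}')
  check="$(dig +notcp +noall +answer +search "$podARec" A)" \
    && test -n "$check" && echo OK > /results/wheezy_udp@PodARecord
  check="$(dig +tcp +noall +answer +search "$podARec" A)" \
    && test -n "$check" && echo OK > /results/wheezy_tcp@PodARecord

  sleep 1
done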
Jan 11 20:24:34.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:24:38.194: INFO: namespace dns-5429 deletion completed in 9.585882093s • [SLOW TEST:14.115 seconds] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:24:38.211: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename deployment STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-5905 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:24:38.853: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 11 20:24:39.033: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 11 20:24:41.216: INFO: Creating deployment "test-rolling-update-deployment" Jan 11 20:24:41.307: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 11 20:24:41.487: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 11 20:24:41.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371081, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371081, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371081, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371081, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-55d946486\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 20:24:43.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371081, loc:(*time.Location)(0x84bfb00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371081, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371083, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371081, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-55d946486\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 20:24:45.668: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62 Jan 11 20:24:45.939: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5905 /apis/apps/v1/namespaces/deployment-5905/deployments/test-rolling-update-deployment a62219b3-1482-4273-9134-10a5de785f92 79871 1 2020-01-11 20:24:41 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0018bec08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-11 20:24:41 +0000 UTC,LastTransitionTime:2020-01-11 20:24:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-55d946486" has successfully progressed.,LastUpdateTime:2020-01-11 20:24:44 +0000 UTC,LastTransitionTime:2020-01-11 20:24:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 11 20:24:46.029: INFO: New ReplicaSet "test-rolling-update-deployment-55d946486" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-55d946486 deployment-5905 /apis/apps/v1/namespaces/deployment-5905/replicasets/test-rolling-update-deployment-55d946486 71ce7550-6161-4069-9841-2f9b05881eab 79857 1 2020-01-11 20:24:41 +0000 UTC map[name:sample-pod pod-template-hash:55d946486] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment a62219b3-1482-4273-9134-10a5de785f92 0xc000786260 0xc000786261}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 55d946486,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:55d946486] map[] [] [] []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000786388 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 20:24:46.029: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 11 20:24:46.029: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5905 /apis/apps/v1/namespaces/deployment-5905/replicasets/test-rolling-update-controller f927de83-06c4-46e4-be07-ea5de5d4c06a 79868 2 2020-01-11 20:24:38 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment a62219b3-1482-4273-9134-10a5de785f92 0xc000786117 0xc000786118}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000786188 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 20:24:46.119: INFO: Pod "test-rolling-update-deployment-55d946486-mzh95" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-55d946486-mzh95 test-rolling-update-deployment-55d946486- deployment-5905 /api/v1/namespaces/deployment-5905/pods/test-rolling-update-deployment-55d946486-mzh95 f07c5387-aedd-44c3-9b1a-baa58cd335e2 79856 0 2020-01-11 20:24:41 +0000 UTC map[name:sample-pod pod-template-hash:55d946486] map[cni.projectcalico.org/podIP:100.64.1.76/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-rolling-update-deployment-55d946486 71ce7550-6161-4069-9841-2f9b05881eab 0xc000786c90 0xc000786c91}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8kh2s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8kh2s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8kh2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:24:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:24:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.76,StartTime:2020-01-11 20:24:41 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 20:24:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://f5339f8f117bcf7b1b5cca92fa42d6521f3479ab668ecc0e6f247f25ca7895bb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.76,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:24:46.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5905" for this suite. Jan 11 20:24:52.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:24:55.790: INFO: namespace deployment-5905 deletion completed in 9.579047527s • [SLOW TEST:17.580 seconds] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:24:17.600: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename dns STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8876 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-8876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8876.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8876.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 126.55.108.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.108.55.126_udp@PTR;check="$$(dig +tcp +noall +answer +search 126.55.108.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.108.55.126_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8876.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8876.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8876.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8876.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8876.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 126.55.108.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.108.55.126_udp@PTR;check="$$(dig +tcp +noall +answer +search 126.55.108.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.108.55.126_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 11 20:24:20.976: INFO: Unable to read wheezy_udp@dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:21.085: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:21.177: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:21.293: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:22.137: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:22.229: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:22.824: INFO: Lookups using dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23 failed for: [wheezy_udp@dns-test-service.dns-8876.svc.cluster.local wheezy_tcp@dns-test-service.dns-8876.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local] Jan 11 20:24:28.125: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:28.235: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:29.087: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:29.180: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:29.850: INFO: Lookups using 
dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local] Jan 11 20:24:33.104: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:33.197: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:34.044: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:34.137: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:34.699: INFO: Lookups using dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local] Jan 11 20:24:38.104: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:38.196: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:39.038: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:39.130: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:39.692: INFO: Lookups using dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local] Jan 11 20:24:43.104: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods 
dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:43.196: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:44.041: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:44.133: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:44.697: INFO: Lookups using dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local] Jan 11 20:24:48.104: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:48.195: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:49.046: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:49.140: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local from pod dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23: the server could not find the requested resource (get pods dns-test-a6147995-3670-4c8e-acca-cd020e53cb23) Jan 11 20:24:49.700: INFO: Lookups using dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8876.svc.cluster.local] Jan 11 20:24:54.694: INFO: DNS probes using dns-8876/dns-test-a6147995-3670-4c8e-acca-cd020e53cb23 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:24:54.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8876" for this suite. 
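The probes above exercise the standard in-cluster DNS names for a Service: the A record <service>.<namespace>.svc.cluster.local, the _http._tcp SRV records, and the pod PTR record. As a rough hand-run equivalent (a sketch only; the namespace, Service name, and images below are illustrative and not taken from this run):

# Create a scratch namespace, a backend, and a Service in front of it.
kubectl create namespace dns-demo
kubectl -n dns-demo create deployment dns-backend --image=httpd:2.4.38-alpine
kubectl -n dns-demo expose deployment dns-backend --name=dns-test-service --port=80
# Resolve the Service A record from inside the cluster, analogous to the dig
# loops the wheezy/jessie probe pods run in the commands logged above.
kubectl -n dns-demo run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup dns-test-service.dns-demo.svc.cluster.local
# SRV lookups (_http._tcp.dns-test-service...) additionally need a named port on the
# Service and a resolver such as dig, which is why the suite uses dedicated probe images.

With --rm -it the probe pod blocks until nslookup finishes and is then deleted, so the lookup result is printed straight to the terminal.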
Jan 11 20:25:03.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:25:06.635: INFO: namespace dns-8876 deletion completed in 11.56654753s • [SLOW TEST:49.035 seconds] [sig-network] DNS /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:24:55.842: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6808 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-volume-map-0f9c920c-2aa4-412c-8dcc-e189c29f4d27 STEP: Creating a pod to test consume configMaps Jan 11 20:24:56.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-1eb269b6-b84e-4a89-a7ac-abc82d8ff70d" in namespace "configmap-6808" to be "success or failure" Jan 11 20:24:56.754: INFO: Pod "pod-configmaps-1eb269b6-b84e-4a89-a7ac-abc82d8ff70d": Phase="Pending", Reason="", readiness=false. Elapsed: 89.735956ms Jan 11 20:24:58.844: INFO: Pod "pod-configmaps-1eb269b6-b84e-4a89-a7ac-abc82d8ff70d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179953147s STEP: Saw pod success Jan 11 20:24:58.844: INFO: Pod "pod-configmaps-1eb269b6-b84e-4a89-a7ac-abc82d8ff70d" satisfied condition "success or failure" Jan 11 20:24:58.933: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-1eb269b6-b84e-4a89-a7ac-abc82d8ff70d container configmap-volume-test: STEP: delete the pod Jan 11 20:24:59.124: INFO: Waiting for pod pod-configmaps-1eb269b6-b84e-4a89-a7ac-abc82d8ff70d to disappear Jan 11 20:24:59.214: INFO: Pod pod-configmaps-1eb269b6-b84e-4a89-a7ac-abc82d8ff70d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:24:59.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6808" for this suite. 
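The ConfigMap test above ("should be consumable from pods in volume with mappings") mounts a ConfigMap as a volume and remaps its keys to chosen file paths via items. A minimal sketch of the same pattern, with illustrative names (cm-demo, special.how, path/to/special-key) rather than the generated ones in the log:

kubectl create configmap cm-demo --from-literal=special.how=very
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    command: ["cat", "/etc/cm/path/to/special-key"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-demo
      items:                       # key-to-path mapping, the feature exercised above
      - key: special.how
        path: path/to/special-key
EOF
kubectl logs cm-volume-demo        # prints "very" once the pod has run to completion

Like the test pod, this one simply cats the mapped file and exits, so success shows up as a Succeeded phase plus the expected file contents in the container log.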
Jan 11 20:25:05.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:25:08.891: INFO: namespace configmap-6808 deletion completed in 9.586064627s • [SLOW TEST:13.049 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:24:10.274: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-4316 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod busybox-289ce77e-8d05-4b34-9581-b7152157448d in namespace container-probe-4316 Jan 11 20:24:13.181: INFO: Started pod busybox-289ce77e-8d05-4b34-9581-b7152157448d in namespace container-probe-4316 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 20:24:13.270: INFO: Initial restart count of pod busybox-289ce77e-8d05-4b34-9581-b7152157448d is 0 Jan 11 20:25:07.691: INFO: Restart count of pod container-probe-4316/busybox-289ce77e-8d05-4b34-9581-b7152157448d is now 1 (54.420210341s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:25:07.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4316" for this suite. 
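The container-probe test above creates a busybox pod whose liveness probe execs "cat /tmp/health", lets that file disappear, and then waits for the kubelet to restart the container (restartCount goes from 0 to 1 in the log). A minimal sketch of that shape; the timings and names here are illustrative, not the exact spec used by the suite:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    # Create the health file, keep it around for 30s, then remove it and idle.
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
kubectl get pod liveness-exec-demo -w   # RESTARTS increments once the file is gone

The restart is recorded as an increased restartCount in the pod's containerStatuses, which is the field the test polls.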
Jan 11 20:25:14.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:25:17.443: INFO: namespace container-probe-4316 deletion completed in 9.568057912s • [SLOW TEST:67.169 seconds] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:25:17.454: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6862 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name configmap-test-volume-map-164589ab-1ebe-4c87-bebe-31964f6b68eb STEP: Creating a pod to test consume configMaps Jan 11 20:25:18.274: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb412a54-c00f-4583-befe-d6c56a1304a8" in namespace "configmap-6862" to be "success or failure" Jan 11 20:25:18.363: INFO: Pod "pod-configmaps-fb412a54-c00f-4583-befe-d6c56a1304a8": Phase="Pending", Reason="", readiness=false. Elapsed: 89.420786ms Jan 11 20:25:20.453: INFO: Pod "pod-configmaps-fb412a54-c00f-4583-befe-d6c56a1304a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178970239s STEP: Saw pod success Jan 11 20:25:20.453: INFO: Pod "pod-configmaps-fb412a54-c00f-4583-befe-d6c56a1304a8" satisfied condition "success or failure" Jan 11 20:25:20.541: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-fb412a54-c00f-4583-befe-d6c56a1304a8 container configmap-volume-test: STEP: delete the pod Jan 11 20:25:20.734: INFO: Waiting for pod pod-configmaps-fb412a54-c00f-4583-befe-d6c56a1304a8 to disappear Jan 11 20:25:20.823: INFO: Pod pod-configmaps-fb412a54-c00f-4583-befe-d6c56a1304a8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:25:20.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6862" for this suite. 
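The second ConfigMap test is the same volume-with-mappings pattern, only consumed by a container running as a non-root user. Relative to the earlier sketch that amounts to adding a securityContext; the UID below is illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000       # any non-zero UID works; projected ConfigMap files default to mode 0644
    runAsNonRoot: true
  containers:
  - name: test
    image: busybox:1.28
    command: ["cat", "/etc/cm/path/to/special-key"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-demo       # reuses the ConfigMap created in the previous sketch
      items:
      - key: special.how
        path: path/to/special-key
EOF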
Jan 11 20:25:27.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:25:30.489: INFO: namespace configmap-6862 deletion completed in 9.575219405s • [SLOW TEST:13.035 seconds] [sig-storage] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:21:31.808: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename statefulset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-4 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-4 [It] should perform rolling updates and roll backs of template modifications with PVCs /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:258 STEP: Creating a new StatefulSet with PVCs Jan 11 20:21:32.633: INFO: Default storage class: "default" Jan 11 20:21:32.813: INFO: Found 1 stateful pods, waiting for 3 Jan 11 20:21:42.903: INFO: Found 1 stateful pods, waiting for 3 Jan 11 20:21:52.903: INFO: Found 1 stateful pods, waiting for 3 Jan 11 20:22:02.903: INFO: Found 2 stateful pods, waiting for 3 Jan 11 20:22:12.903: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:22:12.903: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:22:12.903: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 11 20:22:22.903: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:22:22.903: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:22:22.903: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 11 20:22:32.904: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:22:32.904: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:22:32.904: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true Jan 11 20:22:33.173: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec 
--namespace=statefulset-4 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 20:22:34.452: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 20:22:34.453: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 20:22:34.453: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 11 20:22:45.001: INFO: Updating stateful set ss STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 11 20:22:45.269: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 20:22:46.629: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 11 20:22:46.629: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 20:22:46.629: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 20:22:57.168: INFO: Waiting for StatefulSet statefulset-4/ss to complete update Jan 11 20:22:57.168: INFO: Waiting for Pod statefulset-4/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 Jan 11 20:22:57.168: INFO: Waiting for Pod statefulset-4/ss-1 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 Jan 11 20:23:07.351: INFO: Waiting for StatefulSet statefulset-4/ss to complete update Jan 11 20:23:07.351: INFO: Waiting for Pod statefulset-4/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 Jan 11 20:23:07.351: INFO: Waiting for Pod statefulset-4/ss-1 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 Jan 11 20:23:17.347: INFO: Waiting for StatefulSet statefulset-4/ss to complete update Jan 11 20:23:17.347: INFO: Waiting for Pod statefulset-4/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 Jan 11 20:23:17.347: INFO: Waiting for Pod statefulset-4/ss-1 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 Jan 11 20:23:27.347: INFO: Waiting for StatefulSet statefulset-4/ss to complete update Jan 11 20:23:27.347: INFO: Waiting for Pod statefulset-4/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 Jan 11 20:23:37.347: INFO: Waiting for StatefulSet statefulset-4/ss to complete update STEP: Rolling back to a previous revision Jan 11 20:23:47.347: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 20:23:48.664: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 20:23:48.664: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 20:23:48.664: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 20:23:59.212: INFO: Updating stateful set ss STEP: Rolling back update in reverse ordinal order Jan 11 20:23:59.481: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=statefulset-4 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 20:24:00.816: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 11 20:24:00.816: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 20:24:00.816: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 20:24:01.175: INFO: Waiting for StatefulSet statefulset-4/ss to complete update Jan 11 20:24:01.175: INFO: Waiting for Pod statefulset-4/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 Jan 11 20:24:01.175: INFO: Waiting for Pod statefulset-4/ss-1 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 Jan 11 20:24:01.175: INFO: Waiting for Pod statefulset-4/ss-2 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 Jan 11 20:24:11.354: INFO: Waiting for StatefulSet statefulset-4/ss to complete update Jan 11 20:24:11.354: INFO: Waiting for Pod statefulset-4/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 Jan 11 20:24:11.354: INFO: Waiting for Pod statefulset-4/ss-1 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 Jan 11 20:24:11.354: INFO: Waiting for Pod statefulset-4/ss-2 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 Jan 11 20:24:21.354: INFO: Waiting for StatefulSet statefulset-4/ss to complete update Jan 11 20:24:21.354: INFO: Waiting for Pod statefulset-4/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 Jan 11 20:24:21.354: INFO: Waiting for Pod statefulset-4/ss-1 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 Jan 11 20:24:31.355: INFO: Waiting for StatefulSet statefulset-4/ss to complete update Jan 11 20:24:31.356: INFO: Waiting for Pod statefulset-4/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 Jan 11 20:24:41.354: INFO: Waiting for StatefulSet statefulset-4/ss to complete update Jan 11 20:24:41.355: INFO: Waiting for Pod statefulset-4/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Jan 11 20:24:51.356: INFO: Deleting all statefulset in ns statefulset-4 Jan 11 20:24:51.445: INFO: Scaling statefulset ss to 0 Jan 11 20:25:21.803: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 20:25:21.892: INFO: Deleting statefulset ss Jan 11 20:25:22.072: INFO: Deleting pvc: datadir-ss-0 with volume pvc-50ddf82c-c089-4e4b-a44d-9b73e9bbb464 Jan 11 20:25:22.161: INFO: Deleting pvc: datadir-ss-1 with volume pvc-c3bca6a8-e948-426f-81db-0e7e1d04cae9 Jan 11 20:25:22.251: INFO: Deleting pvc: datadir-ss-2 with volume pvc-7b71bb85-14b7-4dd7-8fea-eca3ae045078 Jan 11 20:25:22.431: INFO: Still waiting for pvs of statefulset to disappear: pvc-7b71bb85-14b7-4dd7-8fea-eca3ae045078: {Phase:Released Message: Reason:} [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:25:32.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"statefulset-4" for this suite. Jan 11 20:25:40.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:25:44.105: INFO: namespace statefulset-4 deletion completed in 11.493074362s • [SLOW TEST:252.297 seconds] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should perform rolling updates and roll backs of template modifications with PVCs /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:258 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:24:34.061: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename cronjob STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in cronjob-7008 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:55 [It] should remove from active list jobs that have been deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:194 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Deleting the job STEP: deleting Job.batch forbid-1578774300 in namespace cronjob-7008, will wait for the garbage collector to delete the pods Jan 11 20:25:01.596: INFO: Deleting Job.batch forbid-1578774300 took: 93.250848ms Jan 11 20:25:02.296: INFO: Terminating Job.batch forbid-1578774300 pods took: 700.280575ms STEP: Ensuring job was deleted STEP: Ensuring the job is not in the cronjob active list STEP: Ensuring MissingJob event has occurred STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:25:48.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7008" for this suite. 
Jan 11 20:25:54.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:25:57.936: INFO: namespace cronjob-7008 deletion completed in 9.594931753s • [SLOW TEST:83.874 seconds] [sig-apps] CronJob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should remove from active list jobs that have been deleted /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:194 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:25:44.145: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-9442 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:25:52.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9442" for this suite. Jan 11 20:25:58.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:26:01.658: INFO: namespace resourcequota-9442 deletion completed in 9.503185653s • [SLOW TEST:17.514 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:19:12.897: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2560 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:681 STEP: getting restart delay-0 Jan 11 20:21:09.371: INFO: getRestartDelay: restartCount = 4, finishedAt=2020-01-11 20:20:15 +0000 UTC restartedAt=2020-01-11 20:21:08 +0000 UTC (53s) STEP: getting restart delay-1 Jan 11 20:22:42.041: INFO: getRestartDelay: restartCount = 5, finishedAt=2020-01-11 20:21:13 +0000 UTC restartedAt=2020-01-11 20:22:40 +0000 UTC (1m27s) STEP: getting restart delay-2 Jan 11 20:25:32.125: INFO: getRestartDelay: restartCount = 6, finishedAt=2020-01-11 20:22:45 +0000 UTC restartedAt=2020-01-11 20:25:31 +0000 UTC (2m46s) STEP: updating the image Jan 11 20:25:32.806: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Jan 11 20:25:55.982: INFO: getRestartDelay: restartCount = 8, finishedAt=2020-01-11 20:25:42 +0000 UTC restartedAt=2020-01-11 20:25:54 +0000 UTC (12s) [AfterEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:25:55.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2560" for this suite. 
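The Pods test above measures the kubelet's crash-loop back-off (the delays in the log grow from 53s to 1m27s to 2m46s, roughly doubling toward the kubelet's five-minute cap) and then verifies that updating the container image resets that timer (the next delay drops to 12s). A minimal sketch of the behaviour; the names and images are illustrative, not the spec used by the suite:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: back-off-demo
spec:
  restartPolicy: Always
  containers:
  - name: app
    image: busybox:1.28
    command: ["/bin/sh", "-c", "exit 1"]   # exits immediately, so every restart fails
EOF
kubectl get pod back-off-demo -w          # RESTARTS climbs while the CrashLoopBackOff delays grow
# Changing the image resets the back-off timer, which is what the test asserts
# (editing .spec.containers[0].image directly has the same effect):
kubectl set image pod/back-off-demo app=busybox:1.29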
Jan 11 20:26:08.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:26:11.679: INFO: namespace pods-2560 deletion completed in 15.601768721s • [SLOW TEST:418.782 seconds] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:681 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:24:25.469: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-4433 STEP: Waiting for a default service account to be provisioned in namespace [It] should support restarting containers using directory as subpath [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:303 Jan 11 20:24:26.110: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:24:26.110: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-lzqr STEP: Failing liveness probe Jan 11 20:24:28.383: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-4433 pod-subpath-test-emptydir-lzqr --container test-container-volume-emptydir-lzqr -- /bin/sh -c rm /probe-volume/probe-file' Jan 11 20:24:29.758: INFO: stderr: "" Jan 11 20:24:29.758: INFO: stdout: "" Jan 11 20:24:29.758: INFO: Pod exec output: STEP: Waiting for container to restart Jan 11 20:24:29.848: INFO: Container test-container-subpath-emptydir-lzqr, restarts: 0 Jan 11 20:24:39.938: INFO: Container test-container-subpath-emptydir-lzqr, restarts: 2 Jan 11 20:24:39.938: INFO: Container has restart count: 2 STEP: Rewriting the file Jan 11 20:24:39.938: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-4433 pod-subpath-test-emptydir-lzqr --container test-container-volume-emptydir-lzqr -- /bin/sh -c echo test-after > /probe-volume/probe-file' Jan 11 20:24:41.316: INFO: stderr: "" Jan 11 20:24:41.316: INFO: stdout: "" Jan 11 20:24:41.316: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jan 11 20:24:59.497: INFO: Container has restart count: 3 Jan 11 20:26:01.500: INFO: Container restart has stabilized Jan 11 20:26:01.500: INFO: Deleting pod "pod-subpath-test-emptydir-lzqr" in namespace "provisioning-4433" Jan 11 20:26:01.592: INFO: Wait up to 
5m0s for pod "pod-subpath-test-emptydir-lzqr" to be fully deleted STEP: Deleting pod Jan 11 20:26:09.772: INFO: Deleting pod "pod-subpath-test-emptydir-lzqr" in namespace "provisioning-4433" Jan 11 20:26:09.863: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:26:09.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-4433" for this suite. Jan 11 20:26:16.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:26:19.543: INFO: namespace provisioning-4433 deletion completed in 9.588502219s • [SLOW TEST:114.074 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support restarting containers using directory as subpath [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:303 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:26:11.699: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-6497 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:46 [It] nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:59 STEP: Creating a pod to test emptydir subpath on tmpfs Jan 11 20:26:12.552: INFO: Waiting up to 5m0s for pod "pod-9524d35e-1eaf-492e-8586-083a5d0bbf84" in namespace "emptydir-6497" to be "success or failure" Jan 11 20:26:12.642: INFO: Pod "pod-9524d35e-1eaf-492e-8586-083a5d0bbf84": Phase="Pending", Reason="", readiness=false. Elapsed: 89.949599ms Jan 11 20:26:14.732: INFO: Pod "pod-9524d35e-1eaf-492e-8586-083a5d0bbf84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.180097436s STEP: Saw pod success Jan 11 20:26:14.732: INFO: Pod "pod-9524d35e-1eaf-492e-8586-083a5d0bbf84" satisfied condition "success or failure" Jan 11 20:26:14.823: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-9524d35e-1eaf-492e-8586-083a5d0bbf84 container test-container: STEP: delete the pod Jan 11 20:26:15.146: INFO: Waiting for pod pod-9524d35e-1eaf-492e-8586-083a5d0bbf84 to disappear Jan 11 20:26:15.236: INFO: Pod pod-9524d35e-1eaf-492e-8586-083a5d0bbf84 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:26:15.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6497" for this suite. Jan 11 20:26:21.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:26:24.919: INFO: namespace emptydir-6497 deletion completed in 9.592570584s • [SLOW TEST:13.220 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44 nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:59 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for API chunking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:25:57.944: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename chunking STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in chunking-6320 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for API chunking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:50 STEP: creating a large number of resources [It] should return chunks of results for list calls /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:78 STEP: retrieving those results in paged fashion several times Jan 11 20:26:16.270: INFO: Retrieved 11/11 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDAxMFx1MDAwMCJ9 Jan 11 20:26:16.360: INFO: Retrieved 13/13 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDAyM1x1MDAwMCJ9 Jan 11 20:26:16.452: INFO: Retrieved 35/35 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDA1OFx1MDAwMCJ9 Jan 11 20:26:16.543: INFO: Retrieved 25/25 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDA4M1x1MDAwMCJ9 Jan 11 20:26:16.633: 
INFO: Retrieved 7/7 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDA5MFx1MDAwMCJ9 Jan 11 20:26:16.724: INFO: Retrieved 24/24 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDExNFx1MDAwMCJ9 Jan 11 20:26:16.815: INFO: Retrieved 32/32 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDE0Nlx1MDAwMCJ9 Jan 11 20:26:16.906: INFO: Retrieved 28/28 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDE3NFx1MDAwMCJ9 Jan 11 20:26:16.998: INFO: Retrieved 27/27 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDIwMVx1MDAwMCJ9 Jan 11 20:26:17.088: INFO: Retrieved 24/24 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDIyNVx1MDAwMCJ9 Jan 11 20:26:17.179: INFO: Retrieved 26/26 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDI1MVx1MDAwMCJ9 Jan 11 20:26:17.270: INFO: Retrieved 10/10 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDI2MVx1MDAwMCJ9 Jan 11 20:26:17.360: INFO: Retrieved 2/2 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDI2M1x1MDAwMCJ9 Jan 11 20:26:17.450: INFO: Retrieved 25/25 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDI4OFx1MDAwMCJ9 Jan 11 20:26:17.541: INFO: Retrieved 7/7 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDI5NVx1MDAwMCJ9 Jan 11 20:26:17.631: INFO: Retrieved 18/18 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDMxM1x1MDAwMCJ9 Jan 11 20:26:17.722: INFO: Retrieved 18/18 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDMzMVx1MDAwMCJ9 Jan 11 20:26:17.813: INFO: Retrieved 11/11 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDM0Mlx1MDAwMCJ9 Jan 11 20:26:17.904: INFO: Retrieved 24/24 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDM2Nlx1MDAwMCJ9 Jan 11 20:26:17.994: INFO: Retrieved 6/6 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDM3Mlx1MDAwMCJ9 Jan 11 20:26:18.085: INFO: Retrieved 11/11 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDM4M1x1MDAwMCJ9 Jan 11 20:26:18.175: INFO: Retrieved 14/14 results with rv 81137 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExMzcsInN0YXJ0IjoidGVtcGxhdGUtMDM5N1x1MDAwMCJ9 Jan 11 20:26:18.266: INFO: Retrieved 2/25 results with rv 81137 and continue Jan 11 20:26:18.356: INFO: Retrieved 7/7 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDAwNlx1MDAwMCJ9 Jan 11 20:26:18.447: INFO: Retrieved 24/24 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDAzMFx1MDAwMCJ9 Jan 11 20:26:18.538: INFO: Retrieved 6/6 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDAzNlx1MDAwMCJ9 Jan 11 20:26:18.629: INFO: Retrieved 36/36 results with rv 81142 and continue 
eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDA3Mlx1MDAwMCJ9 Jan 11 20:26:18.722: INFO: Retrieved 32/32 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDEwNFx1MDAwMCJ9 Jan 11 20:26:18.813: INFO: Retrieved 6/6 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDExMFx1MDAwMCJ9 Jan 11 20:26:18.904: INFO: Retrieved 19/19 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDEyOVx1MDAwMCJ9 Jan 11 20:26:18.995: INFO: Retrieved 28/28 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDE1N1x1MDAwMCJ9 Jan 11 20:26:19.086: INFO: Retrieved 5/5 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDE2Mlx1MDAwMCJ9 Jan 11 20:26:19.177: INFO: Retrieved 17/17 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDE3OVx1MDAwMCJ9 Jan 11 20:26:19.268: INFO: Retrieved 5/5 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDE4NFx1MDAwMCJ9 Jan 11 20:26:19.358: INFO: Retrieved 17/17 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDIwMVx1MDAwMCJ9 Jan 11 20:26:19.450: INFO: Retrieved 19/19 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDIyMFx1MDAwMCJ9 Jan 11 20:26:19.541: INFO: Retrieved 30/30 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDI1MFx1MDAwMCJ9 Jan 11 20:26:19.632: INFO: Retrieved 33/33 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDI4M1x1MDAwMCJ9 Jan 11 20:26:19.723: INFO: Retrieved 29/29 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDMxMlx1MDAwMCJ9 Jan 11 20:26:19.814: INFO: Retrieved 37/37 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDM0OVx1MDAwMCJ9 Jan 11 20:26:19.905: INFO: Retrieved 15/15 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDM2NFx1MDAwMCJ9 Jan 11 20:26:19.995: INFO: Retrieved 34/34 results with rv 81142 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNDIsInN0YXJ0IjoidGVtcGxhdGUtMDM5OFx1MDAwMCJ9 Jan 11 20:26:20.086: INFO: Retrieved 1/5 results with rv 81142 and continue Jan 11 20:26:20.178: INFO: Retrieved 27/27 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDAyNlx1MDAwMCJ9 Jan 11 20:26:20.269: INFO: Retrieved 35/35 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDA2MVx1MDAwMCJ9 Jan 11 20:26:20.360: INFO: Retrieved 38/38 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDA5OVx1MDAwMCJ9 Jan 11 20:26:20.451: INFO: Retrieved 10/10 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDEwOVx1MDAwMCJ9 Jan 11 20:26:20.542: INFO: Retrieved 14/14 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDEyM1x1MDAwMCJ9 Jan 11 20:26:20.633: INFO: Retrieved 25/25 results with rv 81151 and continue 
eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDE0OFx1MDAwMCJ9 Jan 11 20:26:20.724: INFO: Retrieved 32/32 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDE4MFx1MDAwMCJ9 Jan 11 20:26:20.815: INFO: Retrieved 32/32 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDIxMlx1MDAwMCJ9 Jan 11 20:26:20.906: INFO: Retrieved 17/17 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDIyOVx1MDAwMCJ9 Jan 11 20:26:20.997: INFO: Retrieved 37/37 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDI2Nlx1MDAwMCJ9 Jan 11 20:26:21.089: INFO: Retrieved 33/33 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDI5OVx1MDAwMCJ9 Jan 11 20:26:21.179: INFO: Retrieved 29/29 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDMyOFx1MDAwMCJ9 Jan 11 20:26:21.271: INFO: Retrieved 28/28 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDM1Nlx1MDAwMCJ9 Jan 11 20:26:21.361: INFO: Retrieved 9/9 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDM2NVx1MDAwMCJ9 Jan 11 20:26:21.452: INFO: Retrieved 32/32 results with rv 81151 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODExNTEsInN0YXJ0IjoidGVtcGxhdGUtMDM5N1x1MDAwMCJ9 Jan 11 20:26:21.542: INFO: Retrieved 2/32 results with rv 81151 and continue STEP: retrieving those results all at once [AfterEach] [sig-api-machinery] Servers with support for API chunking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:26:21.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "chunking-6320" for this suite. 
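
The chunking case above creates a few hundred PodTemplates and then walks them page by page; every "Retrieved N/N results with rv ... and continue ..." line is one chunked List call, and the base64 blob is the opaque continue token carrying the resourceVersion plus the key to resume after. A minimal client-go sketch of the same loop follows; the page size, the namespace reuse, and the context-taking List signature of newer client-go releases are my assumptions, not code lifted from the test.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs; any reachable cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	opts := metav1.ListOptions{Limit: 25} // ask the server for at most 25 items per page
	for page := 1; ; page++ {
		list, err := cs.CoreV1().PodTemplates("chunking-6320").List(context.TODO(), opts)
		if err != nil {
			panic(err)
		}
		fmt.Printf("page %d: %d items, continue=%q\n", page, len(list.Items), list.Continue)
		if list.Continue == "" {
			break // last chunk: the server has nothing more at this resourceVersion
		}
		opts.Continue = list.Continue // opaque token: resume after the last returned key
	}
}
```

Each full pass ends when the token comes back empty (the short pages such as "Retrieved 2/25 results"), and the next pass starts fresh at a newer resourceVersion, which is consistent with the rv stepping from 81137 to 81142 to 81151 in the log.
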
Jan 11 20:26:30.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:26:33.416: INFO: namespace chunking-6320 deletion completed in 11.679502812s • [SLOW TEST:35.472 seconds] [sig-api-machinery] Servers with support for API chunking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should return chunks of results for list calls /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:78 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:26:24.928: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-6600 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should prevent NodePort collisions /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1198 STEP: creating service nodeport-collision-1 with type NodePort in namespace services-6600 STEP: creating service nodeport-collision-2 with conflicting NodePort STEP: deleting service nodeport-collision-1 to release NodePort STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort STEP: deleting service nodeport-collision-2 in namespace services-6600 [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:26:26.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6600" for this suite. 
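
The Services case above leaves only STEP lines in the log: it creates nodeport-collision-1 with an explicit NodePort, tries to create nodeport-collision-2 on the same port, expects the API server to refuse, then frees the port and retries. A sketch of that sequence with client-go; the port number 30100 and the helper names are illustrative assumptions.

```go
package nodeportdemo

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodePortSvc builds a NodePort Service that pins an explicit port from the
// 30000-32767 node-port range (30100 is an arbitrary illustrative choice).
func nodePortSvc(name string, port int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": name},
			Ports:    []corev1.ServicePort{{Port: 80, NodePort: port}},
		},
	}
}

// Collide mirrors the test's STEPs: the second Create must fail while the
// first Service holds the port, and succeed once the first one is deleted.
func Collide(cs kubernetes.Interface, ns string) error {
	svcs := cs.CoreV1().Services(ns)
	if _, err := svcs.Create(context.TODO(), nodePortSvc("nodeport-collision-1", 30100), metav1.CreateOptions{}); err != nil {
		return err
	}
	if _, err := svcs.Create(context.TODO(), nodePortSvc("nodeport-collision-2", 30100), metav1.CreateOptions{}); err == nil {
		return fmt.Errorf("expected a port-already-allocated error, got none")
	}
	if err := svcs.Delete(context.TODO(), "nodeport-collision-1", metav1.DeleteOptions{}); err != nil {
		return err
	}
	_, err := svcs.Create(context.TODO(), nodeport := nodePortSvc("nodeport-collision-2", 30100), metav1.CreateOptions{})
	_ = nodeport
	return err
}
```
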
Jan 11 20:26:32.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:26:35.740: INFO: namespace services-6600 deletion completed in 9.600700284s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:10.812 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should prevent NodePort collisions /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1198 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:26:33.439: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-492 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath file is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239 Jan 11 20:26:34.081: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:26:34.081: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-shbx STEP: Checking for subpath error in container status Jan 11 20:26:38.355: INFO: Deleting pod "pod-subpath-test-emptydir-shbx" in namespace "provisioning-492" Jan 11 20:26:38.446: INFO: Wait up to 5m0s for pod "pod-subpath-test-emptydir-shbx" to be fully deleted STEP: Deleting pod Jan 11 20:26:44.627: INFO: Deleting pod "pod-subpath-test-emptydir-shbx" in namespace "provisioning-492" Jan 11 20:26:44.717: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:26:44.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-492" for this suite. 
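
For the subPath case above, the interesting part is the "Checking for subpath error in container status" step: the pod is created successfully but must never start, so the failure has to surface as a waiting container state rather than as a create-time error. A rough sketch of that check; the helper name, the poll interval, and the message substring are assumptions of mine, not the framework's own helper.

```go
package subpathcheck

import (
	"context"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForSubpathError polls the pod until some container sits in a Waiting
// state whose message mentions the subPath, i.e. the kubelet refused to bind
// a path that resolves outside the declared volume.
func WaitForSubpathError(cs kubernetes.Interface, ns, pod string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, st := range p.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil && strings.Contains(w.Message, "subPath") {
				return true, nil // kubelet rejected the out-of-volume subPath
			}
		}
		return false, nil
	})
}
```
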
Jan 11 20:26:51.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:26:54.405: INFO: namespace provisioning-492 deletion completed in 9.596046363s • [SLOW TEST:20.966 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if subpath file is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239 ------------------------------ SS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:25:30.497: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-4456 STEP: Waiting for a default service account to be provisioned in namespace [It] should support restarting containers using file as subpath [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:318 Jan 11 20:25:31.151: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path Jan 11 20:25:31.242: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-hostpath-dzdn STEP: Failing liveness probe Jan 11 20:25:33.514: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-4456 pod-subpath-test-hostpath-dzdn --container test-container-volume-hostpath-dzdn -- /bin/sh -c rm /probe-volume/probe-file' Jan 11 20:25:34.864: INFO: stderr: "" Jan 11 20:25:34.864: INFO: stdout: "" Jan 11 20:25:34.864: INFO: Pod exec output: STEP: Waiting for container to restart Jan 11 20:25:34.953: INFO: Container test-container-subpath-hostpath-dzdn, restarts: 0 Jan 11 20:25:45.043: INFO: Container test-container-subpath-hostpath-dzdn, restarts: 2 Jan 11 20:25:45.043: INFO: Container has restart count: 2 STEP: Rewriting the file Jan 11 20:25:45.043: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-4456 pod-subpath-test-hostpath-dzdn --container test-container-volume-hostpath-dzdn -- /bin/sh -c echo test-after > /probe-volume/probe-file' Jan 11 20:25:46.402: INFO: stderr: "" 
Jan 11 20:25:46.402: INFO: stdout: "" Jan 11 20:25:46.402: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jan 11 20:26:46.582: INFO: Container restart has stabilized Jan 11 20:26:46.582: INFO: Deleting pod "pod-subpath-test-hostpath-dzdn" in namespace "provisioning-4456" Jan 11 20:26:46.673: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-dzdn" to be fully deleted STEP: Deleting pod Jan 11 20:26:50.852: INFO: Deleting pod "pod-subpath-test-hostpath-dzdn" in namespace "provisioning-4456" Jan 11 20:26:50.941: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:26:50.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-4456" for this suite. Jan 11 20:26:57.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:27:00.600: INFO: namespace provisioning-4456 deletion completed in 9.568487203s • [SLOW TEST:90.104 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: hostPath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support restarting containers using file as subpath [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:318 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:26:19.547: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pod-network-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-7882 STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-7882 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 20:26:20.188: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 11 20:26:43.726: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.64.1.86:8080/dial?request=hostName&protocol=udp&host=100.64.1.84&port=8081&tries=1'] Namespace:pod-network-test-7882 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:26:43.726: INFO: >>> kubeConfig: 
/tmp/tm/kubeconfig/shoot.config Jan 11 20:26:44.607: INFO: Waiting for endpoints: map[] Jan 11 20:26:44.697: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.64.1.86:8080/dial?request=hostName&protocol=udp&host=100.64.0.218&port=8081&tries=1'] Namespace:pod-network-test-7882 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:26:44.697: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:26:45.603: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:26:45.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7882" for this suite. Jan 11 20:26:57.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:27:01.298: INFO: namespace pod-network-test-7882 deletion completed in 15.603118504s • [SLOW TEST:41.751 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:26:35.761: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-1786 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2" Jan 11 20:26:38.857: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2 && dd if=/dev/zero of=/tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2/file bs=4096 count=5120 && losetup -f 
/tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2/file' Jan 11 20:26:40.314: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.01738 s, 1.2 GB/s\n" Jan 11 20:26:40.314: INFO: stdout: "" Jan 11 20:26:40.314: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:26:41.723: INFO: stderr: "" Jan 11 20:26:41.723: INFO: stdout: "/dev/loop0\n" Jan 11 20:26:41.723: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2 && chmod o+rwx /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2' Jan 11 20:26:43.118: INFO: stderr: "mke2fs 1.44.5 (15-Dec-2018)\n" Jan 11 20:26:43.118: INFO: stdout: "Discarding device blocks: 1024/20480\b\b\b\b\b\b\b\b\b\b\b \b\b\b\b\b\b\b\b\b\b\bdone \nCreating filesystem with 20480 1k blocks and 5136 inodes\nFilesystem UUID: 7d06d242-a137-4168-b1e1-c85a3044a99d\nSuperblock backups stored on blocks: \n\t8193\n\nAllocating group tables: 0/3\b\b\b \b\b\bdone \nWriting inode tables: 0/3\b\b\b \b\b\bdone \nCreating journal (1024 blocks): done\nWriting superblocks and filesystem accounting information: 0/3\b\b\b \b\b\bdone\n\n" STEP: Creating local PVCs and PVs Jan 11 20:26:43.118: INFO: Creating a PV followed by a PVC Jan 11 20:26:43.298: INFO: Waiting for PV local-pvrkthd to bind to PVC pvc-sf5hs Jan 11 20:26:43.299: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-sf5hs] to have phase Bound Jan 11 20:26:43.388: INFO: PersistentVolumeClaim pvc-sf5hs found and phase=Bound (89.764884ms) Jan 11 20:26:43.388: INFO: Waiting up to 3m0s for PersistentVolume local-pvrkthd to have phase Bound Jan 11 20:26:43.479: INFO: PersistentVolume local-pvrkthd found and phase=Bound (90.244362ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jan 11 20:26:46.110: INFO: pod "security-context-7e234d2b-4162-4b36-a12b-a5c764a2df06" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 20:26:46.110: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 security-context-7e234d2b-4162-4b36-a12b-a5c764a2df06 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 20:26:47.481: INFO: stderr: "" Jan 11 20:26:47.481: INFO: stdout: "" Jan 11 20:26:47.481: INFO: podRWCmdExec out: "" err: Jan 11 20:26:47.481: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 security-context-7e234d2b-4162-4b36-a12b-a5c764a2df06 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:26:48.787: INFO: stderr: "" Jan 11 20:26:48.788: INFO: stdout: "test-file-content\n" Jan 11 20:26:48.788: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jan 11 20:26:51.237: INFO: pod "security-context-ce90ba1a-da11-4b25-be52-e1a62ddae934" created on Node "ip-10-250-27-25.ec2.internal" Jan 11 20:26:51.237: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 security-context-ce90ba1a-da11-4b25-be52-e1a62ddae934 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:26:52.699: INFO: stderr: "" Jan 11 20:26:52.699: INFO: stdout: "test-file-content\n" Jan 11 20:26:52.699: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod2 Jan 11 20:26:52.699: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 security-context-ce90ba1a-da11-4b25-be52-e1a62ddae934 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2 > /mnt/volume1/test-file' Jan 11 20:26:53.990: INFO: stderr: "" Jan 11 20:26:53.990: INFO: stdout: "" Jan 11 20:26:53.990: INFO: podRWCmdExec out: "" err: STEP: Reading in pod1 Jan 11 20:26:53.990: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 security-context-7e234d2b-4162-4b36-a12b-a5c764a2df06 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:26:55.277: INFO: stderr: "" Jan 11 20:26:55.277: INFO: stdout: "/tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2\n" Jan 11 20:26:55.277: INFO: podRWCmdExec out: "/tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-7e234d2b-4162-4b36-a12b-a5c764a2df06 in namespace persistent-local-volumes-test-1786 STEP: Deleting pod2 STEP: Deleting pod security-context-ce90ba1a-da11-4b25-be52-e1a62ddae934 in namespace persistent-local-volumes-test-1786 [AfterEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:26:55.459: INFO: Deleting PersistentVolumeClaim "pvc-sf5hs" Jan 11 20:26:55.549: INFO: Deleting PersistentVolume "local-pvrkthd" Jan 11 20:26:55.639: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2' Jan 11 20:26:56.945: INFO: stderr: "" Jan 11 20:26:56.945: INFO: 
stdout: "" Jan 11 20:26:56.945: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:26:58.382: INFO: stderr: "" Jan 11 20:26:58.382: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2/file Jan 11 20:26:58.382: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 20:26:59.739: INFO: stderr: "" Jan 11 20:26:59.739: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2 Jan 11 20:26:59.739: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-1786 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-21c96eff-0f5b-4390-890f-c169bdbabdf2' Jan 11 20:27:01.070: INFO: stderr: "" Jan 11 20:27:01.070: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:01.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1786" for this suite. 
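
The blockfswithformat variant above does all of its node-side work through kubectl exec on a hostexec pod (dd a backing file, losetup, mkfs, mount), and then publishes the mount point as a local PersistentVolume pinned to that node. A sketch of roughly what that PV object looks like; the capacity, reclaim policy, and naming are assumptions, while the path and node name are the ones from the log.

```go
package localpv

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// LocalPV describes a formatted-and-mounted directory on one node as a
// Filesystem-mode local volume, with node affinity so pods using it are
// scheduled onto that node only.
func LocalPV(node, path string) *corev1.PersistentVolume {
	fsMode := corev1.PersistentVolumeFilesystem
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			VolumeMode:                    &fsMode,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: path},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{node},
						}},
					}},
				},
			},
		},
	}
}
```

Because the PV already exists when the claim is created, binding completes almost immediately, which is why the log shows "found and phase=Bound" within about 90ms.
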
Jan 11 20:27:07.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:27:10.847: INFO: namespace persistent-local-volumes-test-1786 deletion completed in 9.594464794s • [SLOW TEST:35.086 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:25:06.637: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename var-expansion STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-1216 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:271 Jan 11 20:27:07.636: INFO: Deleting pod "var-expansion-6fe15001-6941-48ba-a5c4-ba7bb712d00d" in namespace "var-expansion-1216" Jan 11 20:27:07.726: INFO: Wait up to 5m0s for pod "var-expansion-6fe15001-6941-48ba-a5c4-ba7bb712d00d" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:11.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1216" for this suite. 
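
The Variable Expansion case above leaves little trace in the log beyond the pod deletion, but what it exercises is subPathExpr expanding an environment variable to an absolute path, which must be rejected rather than mounted. A minimal sketch of that mount shape; the names, image, and expanded value are illustrative.

```go
package subpathexpr

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// AbsoluteSubPathExprPod builds a pod whose volume subPath expands to an
// absolute path ("/tmp"); the expectation is that the mount fails and the
// pod never runs to completion normally.
func AbsoluteSubPathExprPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "var-expansion-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 300"},
				Env:     []corev1.EnvVar{{Name: "ABSOLUTE_PATH", Value: "/tmp"}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/data",
					SubPathExpr: "$(ABSOLUTE_PATH)", // expands to an absolute path, so it must be rejected
				}},
			}},
		},
	}
}
```
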
Jan 11 20:27:18.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:27:21.566: INFO: namespace var-expansion-1216 deletion completed in 9.569906566s • [SLOW TEST:134.929 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should fail substituting values in a volume subpath with absolute path [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:271 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:27:00.614: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-6732 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-4f13a604-9ad1-452c-8b81-57be922436d0" Jan 11 20:27:03.706: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6732 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4f13a604-9ad1-452c-8b81-57be922436d0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4f13a604-9ad1-452c-8b81-57be922436d0" "/tmp/local-volume-test-4f13a604-9ad1-452c-8b81-57be922436d0"' Jan 11 20:27:05.083: INFO: stderr: "" Jan 11 20:27:05.083: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:27:05.083: INFO: Creating a PV followed by a PVC Jan 11 20:27:05.262: INFO: Waiting for PV local-pvfqfdd to bind to PVC pvc-6zdr4 Jan 11 20:27:05.262: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-6zdr4] to have phase Bound Jan 11 20:27:05.352: INFO: PersistentVolumeClaim pvc-6zdr4 found and phase=Bound (89.374598ms) Jan 11 20:27:05.352: INFO: Waiting up to 3m0s for PersistentVolume local-pvfqfdd to have phase Bound Jan 11 20:27:05.441: INFO: PersistentVolume local-pvfqfdd found and phase=Bound (89.125407ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jan 11 20:27:07.978: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-ff8b0e70-185a-4656-a782-46687a4baf98 --namespace=persistent-local-volumes-test-6732 -- stat -c %g /mnt/volume1' Jan 11 20:27:09.307: INFO: stderr: "" Jan 11 20:27:09.307: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jan 11 20:27:11.667: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-1877128b-33bb-4d9a-8326-5ddb0ec0ed10 --namespace=persistent-local-volumes-test-6732 -- stat -c %g /mnt/volume1' Jan 11 20:27:13.021: INFO: stderr: "" Jan 11 20:27:13.021: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod security-context-ff8b0e70-185a-4656-a782-46687a4baf98 in namespace persistent-local-volumes-test-6732 STEP: Deleting second pod STEP: Deleting pod security-context-1877128b-33bb-4d9a-8326-5ddb0ec0ed10 in namespace persistent-local-volumes-test-6732 [AfterEach] [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:27:13.202: INFO: Deleting PersistentVolumeClaim "pvc-6zdr4" Jan 11 20:27:13.292: INFO: Deleting PersistentVolume "local-pvfqfdd" STEP: Unmount tmpfs mount point on node "ip-10-250-27-25.ec2.internal" at path "/tmp/local-volume-test-4f13a604-9ad1-452c-8b81-57be922436d0" Jan 11 20:27:13.383: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6732 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4f13a604-9ad1-452c-8b81-57be922436d0"' Jan 11 20:27:14.712: INFO: stderr: "" Jan 11 20:27:14.712: INFO: stdout: "" STEP: Removing the test directory Jan 11 20:27:14.712: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-6732 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4f13a604-9ad1-452c-8b81-57be922436d0' Jan 11 20:27:16.116: INFO: stderr: "" Jan 11 20:27:16.116: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:16.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6732" for this suite. 
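
The "stat -c %g /mnt/volume1" output of 1234 in both pods above comes from the pod-level fsGroup: when the kubelet sets the tmpfs-backed local volume up for each pod, it applies that GID to the mount, so both pods observe the same group ownership. Roughly, the pod-spec side of that looks like the sketch below; the names and image are illustrative, the GID 1234 is the one visible in the log.

```go
package fsgroupdemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodWithFSGroup mounts the claim with a pod-level SecurityContext.FSGroup,
// so the kubelet chowns the volume to gid 1234 before the container starts.
func PodWithFSGroup(ns, pvcName string) *corev1.Pod {
	fsGroup := int64(1234)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "security-context-", Namespace: ns},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: pvcName},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "write-pod",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "volume1", MountPath: "/mnt/volume1"}},
			}},
		},
	}
}
```
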
Jan 11 20:27:28.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:27:31.874: INFO: namespace persistent-local-volumes-test-6732 deletion completed in 15.576541534s • [SLOW TEST:31.261 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:27:01.319: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pod-network-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-1771 STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-1771 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 20:27:01.963: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 11 20:27:25.607: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.64.1.94:8080/dial?request=hostName&protocol=http&host=100.64.1.91&port=8080&tries=1'] Namespace:pod-network-test-1771 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:27:25.607: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:27:26.527: INFO: Waiting for endpoints: map[] Jan 11 20:27:26.617: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.64.1.94:8080/dial?request=hostName&protocol=http&host=100.64.0.219&port=8080&tries=1'] Namespace:pod-network-test-1771 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:27:26.617: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:27:27.476: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:27.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1771" for this suite. 
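
Both networking cases in this stretch (udp earlier, http here) drive the same agnhost /dial endpoint: the framework execs a curl inside a host-network test pod, which in turn probes the netserver pods and reports back which hostnames it reached. The same probe written directly in Go rather than via kubectl exec plus curl; the IPs in the log are throwaway pod addresses from this run, and the exact response shape shown in the comment is my recollection of agnhost's output rather than something the log confirms.

```go
package dialcheck

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// Dial asks the host-network test pod (proxyIP:8080) to probe targetIP:port
// over the given protocol and returns the aggregated response body, which
// agnhost reports as JSON (e.g. something like {"responses":["netserver-0"]}).
func Dial(proxyIP, targetIP, protocol string, port int) (string, error) {
	u := fmt.Sprintf("http://%s:8080/dial?%s", proxyIP, url.Values{
		"request":  {"hostName"},
		"protocol": {protocol},
		"host":     {targetIP},
		"port":     {fmt.Sprint(port)},
		"tries":    {"1"},
	}.Encode())
	resp, err := http.Get(u)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}
```
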
Jan 11 20:27:39.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:27:43.150: INFO: namespace pod-network-test-1771 deletion completed in 15.583712596s • [SLOW TEST:41.832 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:27:21.573: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-399 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 20:27:24.899: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-399 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-95739fc5-bb83-4e9f-862a-77a2c4244984-backend && ln -s /tmp/local-volume-test-95739fc5-bb83-4e9f-862a-77a2c4244984-backend /tmp/local-volume-test-95739fc5-bb83-4e9f-862a-77a2c4244984' Jan 11 20:27:26.327: INFO: stderr: "" Jan 11 20:27:26.327: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:27:26.327: INFO: Creating a PV followed by a PVC Jan 11 20:27:26.507: INFO: Waiting for PV local-pv2g5j9 to bind to PVC pvc-t9fsg Jan 11 20:27:26.507: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-t9fsg] to have phase Bound Jan 11 20:27:26.595: INFO: PersistentVolumeClaim pvc-t9fsg found and phase=Bound (88.805872ms) Jan 11 20:27:26.596: INFO: Waiting up to 3m0s for PersistentVolume local-pv2g5j9 to have phase Bound Jan 11 20:27:26.684: INFO: PersistentVolume local-pv2g5j9 found and phase=Bound (88.723448ms) [BeforeEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jan 11 20:27:29.310: INFO: pod "security-context-34fc74d1-4a5e-4423-bb35-b64e890de7ea" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 
Jan 11 20:27:29.310: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-399 security-context-34fc74d1-4a5e-4423-bb35-b64e890de7ea -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 20:27:30.661: INFO: stderr: "" Jan 11 20:27:30.661: INFO: stdout: "" Jan 11 20:27:30.661: INFO: podRWCmdExec out: "" err: [It] should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jan 11 20:27:30.661: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-399 security-context-34fc74d1-4a5e-4423-bb35-b64e890de7ea -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:27:32.008: INFO: stderr: "" Jan 11 20:27:32.008: INFO: stdout: "test-file-content\n" Jan 11 20:27:32.008: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod1 Jan 11 20:27:32.008: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-399 security-context-34fc74d1-4a5e-4423-bb35-b64e890de7ea -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-95739fc5-bb83-4e9f-862a-77a2c4244984 > /mnt/volume1/test-file' Jan 11 20:27:33.562: INFO: stderr: "" Jan 11 20:27:33.562: INFO: stdout: "" Jan 11 20:27:33.562: INFO: podRWCmdExec out: "" err: [AfterEach] One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod security-context-34fc74d1-4a5e-4423-bb35-b64e890de7ea in namespace persistent-local-volumes-test-399 [AfterEach] [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:27:33.652: INFO: Deleting PersistentVolumeClaim "pvc-t9fsg" Jan 11 20:27:33.742: INFO: Deleting PersistentVolume "local-pv2g5j9" STEP: Removing the test directory Jan 11 20:27:33.832: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-399 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-95739fc5-bb83-4e9f-862a-77a2c4244984 && rm -r /tmp/local-volume-test-95739fc5-bb83-4e9f-862a-77a2c4244984-backend' Jan 11 20:27:35.179: INFO: stderr: "" Jan 11 20:27:35.179: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:35.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-399" for this suite. 
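
For the dir-link variant, the only node-side setup is the mkdir plus ln -s exec above; the "prebound" part of the case means the claim is pointed straight at the already-created PV instead of waiting for scheduler-driven binding. A sketch of such a claim; the size and names are assumptions, and the field names follow the client-go vintage this suite uses (where the requests field is still ResourceRequirements).

```go
package preboundpvc

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PreboundPVC binds directly to a named local PV via spec.volumeName, with an
// empty storage class so no dynamic provisioner is triggered.
func PreboundPVC(ns, pvName string) *corev1.PersistentVolumeClaim {
	sc := ""
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-", Namespace: ns},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			VolumeName:       pvName, // pre-bind to the existing local PV
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
}
```
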
Jan 11 20:27:41.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:27:44.922: INFO: namespace persistent-local-volumes-test-399 deletion completed in 9.561429093s • [SLOW TEST:23.349 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:26:01.676: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4300 STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with configMap that has name projected-configmap-test-upd-50f32bcd-3e25-44c1-84d8-c20f00b0fd9c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-50f32bcd-3e25-44c1-84d8-c20f00b0fd9c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:31.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4300" for this suite. 
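
The projected-ConfigMap update case above spends most of its run in the "waiting to observe update in volume" step: after the ConfigMap object is updated, the kubelet only rewrites the projected file on one of its periodic sync passes, so the test polls the container until the new content appears. The volume shape being exercised is roughly the following sketch; the key and path names are illustrative.

```go
package projectedcm

import corev1 "k8s.io/api/core/v1"

// ProjectedConfigMapVolume projects one key of a ConfigMap into the pod; when
// the ConfigMap is updated, the kubelet eventually rewrites the file under the
// mount path (there is no immediate push, hence the long wait in the log).
func ProjectedConfigMapVolume(configMapName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
					},
				}},
			},
		},
	}
}
```
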
Jan 11 20:27:45.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:27:48.936: INFO: namespace projected-4300 deletion completed in 17.47844463s • [SLOW TEST:107.260 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:27:31.880: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename svcaccounts STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-6061 STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure a single API token exists /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:47 STEP: waiting for a single token reference Jan 11 20:27:33.201: INFO: default service account has a single secret reference STEP: ensuring the single token reference persists STEP: deleting the service account token STEP: waiting for a new token reference Jan 11 20:27:35.971: INFO: default service account has a new single secret reference STEP: ensuring the single token reference persists STEP: deleting the reference to the service account token STEP: waiting for a new token to be created and added Jan 11 20:27:38.829: INFO: default service account has a new single secret reference STEP: ensuring the single token reference persists [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:40.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6061" for this suite. 
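
What the ServiceAccounts case keeps re-checking after each deletion is simply the .secrets list on the default ServiceAccount: the token controller repopulates it with exactly one freshly minted token Secret. A sketch of that check; note this reflects the auto-created token Secrets of this 1.16-era cluster, a behavior later releases removed.

```go
package satoken

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// singleTokenRef returns the name of the single token secret referenced by the
// default ServiceAccount, or an error if there is not exactly one reference.
func singleTokenRef(cs kubernetes.Interface, ns string) (string, error) {
	sa, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	if n := len(sa.Secrets); n != 1 {
		return "", fmt.Errorf("expected a single secret reference, got %d", n)
	}
	return sa.Secrets[0].Name, nil
}
```
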
Jan 11 20:27:47.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:27:50.573: INFO: namespace svcaccounts-6061 deletion completed in 9.563932508s • [SLOW TEST:18.693 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should ensure a single API token exists /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:27:48.938: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-640 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name projected-configmap-test-volume-6ed04200-b5f9-4d8c-a070-29ec2af71544 STEP: Creating a pod to test consume configMaps Jan 11 20:27:50.034: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-80571ec0-cc06-453c-ba47-be6e057356f3" in namespace "projected-640" to be "success or failure" Jan 11 20:27:50.124: INFO: Pod "pod-projected-configmaps-80571ec0-cc06-453c-ba47-be6e057356f3": Phase="Pending", Reason="", readiness=false. Elapsed: 89.44799ms Jan 11 20:27:52.213: INFO: Pod "pod-projected-configmaps-80571ec0-cc06-453c-ba47-be6e057356f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178589497s STEP: Saw pod success Jan 11 20:27:52.213: INFO: Pod "pod-projected-configmaps-80571ec0-cc06-453c-ba47-be6e057356f3" satisfied condition "success or failure" Jan 11 20:27:52.302: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-projected-configmaps-80571ec0-cc06-453c-ba47-be6e057356f3 container projected-configmap-volume-test: STEP: delete the pod Jan 11 20:27:52.490: INFO: Waiting for pod pod-projected-configmaps-80571ec0-cc06-453c-ba47-be6e057356f3 to disappear Jan 11 20:27:52.579: INFO: Pod pod-projected-configmaps-80571ec0-cc06-453c-ba47-be6e057356f3 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:52.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-640" for this suite. 
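
The defaultMode variant above differs from the update case only in the permission bits stamped onto the projected files; the test container prints the mounted file's mode and the framework reads it back from the container logs (the "Trying to get logs from node ..." line). The knob itself looks roughly like this; 0400 is an illustrative value, not necessarily the one the test uses.

```go
package projectedmode

import corev1 "k8s.io/api/core/v1"

// WithDefaultMode projects a whole ConfigMap with restrictive permissions:
// every projected file is created with mode 0400 (-r--------) inside the pod.
func WithDefaultMode(configMapName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				}},
			},
		},
	}
}
```
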
Jan 11 20:27:58.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:02.151: INFO: namespace projected-640 deletion completed in 9.481000864s • [SLOW TEST:13.213 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:26:54.409: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-4203 STEP: Waiting for a default service account to be provisioned in namespace [It] should report attach limit when limit is bigger than 0 [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:386 STEP: deploying csi mock driver Jan 11 20:26:55.244: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4203/csi-attacher Jan 11 20:26:55.334: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4203 Jan 11 20:26:55.334: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4203 Jan 11 20:26:55.424: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4203 Jan 11 20:26:55.514: INFO: creating *v1.Role: csi-mock-volumes-4203/external-attacher-cfg-csi-mock-volumes-4203 Jan 11 20:26:55.604: INFO: creating *v1.RoleBinding: csi-mock-volumes-4203/csi-attacher-role-cfg Jan 11 20:26:55.696: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4203/csi-provisioner Jan 11 20:26:55.789: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4203 Jan 11 20:26:55.789: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4203 Jan 11 20:26:55.879: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4203 Jan 11 20:26:55.969: INFO: creating *v1.Role: csi-mock-volumes-4203/external-provisioner-cfg-csi-mock-volumes-4203 Jan 11 20:26:56.059: INFO: creating *v1.RoleBinding: csi-mock-volumes-4203/csi-provisioner-role-cfg Jan 11 20:26:56.149: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4203/csi-resizer Jan 11 20:26:56.239: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4203 Jan 11 20:26:56.239: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4203 Jan 11 20:26:56.332: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4203 Jan 11 20:26:56.422: INFO: creating *v1.Role: csi-mock-volumes-4203/external-resizer-cfg-csi-mock-volumes-4203 Jan 11 20:26:56.512: INFO: creating *v1.RoleBinding: csi-mock-volumes-4203/csi-resizer-role-cfg Jan 11 20:26:56.605: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4203/csi-mock Jan 11 20:26:56.696: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-attacher-role-csi-mock-volumes-4203 Jan 11 20:26:56.786: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4203 Jan 11 20:26:56.876: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4203 Jan 11 20:26:56.966: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4203 Jan 11 20:26:57.057: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4203 Jan 11 20:26:57.147: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4203 Jan 11 20:26:57.237: INFO: creating *v1.StatefulSet: csi-mock-volumes-4203/csi-mockplugin Jan 11 20:26:57.327: INFO: creating *v1.StatefulSet: csi-mock-volumes-4203/csi-mockplugin-attacher STEP: Creating pod Jan 11 20:27:07.872: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:27:07.964: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-qlc2q] to have phase Bound Jan 11 20:27:08.054: INFO: PersistentVolumeClaim pvc-qlc2q found and phase=Bound (89.938277ms) STEP: Creating pod Jan 11 20:27:26.594: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:27:26.686: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-68z2m] to have phase Bound Jan 11 20:27:26.776: INFO: PersistentVolumeClaim pvc-68z2m found and phase=Bound (89.380767ms) STEP: Creating pod Jan 11 20:27:31.315: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:27:31.406: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-nnw4s] to have phase Bound Jan 11 20:27:31.496: INFO: PersistentVolumeClaim pvc-nnw4s found and phase=Bound (89.971345ms) STEP: Deleting pod pvc-volume-tester-lxbwb Jan 11 20:27:31.858: INFO: Deleting pod "pvc-volume-tester-lxbwb" in namespace "csi-mock-volumes-4203" Jan 11 20:27:31.949: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lxbwb" to be fully deleted STEP: Deleting pod pvc-volume-tester-j5lc6 Jan 11 20:27:44.129: INFO: Deleting pod "pvc-volume-tester-j5lc6" in namespace "csi-mock-volumes-4203" Jan 11 20:27:44.221: INFO: Wait up to 5m0s for pod "pvc-volume-tester-j5lc6" to be fully deleted STEP: Deleting pod pvc-volume-tester-mz59d Jan 11 20:27:46.401: INFO: Deleting pod "pvc-volume-tester-mz59d" in namespace "csi-mock-volumes-4203" Jan 11 20:27:46.492: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mz59d" to be fully deleted STEP: Deleting claim pvc-qlc2q Jan 11 20:27:46.761: INFO: Waiting up to 2m0s for PersistentVolume pvc-ea199538-d5f6-479f-8e9c-6f68c15bdf08 to get deleted Jan 11 20:27:46.851: INFO: PersistentVolume pvc-ea199538-d5f6-479f-8e9c-6f68c15bdf08 was removed STEP: Deleting claim pvc-68z2m Jan 11 20:27:47.032: INFO: Waiting up to 2m0s for PersistentVolume pvc-9c0c7234-a9d0-431d-8331-e8f09cd342e8 to get deleted Jan 11 20:27:47.122: INFO: PersistentVolume pvc-9c0c7234-a9d0-431d-8331-e8f09cd342e8 found and phase=Released (89.685461ms) Jan 11 20:27:49.212: INFO: PersistentVolume pvc-9c0c7234-a9d0-431d-8331-e8f09cd342e8 was removed STEP: Deleting claim pvc-nnw4s Jan 11 20:27:49.393: INFO: Waiting up to 2m0s for PersistentVolume pvc-909897a4-6510-4afd-8558-6555ce268f72 to get deleted Jan 11 20:27:49.483: INFO: PersistentVolume pvc-909897a4-6510-4afd-8558-6555ce268f72 found and phase=Released (89.722273ms) Jan 11 20:27:51.573: INFO: PersistentVolume pvc-909897a4-6510-4afd-8558-6555ce268f72 was removed STEP: Deleting storageclass 
csi-mock-volumes-4203-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 20:27:51.664: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4203/csi-attacher Jan 11 20:27:51.755: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4203 Jan 11 20:27:51.846: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4203 Jan 11 20:27:51.937: INFO: deleting *v1.Role: csi-mock-volumes-4203/external-attacher-cfg-csi-mock-volumes-4203 Jan 11 20:27:52.029: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4203/csi-attacher-role-cfg Jan 11 20:27:52.120: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4203/csi-provisioner Jan 11 20:27:52.211: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4203 Jan 11 20:27:52.302: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4203 Jan 11 20:27:52.393: INFO: deleting *v1.Role: csi-mock-volumes-4203/external-provisioner-cfg-csi-mock-volumes-4203 Jan 11 20:27:52.484: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4203/csi-provisioner-role-cfg Jan 11 20:27:52.576: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4203/csi-resizer Jan 11 20:27:52.667: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4203 Jan 11 20:27:52.758: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4203 Jan 11 20:27:52.850: INFO: deleting *v1.Role: csi-mock-volumes-4203/external-resizer-cfg-csi-mock-volumes-4203 Jan 11 20:27:52.941: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4203/csi-resizer-role-cfg Jan 11 20:27:53.032: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4203/csi-mock Jan 11 20:27:53.125: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4203 Jan 11 20:27:53.216: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4203 Jan 11 20:27:53.307: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4203 Jan 11 20:27:53.398: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4203 Jan 11 20:27:53.489: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4203 Jan 11 20:27:53.580: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4203 Jan 11 20:27:53.672: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4203/csi-mockplugin Jan 11 20:27:53.763: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4203/csi-mockplugin-attacher STEP: removing the label attach-limit-csi-csi-mock-volumes-4203 off the node ip-10-250-27-25.ec2.internal STEP: verifying the node doesn't have the label attach-limit-csi-csi-mock-volumes-4203 [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:54.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-4203" for this suite. 
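Editorial note on the CSI mock-volume case above: the attach-limit check relies on the per-node, per-driver volume limit that a CSI driver publishes on the node's CSINode object. A minimal client-go sketch for reading that value follows; it assumes the storage.k8s.io/v1 CSINode API of current clusters and a recent client-go, and it is not the test's own code.

```go
// Sketch only: print the per-driver volume attach limits reported on each
// CSINode object, which is the value the attach-limit test exercises.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	csiNodes, err := cs.StorageV1().CSINodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range csiNodes.Items {
		for _, d := range n.Spec.Drivers {
			if d.Allocatable != nil && d.Allocatable.Count != nil {
				fmt.Printf("node %s, driver %s: attach limit %d\n", n.Name, d.Name, *d.Allocatable.Count)
			}
		}
	}
}
```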
Jan 11 20:28:00.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:03.958: INFO: namespace csi-mock-volumes-4203 deletion completed in 9.739766833s • [SLOW TEST:69.548 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI volume limit information using mock driver /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:385 should report attach limit when limit is bigger than 0 [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:386 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:03.961: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-753 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:28:04.695: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-cc89c52e-c0cc-4e75-b806-924ff62510f4" in namespace "security-context-test-753" to be "success or failure" Jan 11 20:28:04.785: INFO: Pod "alpine-nnp-false-cc89c52e-c0cc-4e75-b806-924ff62510f4": Phase="Pending", Reason="", readiness=false. Elapsed: 90.007367ms Jan 11 20:28:06.875: INFO: Pod "alpine-nnp-false-cc89c52e-c0cc-4e75-b806-924ff62510f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179936942s Jan 11 20:28:06.875: INFO: Pod "alpine-nnp-false-cc89c52e-c0cc-4e75-b806-924ff62510f4" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:07.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-753" for this suite. 
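Editorial note on the Security Context case above: the pod it runs sets allowPrivilegeEscalation: false on its container and asserts the process cannot gain new privileges. The relevant part of such a pod spec is sketched below with k8s.io/api types; the image, command, and UID are illustrative assumptions.

```go
// Sketch only: a container that explicitly disables privilege escalation,
// matching the shape of the pod the test creates.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	noEscalation := false
	runAsUser := int64(1000)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "alpine-nnp-false",
				Image:   "alpine",
				Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &noEscalation, // the setting the test asserts takes effect
					RunAsUser:                &runAsUser,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```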
Jan 11 20:28:13.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:16.799: INFO: namespace security-context-test-753 deletion completed in 9.599606134s • [SLOW TEST:12.838 seconds] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:02.166: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-867 STEP: Waiting for a default service account to be provisioned in namespace [It] should support non-existent path /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177 Jan 11 20:28:03.055: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:28:03.055: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-97nd STEP: Creating a pod to test subpath Jan 11 20:28:03.147: INFO: Waiting up to 5m0s for pod "pod-subpath-test-emptydir-97nd" in namespace "provisioning-867" to be "success or failure" Jan 11 20:28:03.237: INFO: Pod "pod-subpath-test-emptydir-97nd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.447417ms Jan 11 20:28:05.327: INFO: Pod "pod-subpath-test-emptydir-97nd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179852776s Jan 11 20:28:07.417: INFO: Pod "pod-subpath-test-emptydir-97nd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.269786267s STEP: Saw pod success Jan 11 20:28:07.417: INFO: Pod "pod-subpath-test-emptydir-97nd" satisfied condition "success or failure" Jan 11 20:28:07.506: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-emptydir-97nd container test-container-volume-emptydir-97nd: STEP: delete the pod Jan 11 20:28:07.696: INFO: Waiting for pod pod-subpath-test-emptydir-97nd to disappear Jan 11 20:28:07.785: INFO: Pod pod-subpath-test-emptydir-97nd no longer exists STEP: Deleting pod pod-subpath-test-emptydir-97nd Jan 11 20:28:07.785: INFO: Deleting pod "pod-subpath-test-emptydir-97nd" in namespace "provisioning-867" STEP: Deleting pod Jan 11 20:28:07.875: INFO: Deleting pod "pod-subpath-test-emptydir-97nd" in namespace "provisioning-867" Jan 11 20:28:07.964: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:07.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-867" for this suite. Jan 11 20:28:14.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:17.537: INFO: namespace provisioning-867 deletion completed in 9.48280992s • [SLOW TEST:15.372 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support non-existent path /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:27:50.622: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7189 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 20:27:52.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371271, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371271, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371272, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371271, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 20:27:55.481: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:27:56.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7189" for this suite. Jan 11 20:28:04.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:08.207: INFO: namespace webhook-7189 deletion completed in 11.567914503s STEP: Destroying namespace "webhook-7189-markers" for this suite. 
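Editorial note on the AdmissionWebhook case above: the test registers a validating webhook whose rule covers CREATE of configMaps, updates the rule to drop CREATE, then patches it back, checking each time whether a non-compliant configMap is rejected. The sketch below shows a configuration of that shape using the admissionregistration.k8s.io/v1 types found on current clusters (the 1.16 suite itself still used v1beta1); the webhook name, path, and CA bundle placeholder are illustrative assumptions.

```go
// Sketch only: a validating webhook configuration whose single rule covers
// CREATE of configMaps. Updating or patching the Rules slice to drop or re-add
// admissionregistrationv1.Create is the operation the test exercises.
package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/validate-configmaps"
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone

	cfg := admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-validating-webhook-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-unwanted-configmap-data.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-7189", // namespace used by this run
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("<PEM CA bundle goes here>"),
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```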
Jan 11 20:28:14.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:17.776: INFO: namespace webhook-7189-markers deletion completed in 9.568575638s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:27.513 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:27:44.929: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pvc-protection STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pvc-protection-3841 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:45 Jan 11 20:27:45.747: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Jan 11 20:27:45.926: INFO: Default storage class: "default" Jan 11 20:27:45.926: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Jan 11 20:28:04.377: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectionpnrkt] to have phase Bound Jan 11 20:28:04.466: INFO: PersistentVolumeClaim pvc-protectionpnrkt found and phase=Bound (89.059607ms) STEP: Checking that PVC Protection finalizer is set [It] Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:117 STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod STEP: Checking that the PVC status is Terminating STEP: Creating second Pod whose scheduling fails because it uses a PVC that is being deleted Jan 11 20:28:04.825: INFO: Waiting up to 5m0s for pod "pvc-tester-sl97f" in namespace "pvc-protection-3841" to be "Unschedulable" Jan 11 20:28:04.914: INFO: Pod "pvc-tester-sl97f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 88.970164ms Jan 11 20:28:04.914: INFO: Pod "pvc-tester-sl97f" satisfied condition "Unschedulable" STEP: Deleting the second pod that uses the PVC that is being deleted Jan 11 20:28:05.003: INFO: Deleting pod "pvc-tester-sl97f" in namespace "pvc-protection-3841" Jan 11 20:28:05.095: INFO: Wait up to 5m0s for pod "pvc-tester-sl97f" to be fully deleted STEP: Checking again that the PVC status is Terminating STEP: Deleting the first pod that uses the PVC Jan 11 20:28:05.274: INFO: Deleting pod "pvc-tester-xstf2" in namespace "pvc-protection-3841" Jan 11 20:28:05.366: INFO: Wait up to 5m0s for pod "pvc-tester-xstf2" to be fully deleted STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod Jan 11 20:28:15.544: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionpnrkt to be removed Jan 11 20:28:15.633: INFO: Claim "pvc-protectionpnrkt" in namespace "pvc-protection-3841" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:15.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-3841" for this suite. Jan 11 20:28:21.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:25.290: INFO: namespace pvc-protection-3841 deletion completed in 9.566762406s [AfterEach] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:80 • [SLOW TEST:40.362 seconds] [sig-storage] PVC Protection /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:117 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:18.148: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1601 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 20:28:19.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b24e73d-f86a-472d-ae1b-fa981f29c8f6" in namespace "downward-api-1601" to be "success or failure" Jan 11 20:28:19.430: INFO: Pod 
"downwardapi-volume-5b24e73d-f86a-472d-ae1b-fa981f29c8f6": Phase="Pending", Reason="", readiness=false. Elapsed: 89.860515ms Jan 11 20:28:21.521: INFO: Pod "downwardapi-volume-5b24e73d-f86a-472d-ae1b-fa981f29c8f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179883892s STEP: Saw pod success Jan 11 20:28:21.521: INFO: Pod "downwardapi-volume-5b24e73d-f86a-472d-ae1b-fa981f29c8f6" satisfied condition "success or failure" Jan 11 20:28:21.610: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-5b24e73d-f86a-472d-ae1b-fa981f29c8f6 container client-container: STEP: delete the pod Jan 11 20:28:21.799: INFO: Waiting for pod downwardapi-volume-5b24e73d-f86a-472d-ae1b-fa981f29c8f6 to disappear Jan 11 20:28:21.889: INFO: Pod downwardapi-volume-5b24e73d-f86a-472d-ae1b-fa981f29c8f6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:21.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1601" for this suite. Jan 11 20:28:28.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:31.547: INFO: namespace downward-api-1601 deletion completed in 9.567762684s • [SLOW TEST:13.399 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:25.295: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8040 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:75 STEP: Creating configMap with name projected-configmap-test-volume-2cbb2393-0f4a-4043-9d1c-02c054b96530 STEP: Creating a pod to test consume configMaps Jan 11 20:28:26.134: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf1165c0-1b09-41e4-9081-1c5b89b774b0" in namespace "projected-8040" to be "success or failure" Jan 11 20:28:26.223: INFO: Pod "pod-projected-configmaps-cf1165c0-1b09-41e4-9081-1c5b89b774b0": Phase="Pending", Reason="", readiness=false. Elapsed: 89.106496ms Jan 11 20:28:28.312: INFO: Pod "pod-projected-configmaps-cf1165c0-1b09-41e4-9081-1c5b89b774b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.178710874s STEP: Saw pod success Jan 11 20:28:28.312: INFO: Pod "pod-projected-configmaps-cf1165c0-1b09-41e4-9081-1c5b89b774b0" satisfied condition "success or failure" Jan 11 20:28:28.401: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-configmaps-cf1165c0-1b09-41e4-9081-1c5b89b774b0 container projected-configmap-volume-test: STEP: delete the pod Jan 11 20:28:28.592: INFO: Waiting for pod pod-projected-configmaps-cf1165c0-1b09-41e4-9081-1c5b89b774b0 to disappear Jan 11 20:28:28.681: INFO: Pod pod-projected-configmaps-cf1165c0-1b09-41e4-9081-1c5b89b774b0 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:28.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8040" for this suite. Jan 11 20:28:35.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:38.342: INFO: namespace projected-8040 deletion completed in 9.569858871s • [SLOW TEST:13.047 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:75 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:27:43.154: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-104 STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:449 STEP: deploying csi mock driver Jan 11 20:27:44.140: INFO: creating *v1.ServiceAccount: csi-mock-volumes-104/csi-attacher Jan 11 20:27:44.229: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-104 Jan 11 20:27:44.229: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-104 Jan 11 20:27:44.319: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-104 Jan 11 20:27:44.409: INFO: creating *v1.Role: csi-mock-volumes-104/external-attacher-cfg-csi-mock-volumes-104 Jan 11 20:27:44.499: INFO: creating *v1.RoleBinding: csi-mock-volumes-104/csi-attacher-role-cfg Jan 11 20:27:44.589: INFO: creating *v1.ServiceAccount: csi-mock-volumes-104/csi-provisioner Jan 11 20:27:44.679: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-104 Jan 11 20:27:44.679: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-104 Jan 11 20:27:44.769: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-104 Jan 11 20:27:44.859: INFO: creating 
*v1.Role: csi-mock-volumes-104/external-provisioner-cfg-csi-mock-volumes-104 Jan 11 20:27:44.949: INFO: creating *v1.RoleBinding: csi-mock-volumes-104/csi-provisioner-role-cfg Jan 11 20:27:45.039: INFO: creating *v1.ServiceAccount: csi-mock-volumes-104/csi-resizer Jan 11 20:27:45.129: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-104 Jan 11 20:27:45.129: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-104 Jan 11 20:27:45.220: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-104 Jan 11 20:27:45.310: INFO: creating *v1.Role: csi-mock-volumes-104/external-resizer-cfg-csi-mock-volumes-104 Jan 11 20:27:45.400: INFO: creating *v1.RoleBinding: csi-mock-volumes-104/csi-resizer-role-cfg Jan 11 20:27:45.490: INFO: creating *v1.ServiceAccount: csi-mock-volumes-104/csi-mock Jan 11 20:27:45.580: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-104 Jan 11 20:27:45.670: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-104 Jan 11 20:27:45.760: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-104 Jan 11 20:27:45.850: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-104 Jan 11 20:27:45.940: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-104 Jan 11 20:27:46.030: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-104 Jan 11 20:27:46.120: INFO: creating *v1.StatefulSet: csi-mock-volumes-104/csi-mockplugin Jan 11 20:27:46.210: INFO: creating *v1.StatefulSet: csi-mock-volumes-104/csi-mockplugin-attacher Jan 11 20:27:46.303: INFO: creating *v1.StatefulSet: csi-mock-volumes-104/csi-mockplugin-resizer STEP: Creating pod Jan 11 20:27:46.572: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:27:46.663: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-6zrfb] to have phase Bound Jan 11 20:27:46.753: INFO: PersistentVolumeClaim pvc-6zrfb found but phase is Pending instead of Bound. 
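Editorial note ahead of the "Expanding current pvc" step that follows: the expansion amounts to raising spec.resources.requests.storage on the bound claim and letting the external resizer and the kubelet finish the job (here by restarting the pod, since the mock driver requires node expansion). A minimal client-go sketch of that update is below; it reuses the claim name and namespace from this run, the target size is an illustrative assumption, and it is not the framework's own code.

```go
// Sketch only: grow a bound PVC by updating its requested storage, which is
// the operation behind the "Expanding current pvc" step.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pvcs := cs.CoreV1().PersistentVolumeClaims("csi-mock-volumes-104")
	pvc, err := pvcs.Get(context.TODO(), "pvc-6zrfb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Ask for a larger size; the resizer updates the PVC conditions, and node
	// expansion completes once the pod using the volume is restarted.
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse("6Gi")
	if _, err := pvcs.Update(context.TODO(), pvc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```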
Jan 11 20:27:48.843: INFO: PersistentVolumeClaim pvc-6zrfb found and phase=Bound (2.17946073s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Jan 11 20:28:07.650: INFO: Deleting pod "pvc-volume-tester-2q2v9" in namespace "csi-mock-volumes-104" Jan 11 20:28:07.741: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2q2v9" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-2q2v9 Jan 11 20:28:14.102: INFO: Deleting pod "pvc-volume-tester-2q2v9" in namespace "csi-mock-volumes-104" STEP: Deleting pod pvc-volume-tester-cvh5d Jan 11 20:28:14.192: INFO: Deleting pod "pvc-volume-tester-cvh5d" in namespace "csi-mock-volumes-104" Jan 11 20:28:14.283: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cvh5d" to be fully deleted STEP: Deleting claim pvc-6zrfb Jan 11 20:28:24.643: INFO: Waiting up to 2m0s for PersistentVolume pvc-c141999c-6d4f-42bf-a655-f1762ae82ee0 to get deleted Jan 11 20:28:24.733: INFO: PersistentVolume pvc-c141999c-6d4f-42bf-a655-f1762ae82ee0 found and phase=Released (89.95317ms) Jan 11 20:28:26.825: INFO: PersistentVolume pvc-c141999c-6d4f-42bf-a655-f1762ae82ee0 found and phase=Released (2.181958676s) Jan 11 20:28:28.915: INFO: PersistentVolume pvc-c141999c-6d4f-42bf-a655-f1762ae82ee0 was removed STEP: Deleting storageclass csi-mock-volumes-104-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 20:28:29.007: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-104/csi-attacher Jan 11 20:28:29.098: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-104 Jan 11 20:28:29.191: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-104 Jan 11 20:28:29.282: INFO: deleting *v1.Role: csi-mock-volumes-104/external-attacher-cfg-csi-mock-volumes-104 Jan 11 20:28:29.374: INFO: deleting *v1.RoleBinding: csi-mock-volumes-104/csi-attacher-role-cfg Jan 11 20:28:29.466: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-104/csi-provisioner Jan 11 20:28:29.557: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-104 Jan 11 20:28:29.648: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-104 Jan 11 20:28:29.739: INFO: deleting *v1.Role: csi-mock-volumes-104/external-provisioner-cfg-csi-mock-volumes-104 Jan 11 20:28:29.830: INFO: deleting *v1.RoleBinding: csi-mock-volumes-104/csi-provisioner-role-cfg Jan 11 20:28:29.923: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-104/csi-resizer Jan 11 20:28:30.014: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-104 Jan 11 20:28:30.106: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-104 Jan 11 20:28:30.197: INFO: deleting *v1.Role: csi-mock-volumes-104/external-resizer-cfg-csi-mock-volumes-104 Jan 11 20:28:30.289: INFO: deleting *v1.RoleBinding: csi-mock-volumes-104/csi-resizer-role-cfg Jan 11 20:28:30.381: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-104/csi-mock Jan 11 20:28:30.472: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-104 Jan 11 20:28:30.564: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-104 Jan 11 20:28:30.655: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-104 Jan 11 20:28:30.746: INFO: deleting *v1.ClusterRoleBinding: 
psp-csi-controller-driver-registrar-role-csi-mock-volumes-104 Jan 11 20:28:30.838: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-104 Jan 11 20:28:30.929: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-104 Jan 11 20:28:31.021: INFO: deleting *v1.StatefulSet: csi-mock-volumes-104/csi-mockplugin Jan 11 20:28:31.113: INFO: deleting *v1.StatefulSet: csi-mock-volumes-104/csi-mockplugin-attacher Jan 11 20:28:31.204: INFO: deleting *v1.StatefulSet: csi-mock-volumes-104/csi-mockplugin-resizer [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:31.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-104" for this suite. Jan 11 20:28:37.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:40.975: INFO: namespace csi-mock-volumes-104 deletion completed in 9.587596513s • [SLOW TEST:57.822 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:420 should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:449 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:31.571: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6152 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating secret secrets-6152/secret-test-6eefe8f8-e0c9-4195-98db-c25fcfc0147b STEP: Creating a pod to test consume secrets Jan 11 20:28:32.429: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4ec150b-6473-4175-946f-78179babe432" in namespace "secrets-6152" to be "success or failure" Jan 11 20:28:32.518: INFO: Pod "pod-configmaps-b4ec150b-6473-4175-946f-78179babe432": Phase="Pending", Reason="", readiness=false. Elapsed: 88.978425ms Jan 11 20:28:34.608: INFO: Pod "pod-configmaps-b4ec150b-6473-4175-946f-78179babe432": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179098849s STEP: Saw pod success Jan 11 20:28:34.608: INFO: Pod "pod-configmaps-b4ec150b-6473-4175-946f-78179babe432" satisfied condition "success or failure" Jan 11 20:28:34.697: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-b4ec150b-6473-4175-946f-78179babe432 container env-test: STEP: delete the pod Jan 11 20:28:34.887: INFO: Waiting for pod pod-configmaps-b4ec150b-6473-4175-946f-78179babe432 to disappear Jan 11 20:28:34.981: INFO: Pod pod-configmaps-b4ec150b-6473-4175-946f-78179babe432 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:34.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6152" for this suite. Jan 11 20:28:41.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:44.646: INFO: namespace secrets-6152 deletion completed in 9.574168878s • [SLOW TEST:13.075 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:16.808: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename subpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-3504 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating pod pod-subpath-test-projected-kx4n STEP: Creating a pod to test atomic-volume-subpath Jan 11 20:28:17.724: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kx4n" in namespace "subpath-3504" to be "success or failure" Jan 11 20:28:17.813: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Pending", Reason="", readiness=false. Elapsed: 89.438149ms Jan 11 20:28:19.904: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. Elapsed: 2.180015057s Jan 11 20:28:21.994: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. Elapsed: 4.270001689s Jan 11 20:28:24.084: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. Elapsed: 6.360493677s Jan 11 20:28:26.175: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.450730162s Jan 11 20:28:28.265: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. Elapsed: 10.541478012s Jan 11 20:28:30.357: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. Elapsed: 12.633260902s Jan 11 20:28:32.447: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. Elapsed: 14.72327847s Jan 11 20:28:34.537: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. Elapsed: 16.813576977s Jan 11 20:28:36.628: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. Elapsed: 18.90402361s Jan 11 20:28:38.718: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Running", Reason="", readiness=true. Elapsed: 20.994183988s Jan 11 20:28:40.808: INFO: Pod "pod-subpath-test-projected-kx4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.08443902s STEP: Saw pod success Jan 11 20:28:40.808: INFO: Pod "pod-subpath-test-projected-kx4n" satisfied condition "success or failure" Jan 11 20:28:40.899: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-subpath-test-projected-kx4n container test-container-subpath-projected-kx4n: STEP: delete the pod Jan 11 20:28:41.090: INFO: Waiting for pod pod-subpath-test-projected-kx4n to disappear Jan 11 20:28:41.179: INFO: Pod pod-subpath-test-projected-kx4n no longer exists STEP: Deleting pod pod-subpath-test-projected-kx4n Jan 11 20:28:41.179: INFO: Deleting pod "pod-subpath-test-projected-kx4n" in namespace "subpath-3504" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:41.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3504" for this suite. 
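Editorial note on the Subpath atomic-writer case above: the pod mounts a projected volume into its container both directly and through subPath, and the test verifies the subPath view stays consistent while the pod runs. A sketch of that mount pattern follows; the configMap name, key, paths, and image are illustrative assumptions.

```go
// Sketch only: mount one projected volume both whole and via subPath, the
// pattern the atomic-writer subPath test checks.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /test-subpath; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "test-volume", MountPath: "/test-volume"},
					// The same volume again, but a single projected key via subPath:
					{Name: "test-volume", MountPath: "/test-subpath", SubPath: "configmap-key"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```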
Jan 11 20:28:47.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:50.969: INFO: namespace subpath-3504 deletion completed in 9.608445951s • [SLOW TEST:34.161 seconds] [sig-storage] Subpath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:17.539: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename init-container STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-9123 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating the pod Jan 11 20:28:18.250: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:21.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9123" for this suite. 
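Editorial note on the InitContainer case above: the test creates a RestartAlways pod whose init containers must all run to completion before the regular container starts. A sketch of that pod shape follows; images and commands are illustrative assumptions.

```go
// Sketch only: a RestartAlways pod with two init containers that must both
// succeed before the main container starts, as in the test above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sh", "-c", "sleep 3600"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```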
Jan 11 20:28:50.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:53.474: INFO: namespace init-container-9123 deletion completed in 31.539003371s • [SLOW TEST:35.935 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:40.988: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-855 STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir volume type on node default medium Jan 11 20:28:41.724: INFO: Waiting up to 5m0s for pod "pod-00452444-99a4-4742-a2d6-605c9bdefe95" in namespace "emptydir-855" to be "success or failure" Jan 11 20:28:41.814: INFO: Pod "pod-00452444-99a4-4742-a2d6-605c9bdefe95": Phase="Pending", Reason="", readiness=false. Elapsed: 89.956393ms Jan 11 20:28:43.904: INFO: Pod "pod-00452444-99a4-4742-a2d6-605c9bdefe95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180244844s STEP: Saw pod success Jan 11 20:28:43.904: INFO: Pod "pod-00452444-99a4-4742-a2d6-605c9bdefe95" satisfied condition "success or failure" Jan 11 20:28:43.994: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-00452444-99a4-4742-a2d6-605c9bdefe95 container test-container: STEP: delete the pod Jan 11 20:28:44.186: INFO: Waiting for pod pod-00452444-99a4-4742-a2d6-605c9bdefe95 to disappear Jan 11 20:28:44.276: INFO: Pod pod-00452444-99a4-4742-a2d6-605c9bdefe95 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:44.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-855" for this suite. 
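Editorial note on the EmptyDir case above: it checks the mount type and mode of an emptyDir volume on the default medium (node-local storage) as opposed to medium: Memory (tmpfs). The two variants differ only in the Medium field, as the sketch below shows; the volume names are illustrative.

```go
// Sketch only: the two emptyDir variants the volume tests distinguish —
// default medium (backed by node storage) versus Memory (tmpfs).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	onDisk := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
	inMemory := corev1.Volume{
		Name: "test-volume-mem",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	for _, v := range []corev1.Volume{onDisk, inMemory} {
		out, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(out))
	}
}
```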
Jan 11 20:28:50.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:53.953: INFO: namespace emptydir-855 deletion completed in 9.585977917s • [SLOW TEST:12.965 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:27:10.849: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-1360 STEP: Waiting for a default service account to be provisioned in namespace [It] should support restarting containers using file as subpath [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:318 Jan 11 20:27:11.490: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:27:11.490: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-svsj STEP: Failing liveness probe Jan 11 20:27:15.762: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-1360 pod-subpath-test-emptydir-svsj --container test-container-volume-emptydir-svsj -- /bin/sh -c rm /probe-volume/probe-file' Jan 11 20:27:17.091: INFO: stderr: "" Jan 11 20:27:17.091: INFO: stdout: "" Jan 11 20:27:17.091: INFO: Pod exec output: STEP: Waiting for container to restart Jan 11 20:27:17.181: INFO: Container test-container-subpath-emptydir-svsj, restarts: 0 Jan 11 20:27:27.272: INFO: Container test-container-subpath-emptydir-svsj, restarts: 2 Jan 11 20:27:27.272: INFO: Container has restart count: 2 STEP: Rewriting the file Jan 11 20:27:27.272: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-1360 pod-subpath-test-emptydir-svsj --container test-container-volume-emptydir-svsj -- /bin/sh -c echo test-after > /probe-volume/probe-file' Jan 11 20:27:28.569: INFO: stderr: "" Jan 11 20:27:28.569: INFO: stdout: "" Jan 11 20:27:28.569: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jan 11 20:27:42.750: INFO: Container has restart count: 3 Jan 11 20:28:44.751: INFO: Container restart has stabilized Jan 11 20:28:44.751: INFO: Deleting pod "pod-subpath-test-emptydir-svsj" in namespace "provisioning-1360" 
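Editorial note on the restart-tracking subPath case above: the restarts are driven by an exec liveness probe on a file inside the subPath volume; deleting /probe-volume/probe-file makes the probe fail, and rewriting it lets the container stabilize. The sketch below shows such a probe using current k8s.io/api types (older releases named the embedded probe handler field differently); thresholds, image, and mount names are illustrative assumptions.

```go
// Sketch only: an exec liveness probe that fails while /probe-volume/probe-file
// is missing, which is how the test above forces and then stops restarts.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"cat", "/probe-volume/probe-file"}},
		},
		InitialDelaySeconds: 1,
		PeriodSeconds:       2,
		FailureThreshold:    1,
	}
	container := corev1.Container{
		Name:          "test-container-subpath",
		Image:         "busybox",
		Command:       []string{"sh", "-c", "sleep 3600"},
		LivenessProbe: probe,
		VolumeMounts: []corev1.VolumeMount{
			{Name: "probe-volume", MountPath: "/probe-volume", SubPath: "probe"},
		},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}
```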
Jan 11 20:28:44.843: INFO: Wait up to 5m0s for pod "pod-subpath-test-emptydir-svsj" to be fully deleted STEP: Deleting pod Jan 11 20:28:49.022: INFO: Deleting pod "pod-subpath-test-emptydir-svsj" in namespace "provisioning-1360" Jan 11 20:28:49.112: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:49.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-1360" for this suite. Jan 11 20:28:55.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:58.807: INFO: namespace provisioning-1360 deletion completed in 9.603684455s • [SLOW TEST:107.958 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support restarting containers using file as subpath [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:318 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:38.351: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename certificates STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in certificates-8516 STEP: Waiting for a default service account to be provisioned in namespace [It] should support building a client with a CSR /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:39 Jan 11 20:28:39.227: INFO: creating CSR Jan 11 20:28:39.317: INFO: approving CSR Jan 11 20:28:44.407: INFO: waiting for CSR to be signed Jan 11 20:28:49.497: INFO: testing the client Jan 11 20:28:49.497: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config [AfterEach] [sig-auth] Certificates API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:49.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-8516" for this suite. 
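The Certificates API case above walks through create, approve, wait for signing, and finally building a client from the issued certificate. The following is a hedged Go sketch of the first two steps against the certificates.k8s.io/v1beta1 API that this v1.16.4 cluster serves; key generation, the PEM-encoded x509 request, and the final client construction are left out, names are illustrative, and pre-1.18 client-go signatures (no context argument) are assumed.

```go
// Sketch of the "creating CSR" / "approving CSR" steps logged above.
package main

import (
	certv1beta1 "k8s.io/api/certificates/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func requestAndApproveCert(cs kubernetes.Interface, csrPEM []byte) (*certv1beta1.CertificateSigningRequest, error) {
	csr := &certv1beta1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "e2e-example-csr-"}, // illustrative name
		Spec: certv1beta1.CertificateSigningRequestSpec{
			Request: csrPEM, // PEM-encoded x509 certificate request (generation elided)
			Usages: []certv1beta1.KeyUsage{
				certv1beta1.UsageDigitalSignature,
				certv1beta1.UsageKeyEncipherment,
				certv1beta1.UsageClientAuth,
			},
		},
	}
	created, err := cs.CertificatesV1beta1().CertificateSigningRequests().Create(csr)
	if err != nil {
		return nil, err
	}
	// Approve it, as the test does; normally an administrator or a controller
	// adds this condition. Once signed, the certificate appears in
	// Status.Certificate and can be used to build a new client.
	created.Status.Conditions = append(created.Status.Conditions, certv1beta1.CertificateSigningRequestCondition{
		Type:    certv1beta1.CertificateApproved,
		Reason:  "E2EExample",
		Message: "approved for illustration",
	})
	return cs.CertificatesV1beta1().CertificateSigningRequests().UpdateApproval(created)
}
```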
Jan 11 20:28:56.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:28:59.516: INFO: namespace certificates-8516 deletion completed in 9.562235267s • [SLOW TEST:21.166 seconds] [sig-auth] Certificates API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support building a client with a CSR /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:39 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:50.980: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename discovery STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in discovery-1667 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:35 STEP: Setting up server cert [It] Custom resource should have storage version hash /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:44 Jan 11 20:28:52.392: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:52.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-1667" for this suite. 
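The Discovery case above asserts that a custom resource is advertised with a storage version hash in the discovery documents. Below is a sketch of the same check from the client side, assuming the StorageVersionHash field (an alpha field in this release) is populated by the API server; handling of partial discovery failures is omitted and the output is purely illustrative.

```go
// Sketch: list preferred resources and print their storage version hashes,
// which is the property "Custom resource should have storage version hash" checks.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func printStorageVersionHashes(cs kubernetes.Interface) error {
	lists, err := cs.Discovery().ServerPreferredResources()
	if err != nil {
		return err
	}
	for _, list := range lists {
		for _, r := range list.APIResources {
			// For a served custom resource the hash is expected to be non-empty.
			fmt.Printf("%s %s storageVersionHash=%q\n", list.GroupVersion, r.Name, r.StorageVersionHash)
		}
	}
	return nil
}
```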
Jan 11 20:28:59.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:02.788: INFO: namespace discovery-1667 deletion completed in 9.763388603s • [SLOW TEST:11.808 seconds] [sig-api-machinery] Discovery /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Custom resource should have storage version hash /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:44 ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:44.661: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-9564 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 11 20:28:48.427: INFO: Successfully updated pod "pod-update-2b9cb251-cfb3-4f33-8b4a-3148ff4b0d89" STEP: verifying the updated pod is in kubernetes Jan 11 20:28:48.605: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:48.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9564" for this suite. 
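The Pods "should be updated" flow above is a plain read-modify-write of the pod object ("updating the pod" followed by "Pod update OK"). A minimal sketch follows, assuming pre-1.18 client-go signatures and illustrative names; retry-on-conflict handling is omitted for brevity.

```go
// Sketch of the update step: fetch the pod, mutate a label, write it back.
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func relabelPod(cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated" // the e2e test changes a label in much this way
	_, err = cs.CoreV1().Pods(ns).Update(pod)
	return err
}
```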
Jan 11 20:29:00.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:04.395: INFO: namespace pods-9564 deletion completed in 15.70041033s • [SLOW TEST:19.734 seconds] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:53.976: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5551 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 11 20:28:54.941: INFO: Waiting up to 5m0s for pod "pod-feed0463-c134-4fb7-b22d-46634325a8e2" in namespace "emptydir-5551" to be "success or failure" Jan 11 20:28:55.031: INFO: Pod "pod-feed0463-c134-4fb7-b22d-46634325a8e2": Phase="Pending", Reason="", readiness=false. Elapsed: 89.58659ms Jan 11 20:28:57.121: INFO: Pod "pod-feed0463-c134-4fb7-b22d-46634325a8e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179597001s STEP: Saw pod success Jan 11 20:28:57.121: INFO: Pod "pod-feed0463-c134-4fb7-b22d-46634325a8e2" satisfied condition "success or failure" Jan 11 20:28:57.211: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-feed0463-c134-4fb7-b22d-46634325a8e2 container test-container: STEP: delete the pod Jan 11 20:28:57.400: INFO: Waiting for pod pod-feed0463-c134-4fb7-b22d-46634325a8e2 to disappear Jan 11 20:28:57.490: INFO: Pod pod-feed0463-c134-4fb7-b22d-46634325a8e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:57.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5551" for this suite. 
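The (root,0666,default) case above boils down to a pod with an emptyDir volume on the node's default medium whose container writes a file and reports its mode, which is what the "success or failure" check inspects. A hedged sketch follows, with a busybox stand-in for the suite's mounttest image and illustrative names.

```go
// Sketch of an emptyDir pod comparable to the one created above: default
// medium, a file written with mode 0666, and the mode echoed to the logs.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createEmptyDirModePod(cs kubernetes.Interface, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "emptydir-mode-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Medium left empty means the "default" medium (node filesystem).
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"sh", "-c",
					"touch /test-volume/new-file && chmod 0666 /test-volume/new-file && stat -c %a /test-volume/new-file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(pod)
}
```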
Jan 11 20:29:03.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:07.180: INFO: namespace emptydir-5551 deletion completed in 9.598037188s • [SLOW TEST:13.204 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:53.479: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename security-context-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-3835 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 [It] should not run without a specified user ID /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:153 [AfterEach] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:28:56.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3835" for this suite. 
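The Security Context case above ("should not run without a specified user ID") relies on the kubelet refusing to start a container that sets runAsNonRoot but supplies no user ID while the image itself runs as root. Below is a sketch of such a pod, with an illustrative image and names and pre-1.18 client-go signatures assumed.

```go
// Sketch: runAsNonRoot is requested but no runAsUser is given and the image
// runs as root, so the container is never started (it stays in a config error
// state rather than running), which is the behavior the test asserts.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createRunAsNonRootPod(cs kubernetes.Interface, ns string) (*corev1.Pod, error) {
	runAsNonRoot := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "nonroot-no-uid-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox:1.29", // defaults to root, so the check must fail
				Command: []string{"sh", "-c", "id"},
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: &runAsNonRoot,
					// RunAsUser deliberately unset: the kubelet cannot prove the
					// image is non-root, which is exactly the point of the test.
				},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(pod)
}
```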
Jan 11 20:29:08.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:11.898: INFO: namespace security-context-test-3835 deletion completed in 15.503184094s • [SLOW TEST:18.419 seconds] [k8s.io] Security Context /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 When creating a container with runAsNonRoot /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:98 should not run without a specified user ID /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:153 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:04.403: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename disruption STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-9243 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52 [It] should create a PodDisruptionBudget /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:57 STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:05.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9243" for this suite. 
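The DisruptionController case above only needs a PodDisruptionBudget to be created and picked up by the controller; the "Waiting for the pdb to be processed" step corresponds to polling the PDB status after the create. A minimal sketch using the policy/v1beta1 API this cluster serves, with illustrative names and selector.

```go
// Sketch of "should create a PodDisruptionBudget": minAvailable=1 for pods
// matching a label selector.
package main

import (
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

func createPDB(cs kubernetes.Interface, ns string) (*policyv1beta1.PodDisruptionBudget, error) {
	minAvailable := intstr.FromInt(1)
	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "foo"}, // illustrative name
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
			MinAvailable: &minAvailable,
		},
	}
	return cs.PolicyV1beta1().PodDisruptionBudgets(ns).Create(pdb)
}
```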
Jan 11 20:29:11.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:14.900: INFO: namespace disruption-9243 deletion completed in 9.575809111s • [SLOW TEST:10.497 seconds] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should create a PodDisruptionBudget /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:57 ------------------------------ SS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:59.550: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename disruption STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-3115 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52 [It] evictions: maxUnavailable deny evictions, integer => should not allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 STEP: Waiting for the pdb to be processed STEP: locating a running pod [AfterEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:02.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-3115" for this suite. 
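Its companion case above checks that the Eviction subresource is refused while a PDB with an integer maxUnavailable would be violated. A sketch of that call follows (pre-1.18 client-go signatures, illustrative names); the denial typically surfaces as a TooManyRequests API error stating the eviction would violate the pod's disruption budget, and the pod is left running.

```go
// Sketch: request an eviction via the policy/v1beta1 Eviction subresource.
package main

import (
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func tryEvict(cs kubernetes.Interface, ns, podName string) error {
	eviction := &policyv1beta1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: ns},
	}
	// Returns an error while the disruption budget blocks the eviction.
	return cs.PolicyV1beta1().Evictions(ns).Evict(eviction)
}
```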
Jan 11 20:29:15.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:18.390: INFO: namespace disruption-3115 deletion completed in 15.571493729s • [SLOW TEST:18.840 seconds] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: maxUnavailable deny evictions, integer => should not allow an eviction /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:149 ------------------------------ SS ------------------------------ [BeforeEach] [sig-auth] PodSecurityPolicy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:07.183: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename podsecuritypolicy STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-auth] PodSecurityPolicy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:56 STEP: Creating a kubernetes client that impersonates the default service account Jan 11 20:29:07.542: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Binding the edit role to the default SA [It] should forbid pod creation when no PSP is available /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:81 STEP: Running a restricted pod [AfterEach] [sig-auth] PodSecurityPolicy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:07.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podsecuritypolicy-2949" for this suite. 
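The PodSecurityPolicy case above builds a second client that impersonates the namespace's default service account ("Creating a kubernetes client that impersonates the default service account") and then tries to create a pod, which the PSP admission plugin rejects when no policy is usable by that account. Below is a hedged sketch of that setup; the kubeconfig path, names, and image are illustrative, and the role binding step from the log is not shown.

```go
// Sketch: impersonate the default ServiceAccount and attempt a pod create,
// expecting a Forbidden error ("unable to validate against any pod security
// policy") when no PSP is available to that account.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func createAsDefaultSA(kubeconfig, ns string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cfg.Impersonate = rest.ImpersonationConfig{
		UserName: fmt.Sprintf("system:serviceaccount:%s:default", ns),
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "restricted-"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "pause", Image: "k8s.gcr.io/pause:3.1",
		}}},
	}
	_, err = cs.CoreV1().Pods(ns).Create(pod)
	return err // expected to fail while no PSP admits the impersonated account
}
```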
Jan 11 20:29:16.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:19.419: INFO: namespace podsecuritypolicy-2949 deletion completed in 11.601484186s • [SLOW TEST:12.237 seconds] [sig-auth] PodSecurityPolicy /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should forbid pod creation when no PSP is available /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/pod_security_policy.go:81 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:11.905: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-1575 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-a9e48f08-6b1f-4cf5-983e-0dba7746576b STEP: Creating a pod to test consume secrets Jan 11 20:29:12.749: INFO: Waiting up to 5m0s for pod "pod-secrets-90ae0426-7cb0-438f-a70f-2972138b7c68" in namespace "secrets-1575" to be "success or failure" Jan 11 20:29:12.838: INFO: Pod "pod-secrets-90ae0426-7cb0-438f-a70f-2972138b7c68": Phase="Pending", Reason="", readiness=false. Elapsed: 89.366734ms Jan 11 20:29:14.930: INFO: Pod "pod-secrets-90ae0426-7cb0-438f-a70f-2972138b7c68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.180978996s STEP: Saw pod success Jan 11 20:29:14.930: INFO: Pod "pod-secrets-90ae0426-7cb0-438f-a70f-2972138b7c68" satisfied condition "success or failure" Jan 11 20:29:15.019: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-secrets-90ae0426-7cb0-438f-a70f-2972138b7c68 container secret-volume-test: STEP: delete the pod Jan 11 20:29:15.208: INFO: Waiting for pod pod-secrets-90ae0426-7cb0-438f-a70f-2972138b7c68 to disappear Jan 11 20:29:15.300: INFO: Pod pod-secrets-90ae0426-7cb0-438f-a70f-2972138b7c68 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:15.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1575" for this suite. 
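The Secrets case above creates a secret and a pod whose secret-volume-test container reads it back from a mounted volume before the "success or failure" check. A hedged sketch follows; the names, image, command, and mount path are illustrative stand-ins, and pre-1.18 client-go signatures are assumed.

```go
// Sketch of "should be consumable from pods in volume": create a Secret, then
// a pod that mounts it read-only and prints the key's value.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createSecretAndConsumer(cs kubernetes.Interface, ns string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(secret); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(pod)
	return err
}
```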
Jan 11 20:29:21.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:24.915: INFO: namespace secrets-1575 deletion completed in 9.524688219s • [SLOW TEST:13.010 seconds] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:02.790: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-3355 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 20:29:05.914: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3355 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-8a2c438f-3afe-4f61-b1cb-a951e2b26051-backend && ln -s /tmp/local-volume-test-8a2c438f-3afe-4f61-b1cb-a951e2b26051-backend /tmp/local-volume-test-8a2c438f-3afe-4f61-b1cb-a951e2b26051' Jan 11 20:29:07.216: INFO: stderr: "" Jan 11 20:29:07.216: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:29:07.216: INFO: Creating a PV followed by a PVC Jan 11 20:29:07.397: INFO: Waiting for PV local-pvvk877 to bind to PVC pvc-qlt9m Jan 11 20:29:07.397: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qlt9m] to have phase Bound Jan 11 20:29:07.487: INFO: PersistentVolumeClaim pvc-qlt9m found and phase=Bound (89.639488ms) Jan 11 20:29:07.487: INFO: Waiting up to 3m0s for PersistentVolume local-pvvk877 to have phase Bound Jan 11 20:29:07.577: INFO: PersistentVolume local-pvvk877 found and phase=Bound (89.890885ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jan 11 20:29:10.209: INFO: pod "security-context-b93ccc35-1ce7-403a-95f2-b978da9358b7" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 20:29:10.209: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3355 security-context-b93ccc35-1ce7-403a-95f2-b978da9358b7 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 20:29:11.565: INFO: stderr: "" Jan 11 20:29:11.565: INFO: stdout: "" Jan 11 20:29:11.565: INFO: podRWCmdExec out: "" err: Jan 11 20:29:11.565: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3355 security-context-b93ccc35-1ce7-403a-95f2-b978da9358b7 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:29:12.861: INFO: stderr: "" Jan 11 20:29:12.861: INFO: stdout: "test-file-content\n" Jan 11 20:29:12.861: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jan 11 20:29:15.313: INFO: pod "security-context-98cb4f06-1f2b-4598-8655-70deda13d5ea" created on Node "ip-10-250-27-25.ec2.internal" Jan 11 20:29:15.313: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3355 security-context-98cb4f06-1f2b-4598-8655-70deda13d5ea -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:29:16.845: INFO: stderr: "" Jan 11 20:29:16.845: INFO: stdout: "test-file-content\n" Jan 11 20:29:16.845: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod2 Jan 11 20:29:16.846: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3355 security-context-98cb4f06-1f2b-4598-8655-70deda13d5ea -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-8a2c438f-3afe-4f61-b1cb-a951e2b26051 > /mnt/volume1/test-file' Jan 11 20:29:18.204: INFO: stderr: "" Jan 11 20:29:18.204: INFO: stdout: "" Jan 11 20:29:18.204: INFO: podRWCmdExec out: "" err: STEP: Reading in pod1 Jan 11 20:29:18.204: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3355 security-context-b93ccc35-1ce7-403a-95f2-b978da9358b7 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:29:19.563: INFO: stderr: "" Jan 11 20:29:19.564: INFO: stdout: "/tmp/local-volume-test-8a2c438f-3afe-4f61-b1cb-a951e2b26051\n" Jan 11 20:29:19.564: INFO: podRWCmdExec out: "/tmp/local-volume-test-8a2c438f-3afe-4f61-b1cb-a951e2b26051\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-b93ccc35-1ce7-403a-95f2-b978da9358b7 in namespace persistent-local-volumes-test-3355 STEP: Deleting pod2 STEP: Deleting pod security-context-98cb4f06-1f2b-4598-8655-70deda13d5ea in namespace persistent-local-volumes-test-3355 [AfterEach] [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:29:19.745: INFO: Deleting PersistentVolumeClaim "pvc-qlt9m" Jan 11 20:29:19.836: INFO: Deleting PersistentVolume "local-pvvk877" STEP: Removing the test directory Jan 11 20:29:19.927: INFO: Running 
'/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3355 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8a2c438f-3afe-4f61-b1cb-a951e2b26051 && rm -r /tmp/local-volume-test-8a2c438f-3afe-4f61-b1cb-a951e2b26051-backend' Jan 11 20:29:21.258: INFO: stderr: "" Jan 11 20:29:21.313: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:21.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3355" for this suite. Jan 11 20:29:29.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:33.269: INFO: namespace persistent-local-volumes-test-3355 deletion completed in 11.770787449s • [SLOW TEST:30.479 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:18.395: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9583 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should check if cluster-info dump succeeds /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:994 STEP: running cluster-info dump Jan 11 20:29:19.052: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config cluster-info dump' Jan 11 20:29:24.932: INFO: stderr: "" Jan 11 20:29:24.936: INFO: stdout: "{\n \"kind\": \"NodeList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/nodes\",\n \"resourceVersion\": \"83444\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"ip-10-250-27-25.ec2.internal\",\n \"selfLink\": 
\"/api/v1/nodes/ip-10-250-27-25.ec2.internal\",\n \"uid\": \"af7f64f3-a5de-4df3-9e07-f69e835ab580\",\n \"resourceVersion\": \"83330\",\n \"creationTimestamp\": \"2020-01-11T15:56:03Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/instance-type\": \"m5.large\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"failure-domain.beta.kubernetes.io/region\": \"us-east-1\",\n \"failure-domain.beta.kubernetes.io/zone\": \"us-east-1c\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"ip-10-250-27-25.ec2.internal\",\n \"kubernetes.io/os\": \"linux\",\n \"node.kubernetes.io/role\": \"node\",\n \"worker.garden.sapcloud.io/group\": \"worker-1\",\n \"worker.gardener.cloud/pool\": \"worker-1\"\n },\n \"annotations\": {\n \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-1641\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-hostpath-ephemeral-3918\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-hostpath-provisioning-1550\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-hostpath-provisioning-5271\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-hostpath-provisioning-5738\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-hostpath-provisioning-6240\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-hostpath-provisioning-8445\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-hostpath-volume-expand-7991\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-hostpath-volume-expand-8205\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-hostpath-volumemode-2239\\\":\\\"ip-10-250-27-25.ec2.internal\\\",\\\"csi-mock-csi-mock-volumes-104\\\":\\\"csi-mock-csi-mock-volumes-104\\\",\\\"csi-mock-csi-mock-volumes-1062\\\":\\\"csi-mock-csi-mock-volumes-1062\\\",\\\"csi-mock-csi-mock-volumes-2239\\\":\\\"csi-mock-csi-mock-volumes-2239\\\",\\\"csi-mock-csi-mock-volumes-3620\\\":\\\"csi-mock-csi-mock-volumes-3620\\\",\\\"csi-mock-csi-mock-volumes-4203\\\":\\\"csi-mock-csi-mock-volumes-4203\\\",\\\"csi-mock-csi-mock-volumes-4249\\\":\\\"csi-mock-csi-mock-volumes-4249\\\",\\\"csi-mock-csi-mock-volumes-6381\\\":\\\"csi-mock-csi-mock-volumes-6381\\\",\\\"csi-mock-csi-mock-volumes-7446\\\":\\\"csi-mock-csi-mock-volumes-7446\\\",\\\"csi-mock-csi-mock-volumes-795\\\":\\\"csi-mock-csi-mock-volumes-795\\\",\\\"csi-mock-csi-mock-volumes-8830\\\":\\\"csi-mock-csi-mock-volumes-8830\\\"}\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"projectcalico.org/IPv4Address\": \"10.250.27.25/19\",\n \"projectcalico.org/IPv4IPIPTunnelAddr\": \"100.64.1.1\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n }\n },\n \"spec\": {\n \"podCIDR\": \"100.64.1.0/24\",\n \"podCIDRs\": [\n \"100.64.1.0/24\"\n ],\n \"providerID\": \"aws:///us-east-1c/i-0a8c404292a3c92e9\"\n },\n \"status\": {\n \"capacity\": {\n \"attachable-volumes-aws-ebs\": \"25\",\n \"cpu\": \"2\",\n \"ephemeral-storage\": \"28056816Ki\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"7865496Ki\",\n \"pods\": \"110\"\n },\n \"allocatable\": {\n \"attachable-volumes-aws-ebs\": \"25\",\n \"cpu\": \"1920m\",\n \"ephemeral-storage\": \"27293670584\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"6577812679\",\n \"pods\": \"110\"\n },\n \"conditions\": [\n {\n \"type\": \"FrequentContainerdRestart\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:42Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:58Z\",\n \"reason\": \"NoFrequentContainerdRestart\",\n \"message\": \"containerd is functioning properly\"\n },\n {\n \"type\": 
\"CorruptDockerOverlay2\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:42Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:58Z\",\n \"reason\": \"NoCorruptDockerOverlay2\",\n \"message\": \"docker overlay2 is functioning properly\"\n },\n {\n \"type\": \"KernelDeadlock\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:42Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:58Z\",\n \"reason\": \"KernelHasNoDeadlock\",\n \"message\": \"kernel has no deadlock\"\n },\n {\n \"type\": \"ReadonlyFilesystem\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:42Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:58Z\",\n \"reason\": \"FilesystemIsNotReadOnly\",\n \"message\": \"Filesystem is not read-only\"\n },\n {\n \"type\": \"FrequentUnregisterNetDevice\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:42Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:58Z\",\n \"reason\": \"NoFrequentUnregisterNetDevice\",\n \"message\": \"node is functioning properly\"\n },\n {\n \"type\": \"FrequentKubeletRestart\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:42Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:58Z\",\n \"reason\": \"NoFrequentKubeletRestart\",\n \"message\": \"kubelet is functioning properly\"\n },\n {\n \"type\": \"FrequentDockerRestart\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:42Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:58Z\",\n \"reason\": \"NoFrequentDockerRestart\",\n \"message\": \"docker is functioning properly\"\n },\n {\n \"type\": \"NetworkUnavailable\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T15:56:18Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:18Z\",\n \"reason\": \"CalicoIsUp\",\n \"message\": \"Calico is running on this node\"\n },\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:29:17Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:03Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:29:17Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:03Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:29:17Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:03Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2020-01-11T20:29:17Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:13Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"10.250.27.25\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"ip-10-250-27-25.ec2.internal\"\n },\n {\n \"type\": \"InternalDNS\",\n \"address\": \"ip-10-250-27-25.ec2.internal\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"ec280dba3c1837e27848a3dec8c080a9\",\n \"systemUUID\": \"ec280dba-3c18-37e2-7848-a3dec8c080a9\",\n \"bootID\": \"89e42b89-b944-47ea-8bf6-5f2fe6d80c97\",\n \"kernelVersion\": \"4.19.86-coreos\",\n \"osImage\": \"Container Linux by CoreOS 2303.3.0 (Rhyolite)\",\n 
\"containerRuntimeVersion\": \"docker://18.6.3\",\n \"kubeletVersion\": \"v1.16.4\",\n \"kubeProxyVersion\": \"v1.16.4\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102\",\n \"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4\"\n ],\n \"sizeBytes\": 601224435\n },\n {\n \"names\": [\n \"quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8\",\n \"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2\"\n ],\n \"sizeBytes\": 391772778\n },\n {\n \"names\": [\n \"gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6\",\n \"gcr.io/google-samples/gb-frontend:v6\"\n ],\n \"sizeBytes\": 373099368\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa\",\n \"k8s.gcr.io/etcd:3.3.15\"\n ],\n \"sizeBytes\": 246640776\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71\",\n \"gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0\"\n ],\n \"sizeBytes\": 225358913\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb\",\n \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\"\n ],\n \"sizeBytes\": 195659796\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1\"\n ],\n \"sizeBytes\": 185406766\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1\"\n ],\n \"sizeBytes\": 153790666\n },\n {\n \"names\": [\n \"httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a\",\n \"httpd:2.4.39-alpine\"\n ],\n \"sizeBytes\": 126894770\n },\n {\n \"names\": [\n \"httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"httpd:2.4.38-alpine\"\n ],\n \"sizeBytes\": 123781643\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0\",\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1\"\n ],\n \"sizeBytes\": 96768084\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0\",\n \"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10\"\n ],\n \"sizeBytes\": 61365829\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727\",\n \"gcr.io/kubernetes-e2e-test-images/agnhost:2.6\"\n ],\n \"sizeBytes\": 57345321\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f\",\n \"quay.io/k8scsi/csi-provisioner:v1.4.0-rc1\"\n ],\n \"sizeBytes\": 54431016\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70\",\n \"quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1\"\n 
],\n \"sizeBytes\": 51703561\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2\"\n ],\n \"sizeBytes\": 49771411\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13\",\n \"quay.io/k8scsi/csi-attacher:v1.2.0\"\n ],\n \"sizeBytes\": 46226754\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838\",\n \"quay.io/k8scsi/csi-attacher:v1.1.0\"\n ],\n \"sizeBytes\": 42839085\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1\",\n \"quay.io/k8scsi/csi-resizer:v0.2.0\"\n ],\n \"sizeBytes\": 42817100\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6\",\n \"quay.io/k8scsi/csi-resizer:v0.1.0\"\n ],\n \"sizeBytes\": 42623056\n },\n {\n \"names\": [\n \"gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde\",\n \"gcr.io/google-containers/debian-base:0.4.1\"\n ],\n \"sizeBytes\": 42323657\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b\",\n \"gcr.io/kubernetes-e2e-test-images/nonroot:1.0\"\n ],\n \"sizeBytes\": 42321438\n },\n {\n \"names\": [\n \"redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858\",\n \"redis:5.0.5-alpine\"\n ],\n \"sizeBytes\": 29331594\n },\n {\n \"names\": [\n \"quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de\",\n \"quay.io/k8scsi/hostpathplugin:v1.2.0-rc5\"\n ],\n \"sizeBytes\": 28761497\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1\"\n ],\n \"sizeBytes\": 22933477\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19\",\n \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\n ],\n \"sizeBytes\": 21692741\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c\",\n \"gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0\"\n ],\n \"sizeBytes\": 19227369\n },\n {\n \"names\": [\n \"quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1\",\n \"quay.io/k8scsi/mock-driver:v2.1.0\"\n ],\n \"sizeBytes\": 16226335\n },\n {\n \"names\": [\n \"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"nginx:1.14-alpine\"\n ],\n \"sizeBytes\": 16032814\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599\",\n \"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0\"\n ],\n \"sizeBytes\": 15815995\n },\n {\n \"names\": [\n \"quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5\",\n \"quay.io/k8scsi/livenessprobe:v1.1.0\"\n ],\n \"sizeBytes\": 14967303\n },\n {\n \"names\": [\n 
\"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2\"\n ],\n \"sizeBytes\": 9371181\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd\",\n \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\n ],\n \"sizeBytes\": 9349974\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411\",\n \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\n ],\n \"sizeBytes\": 6757579\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc\",\n \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\n ],\n \"sizeBytes\": 4753501\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6\",\n \"gcr.io/kubernetes-e2e-test-images/kitten:1.0\"\n ],\n \"sizeBytes\": 4747037\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e\",\n \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\n ],\n \"sizeBytes\": 4732240\n },\n {\n \"names\": [\n \"alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10\",\n \"alpine:3.7\"\n ],\n \"sizeBytes\": 4206494\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2\",\n \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\n ],\n \"sizeBytes\": 1563521\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d\",\n \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\"\n ],\n \"sizeBytes\": 1450451\n },\n {\n \"names\": [\n \"busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a\",\n \"busybox:latest\"\n ],\n \"sizeBytes\": 1219782\n },\n {\n \"names\": [\n \"busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n \"busybox:1.29\"\n ],\n \"sizeBytes\": 1154361\n },\n {\n \"names\": [\n \"busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0\",\n \"busybox:1.27\"\n ],\n \"sizeBytes\": 1129289\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025\",\n \"k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea\",\n \"eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1\",\n \"k8s.gcr.io/pause:3.1\"\n ],\n \"sizeBytes\": 742472\n }\n ],\n \"volumesInUse\": [\n \"kubernetes.io/csi/csi-hostpath-provisioning-8445^15e49ff2-34ae-11ea-98fd-0e6a2517c83d\"\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"ip-10-250-7-77.ec2.internal\",\n \"selfLink\": \"/api/v1/nodes/ip-10-250-7-77.ec2.internal\",\n \"uid\": \"3773c02c-1fbb-4cbe-a527-8933de0a8978\",\n \"resourceVersion\": \"83387\",\n \"creationTimestamp\": \"2020-01-11T15:55:58Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/instance-type\": \"m5.large\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"failure-domain.beta.kubernetes.io/region\": \"us-east-1\",\n 
\"failure-domain.beta.kubernetes.io/zone\": \"us-east-1c\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"ip-10-250-7-77.ec2.internal\",\n \"kubernetes.io/os\": \"linux\",\n \"node.kubernetes.io/role\": \"node\",\n \"worker.garden.sapcloud.io/group\": \"worker-1\",\n \"worker.gardener.cloud/pool\": \"worker-1\"\n },\n \"annotations\": {\n \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-1155\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-ephemeral-9708\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-provisioning-1947\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-provisioning-2263\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-provisioning-3332\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-provisioning-4625\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-provisioning-5877\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-provisioning-638\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-provisioning-888\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-provisioning-9667\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-volume-1340\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-volume-2441\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-volume-expand-1240\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-volume-expand-1929\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-volume-expand-8983\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-volumeio-3164\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-hostpath-volumemode-2792\\\":\\\"ip-10-250-7-77.ec2.internal\\\",\\\"csi-mock-csi-mock-volumes-4004\\\":\\\"csi-mock-csi-mock-volumes-4004\\\",\\\"csi-mock-csi-mock-volumes-8663\\\":\\\"csi-mock-csi-mock-volumes-8663\\\"}\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"projectcalico.org/IPv4Address\": \"10.250.7.77/19\",\n \"projectcalico.org/IPv4IPIPTunnelAddr\": \"100.64.0.1\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n }\n },\n \"spec\": {\n \"podCIDR\": \"100.64.0.0/24\",\n \"podCIDRs\": [\n \"100.64.0.0/24\"\n ],\n \"providerID\": \"aws:///us-east-1c/i-0551dba45aad7abfa\"\n },\n \"status\": {\n \"capacity\": {\n \"attachable-volumes-aws-ebs\": \"25\",\n \"cpu\": \"2\",\n \"ephemeral-storage\": \"28056816Ki\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"7865496Ki\",\n \"pods\": \"110\"\n },\n \"allocatable\": {\n \"attachable-volumes-aws-ebs\": \"25\",\n \"cpu\": \"1920m\",\n \"ephemeral-storage\": \"27293670584\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"6577812679\",\n \"pods\": \"110\"\n },\n \"conditions\": [\n {\n \"type\": \"FrequentKubeletRestart\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:23Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:28Z\",\n \"reason\": \"NoFrequentKubeletRestart\",\n \"message\": \"kubelet is functioning properly\"\n },\n {\n \"type\": \"FrequentDockerRestart\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:23Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:28Z\",\n \"reason\": \"NoFrequentDockerRestart\",\n \"message\": \"docker is functioning properly\"\n },\n {\n \"type\": \"FrequentContainerdRestart\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:23Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:28Z\",\n \"reason\": \"NoFrequentContainerdRestart\",\n \"message\": \"containerd is functioning properly\"\n },\n {\n \"type\": 
\"KernelDeadlock\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:23Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:28Z\",\n \"reason\": \"KernelHasNoDeadlock\",\n \"message\": \"kernel has no deadlock\"\n },\n {\n \"type\": \"ReadonlyFilesystem\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:23Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:28Z\",\n \"reason\": \"FilesystemIsNotReadOnly\",\n \"message\": \"Filesystem is not read-only\"\n },\n {\n \"type\": \"CorruptDockerOverlay2\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:23Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:28Z\",\n \"reason\": \"NoCorruptDockerOverlay2\",\n \"message\": \"docker overlay2 is functioning properly\"\n },\n {\n \"type\": \"FrequentUnregisterNetDevice\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:28:23Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:28Z\",\n \"reason\": \"NoFrequentUnregisterNetDevice\",\n \"message\": \"node is functioning properly\"\n },\n {\n \"type\": \"NetworkUnavailable\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T15:56:16Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:16Z\",\n \"reason\": \"CalicoIsUp\",\n \"message\": \"Calico is running on this node\"\n },\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:29:18Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:29:18Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2020-01-11T20:29:18Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2020-01-11T20:29:18Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"10.250.7.77\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"ip-10-250-7-77.ec2.internal\"\n },\n {\n \"type\": \"InternalDNS\",\n \"address\": \"ip-10-250-7-77.ec2.internal\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"ec223a25fa514279256b8b36a522519a\",\n \"systemUUID\": \"ec223a25-fa51-4279-256b-8b36a522519a\",\n \"bootID\": \"652118c2-7bd4-4ebf-b248-be5c7a65a3aa\",\n \"kernelVersion\": \"4.19.86-coreos\",\n \"osImage\": \"Container Linux by CoreOS 2303.3.0 (Rhyolite)\",\n \"containerRuntimeVersion\": \"docker://18.6.3\",\n \"kubeletVersion\": \"v1.16.4\",\n \"kubeProxyVersion\": \"v1.16.4\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102\",\n \"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4\"\n ],\n \"sizeBytes\": 601224435\n },\n {\n \"names\": [\n 
\"eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0\"\n ],\n \"sizeBytes\": 551728251\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71\",\n \"gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0\"\n ],\n \"sizeBytes\": 225358913\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb\",\n \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\"\n ],\n \"sizeBytes\": 195659796\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1\"\n ],\n \"sizeBytes\": 185406766\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1\"\n ],\n \"sizeBytes\": 153790666\n },\n {\n \"names\": [\n \"httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a\",\n \"httpd:2.4.39-alpine\"\n ],\n \"sizeBytes\": 126894770\n },\n {\n \"names\": [\n \"httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"httpd:2.4.38-alpine\"\n ],\n \"sizeBytes\": 123781643\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93\",\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1\"\n ],\n \"sizeBytes\": 121711221\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0\",\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1\"\n ],\n \"sizeBytes\": 96768084\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45\",\n \"eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0\"\n ],\n \"sizeBytes\": 69546830\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727\",\n \"gcr.io/kubernetes-e2e-test-images/agnhost:2.6\"\n ],\n \"sizeBytes\": 57345321\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f\",\n \"quay.io/k8scsi/csi-provisioner:v1.4.0-rc1\"\n ],\n \"sizeBytes\": 54431016\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70\",\n \"quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1\"\n ],\n \"sizeBytes\": 51703561\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2\"\n ],\n \"sizeBytes\": 49771411\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290\",\n 
\"eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2\"\n ],\n \"sizeBytes\": 46809393\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13\",\n \"quay.io/k8scsi/csi-attacher:v1.2.0\"\n ],\n \"sizeBytes\": 46226754\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531\",\n \"eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3\"\n ],\n \"sizeBytes\": 44255363\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838\",\n \"quay.io/k8scsi/csi-attacher:v1.1.0\"\n ],\n \"sizeBytes\": 42839085\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1\",\n \"quay.io/k8scsi/csi-resizer:v0.2.0\"\n ],\n \"sizeBytes\": 42817100\n },\n {\n \"names\": [\n \"quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6\",\n \"quay.io/k8scsi/csi-resizer:v0.1.0\"\n ],\n \"sizeBytes\": 42623056\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d\",\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1\"\n ],\n \"sizeBytes\": 40616150\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42\",\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1\"\n ],\n \"sizeBytes\": 40067731\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1\",\n \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3\"\n ],\n \"sizeBytes\": 39933796\n },\n {\n \"names\": [\n \"redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858\",\n \"redis:5.0.5-alpine\"\n ],\n \"sizeBytes\": 29331594\n },\n {\n \"names\": [\n \"quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de\",\n \"quay.io/k8scsi/hostpathplugin:v1.2.0-rc5\"\n ],\n \"sizeBytes\": 28761497\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1\"\n ],\n \"sizeBytes\": 22933477\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19\",\n \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\n ],\n \"sizeBytes\": 21692741\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0\"\n ],\n \"sizeBytes\": 17584252\n },\n {\n \"names\": [\n \"quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1\",\n \"quay.io/k8scsi/mock-driver:v2.1.0\"\n ],\n \"sizeBytes\": 16226335\n },\n {\n \"names\": [\n \"nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"nginx:1.14-alpine\"\n ],\n \"sizeBytes\": 16032814\n },\n {\n \"names\": [\n 
\"quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599\",\n \"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0\"\n ],\n \"sizeBytes\": 15815995\n },\n {\n \"names\": [\n \"quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5\",\n \"quay.io/k8scsi/livenessprobe:v1.1.0\"\n ],\n \"sizeBytes\": 14967303\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f\",\n \"eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0\"\n ],\n \"sizeBytes\": 13732716\n },\n {\n \"names\": [\n \"gcr.io/google-containers/startup-script@sha256:be96df6845a2af0eb61b17817ed085ce41048e4044c541da7580570b61beff3e\",\n \"gcr.io/google-containers/startup-script:v1\"\n ],\n \"sizeBytes\": 12528443\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8\",\n \"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2\"\n ],\n \"sizeBytes\": 9371181\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd\",\n \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\n ],\n \"sizeBytes\": 9349974\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411\",\n \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\n ],\n \"sizeBytes\": 6757579\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc\",\n \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\n ],\n \"sizeBytes\": 4753501\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e\",\n \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\n ],\n \"sizeBytes\": 4732240\n },\n {\n \"names\": [\n \"gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0\",\n \"gcr.io/authenticated-image-pulling/alpine:3.7\"\n ],\n \"sizeBytes\": 4206620\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2\",\n \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\n ],\n \"sizeBytes\": 1563521\n },\n {\n \"names\": [\n \"busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n \"busybox:1.29\"\n ],\n \"sizeBytes\": 1154361\n },\n {\n \"names\": [\n \"eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025\",\n \"k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea\",\n \"eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1\",\n \"k8s.gcr.io/pause:3.1\"\n ],\n \"sizeBytes\": 742472\n }\n ]\n }\n }\n ]\n}\n{\n \"kind\": \"EventList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/events\",\n \"resourceVersion\": \"25461\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"pod0-system-node-critical.15e8ebda453ab2ad\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/pod0-system-node-critical.15e8ebda453ab2ad\",\n \"uid\": 
\"06ecb949-9818-48e2-b686-48cd38cc36df\",\n \"resourceVersion\": \"8393\",\n \"creationTimestamp\": \"2020-01-11T19:29:52Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"pod0-system-node-critical\",\n \"uid\": \"e448e7f4-4339-439c-b61c-1e2974dd9c36\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"43564\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/pod0-system-node-critical to ip-10-250-27-25.ec2.internal\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": null,\n \"lastTimestamp\": null,\n \"type\": \"Normal\",\n \"eventTime\": \"2020-01-11T19:29:52.090227Z\",\n \"action\": \"Binding\",\n \"reportingComponent\": \"default-scheduler\",\n \"reportingInstance\": \"default-scheduler-kube-scheduler-6f8f595df8-tfkxs\"\n },\n {\n \"metadata\": {\n \"name\": \"pod0-system-node-critical.15e8ebda71421246\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/pod0-system-node-critical.15e8ebda71421246\",\n \"uid\": \"41eb509c-b4ec-4298-870a-66d68692990a\",\n \"resourceVersion\": \"8395\",\n \"creationTimestamp\": \"2020-01-11T19:29:52Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"pod0-system-node-critical\",\n \"uid\": \"e448e7f4-4339-439c-b61c-1e2974dd9c36\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"43565\"\n },\n \"reason\": \"FailedCreatePodSandBox\",\n \"message\": \"Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \\\"b3aac9b0ab1a570c8de4910391712e7908750081420373b4def02e8e4b1d0ac7\\\" network for pod \\\"pod0-system-node-critical\\\": networkPlugin cni failed to set up pod \\\"pod0-system-node-critical_kube-system\\\" network: pods \\\"pod0-system-node-critical\\\" not found\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T19:29:52Z\",\n \"lastTimestamp\": \"2020-01-11T19:29:52Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"pod1-system-cluster-critical.15e8ebda5545353f\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/pod1-system-cluster-critical.15e8ebda5545353f\",\n \"uid\": \"bb172de9-a19f-4547-ac8d-1ef66187c5cc\",\n \"resourceVersion\": \"8394\",\n \"creationTimestamp\": \"2020-01-11T19:29:52Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"pod1-system-cluster-critical\",\n \"uid\": \"6925cb2e-686c-4b77-8d64-7bd3398b18ea\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"43567\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/pod1-system-cluster-critical to ip-10-250-27-25.ec2.internal\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": null,\n \"lastTimestamp\": null,\n \"type\": \"Normal\",\n \"eventTime\": \"2020-01-11T19:29:52.359351Z\",\n \"action\": \"Binding\",\n \"reportingComponent\": \"default-scheduler\",\n \"reportingInstance\": \"default-scheduler-kube-scheduler-6f8f595df8-tfkxs\"\n },\n {\n \"metadata\": {\n \"name\": \"pod1-system-cluster-critical.15e8ebf6f939abd7\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/pod1-system-cluster-critical.15e8ebf6f939abd7\",\n \"uid\": 
\"e1befabc-ae3d-48ac-a2ee-834909507bd7\",\n \"resourceVersion\": \"8607\",\n \"creationTimestamp\": \"2020-01-11T19:31:55Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"pod1-system-cluster-critical\",\n \"uid\": \"6925cb2e-686c-4b77-8d64-7bd3398b18ea\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"43568\"\n },\n \"reason\": \"FailedMount\",\n \"message\": \"Unable to attach or mount volumes: unmounted volumes=[default-token-2dtqk], unattached volumes=[default-token-2dtqk]: timed out waiting for the condition\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T19:31:55Z\",\n \"lastTimestamp\": \"2020-01-11T19:31:55Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n }\n ]\n}\n{\n \"kind\": \"ReplicationControllerList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/replicationcontrollers\",\n \"resourceVersion\": \"83468\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/services\",\n \"resourceVersion\": \"83476\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"addons-nginx-ingress-controller\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/addons-nginx-ingress-controller\",\n \"uid\": \"8e7f214c-6f07-46dd-86c9-8b5a8a791919\",\n \"resourceVersion\": \"476\",\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"chart\": \"nginx-ingress-0.8.0\",\n \"component\": \"controller\",\n \"heritage\": \"Tiller\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"service.beta.kubernetes.io/aws-load-balancer-proxy-protocol\": \"*\"\n },\n \"finalizers\": [\n \"service.kubernetes.io/load-balancer-cleanup\"\n ]\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"https\",\n \"protocol\": \"TCP\",\n \"port\": 443,\n \"targetPort\": 443,\n \"nodePort\": 32298\n },\n {\n \"name\": \"http\",\n \"protocol\": \"TCP\",\n \"port\": 80,\n \"targetPort\": 80,\n \"nodePort\": 32046\n }\n ],\n \"selector\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"controller\",\n \"release\": \"addons\"\n },\n \"clusterIP\": \"100.107.194.218\",\n \"type\": \"LoadBalancer\",\n \"sessionAffinity\": \"None\",\n \"externalTrafficPolicy\": \"Cluster\"\n },\n \"status\": {\n \"loadBalancer\": {\n \"ingress\": [\n {\n \"hostname\": \"a8e7f214c6f0746dd86c98b5a8a79191-1211236115.us-east-1.elb.amazonaws.com\"\n }\n ]\n }\n }\n },\n {\n \"metadata\": {\n \"name\": \"addons-nginx-ingress-nginx-ingress-k8s-backend\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/addons-nginx-ingress-nginx-ingress-k8s-backend\",\n \"uid\": \"a180c1bd-0e3a-47ea-8b29-4faf2c31200c\",\n \"resourceVersion\": \"210\",\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"chart\": \"nginx-ingress-0.8.0\",\n \"component\": \"nginx-ingress-k8s-backend\",\n \"heritage\": \"Tiller\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"protocol\": \"TCP\",\n \"port\": 80,\n \"targetPort\": 8080\n }\n ],\n \"selector\": {\n \"app\": \"nginx-ingress\",\n \"component\": 
\"nginx-ingress-k8s-backend\",\n \"release\": \"addons\"\n },\n \"clusterIP\": \"100.104.186.216\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n },\n {\n \"metadata\": {\n \"name\": \"blackbox-exporter\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/blackbox-exporter\",\n \"uid\": \"0a27f77d-a909-40a5-b5bf-330c57009eb9\",\n \"resourceVersion\": \"334\",\n \"creationTimestamp\": \"2020-01-11T15:54:39Z\",\n \"labels\": {\n \"component\": \"blackbox-exporter\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"probe\",\n \"protocol\": \"TCP\",\n \"port\": 9115,\n \"targetPort\": 9115\n }\n ],\n \"selector\": {\n \"component\": \"blackbox-exporter\"\n },\n \"clusterIP\": \"100.107.248.105\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/calico-typha\",\n \"uid\": \"464e498d-6a2c-4a6f-9bdb-693e48354eb8\",\n \"resourceVersion\": \"260\",\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"k8s-app\": \"calico-typha\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"calico-typha\",\n \"protocol\": \"TCP\",\n \"port\": 5473,\n \"targetPort\": \"calico-typha\"\n }\n ],\n \"selector\": {\n \"k8s-app\": \"calico-typha\"\n },\n \"clusterIP\": \"100.106.19.47\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/kube-dns\",\n \"uid\": \"c8e1a811-8257-4107-b32e-8f73d49eab51\",\n \"resourceVersion\": \"352\",\n \"creationTimestamp\": \"2020-01-11T15:54:40Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"kubernetes.io/cluster-service\": \"true\",\n \"kubernetes.io/name\": \"CoreDNS\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"dns\",\n \"protocol\": \"UDP\",\n \"port\": 53,\n \"targetPort\": 8053\n },\n {\n \"name\": \"dns-tcp\",\n \"protocol\": \"TCP\",\n \"port\": 53,\n \"targetPort\": 8053\n },\n {\n \"name\": \"metrics\",\n \"protocol\": \"TCP\",\n \"port\": 9153,\n \"targetPort\": 9153\n }\n ],\n \"selector\": {\n \"k8s-app\": \"kube-dns\"\n },\n \"clusterIP\": \"100.104.0.10\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/kube-proxy\",\n \"uid\": \"5118d3a9-b982-4c24-84e7-4abf925d875e\",\n \"resourceVersion\": \"322\",\n \"creationTimestamp\": \"2020-01-11T15:54:39Z\",\n \"labels\": {\n \"app\": \"kubernetes\",\n \"role\": \"proxy\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"metrics\",\n \"protocol\": \"TCP\",\n \"port\": 10249,\n \"targetPort\": 10249\n }\n ],\n \"selector\": {\n \"app\": \"kubernetes\",\n \"role\": \"proxy\"\n },\n \"clusterIP\": \"None\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n },\n {\n \"metadata\": {\n \"name\": \"kubernetes-dashboard\",\n 
\"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/kubernetes-dashboard\",\n \"uid\": \"2920775c-edf2-4fa7-8796-d6a0ec51e766\",\n \"resourceVersion\": \"204\",\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"app\": \"kubernetes-dashboard\",\n \"chart\": \"kubernetes-dashboard-0.2.0\",\n \"heritage\": \"Tiller\",\n \"kubernetes.io/cluster-service\": \"true\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"protocol\": \"TCP\",\n \"port\": 443,\n \"targetPort\": 8443\n }\n ],\n \"selector\": {\n \"app\": \"kubernetes-dashboard\",\n \"release\": \"addons\"\n },\n \"clusterIP\": \"100.106.164.167\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n },\n {\n \"metadata\": {\n \"name\": \"metrics-server\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/metrics-server\",\n \"uid\": \"613a9d99-eb5d-4093-abc8-939c8c32ecf8\",\n \"resourceVersion\": \"324\",\n \"creationTimestamp\": \"2020-01-11T15:54:39Z\",\n \"labels\": {\n \"kubernetes.io/name\": \"metrics-server\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"protocol\": \"TCP\",\n \"port\": 443,\n \"targetPort\": 8443\n }\n ],\n \"selector\": {\n \"k8s-app\": \"metrics-server\"\n },\n \"clusterIP\": \"100.108.63.140\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n },\n {\n \"metadata\": {\n \"name\": \"node-exporter\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/node-exporter\",\n \"uid\": \"e4eb2685-b25f-406e-8db8-f0ae318ac6d0\",\n \"resourceVersion\": \"361\",\n \"creationTimestamp\": \"2020-01-11T15:54:40Z\",\n \"labels\": {\n \"component\": \"node-exporter\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"metrics\",\n \"protocol\": \"TCP\",\n \"port\": 16909,\n \"targetPort\": 16909\n }\n ],\n \"selector\": {\n \"component\": \"node-exporter\"\n },\n \"clusterIP\": \"None\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n },\n {\n \"metadata\": {\n \"name\": \"vpn-shoot\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/vpn-shoot\",\n \"uid\": \"507c9c1d-adf7-46b9-86a7-6e4fc820213f\",\n \"resourceVersion\": \"478\",\n \"creationTimestamp\": \"2020-01-11T15:54:39Z\",\n \"labels\": {\n \"app\": \"vpn-shoot\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"finalizers\": [\n \"service.kubernetes.io/load-balancer-cleanup\"\n ]\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"openvpn\",\n \"protocol\": \"TCP\",\n \"port\": 4314,\n \"targetPort\": 1194,\n \"nodePort\": 32265\n }\n ],\n \"selector\": {\n \"app\": \"vpn-shoot\"\n },\n \"clusterIP\": \"100.108.198.84\",\n \"type\": \"LoadBalancer\",\n \"sessionAffinity\": \"None\",\n \"externalTrafficPolicy\": \"Cluster\"\n },\n \"status\": {\n \"loadBalancer\": {\n \"ingress\": [\n {\n \"hostname\": \"a507c9c1dadf746b986a76e4fc820213-1778415213.us-east-1.elb.amazonaws.com\"\n }\n ]\n }\n }\n }\n ]\n}\n{\n \"kind\": \"DaemonSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets\",\n \"resourceVersion\": \"83482\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": 
\"calico-node\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/calico-node\",\n \"uid\": \"91e96e4f-caf1-4864-8705-46792ded2aad\",\n \"resourceVersion\": \"966\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-node\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"calico-node\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-node\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-calico\": \"3bd46cb7beef613e0b3225b3776526289b7ba8abd2ae8dad55b1451c9465ae06\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"var-run-calico\",\n \"hostPath\": {\n \"path\": \"/var/run/calico\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"var-lib-calico\",\n \"hostPath\": {\n \"path\": \"/var/lib/calico\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"cni-bin-dir\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cni-net-dir\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"policysync\",\n \"hostPath\": {\n \"path\": \"/var/run/nodeagent\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"flexvol-driver-host\",\n \"hostPath\": {\n \"path\": \"/var/lib/kubelet/volumeplugins/nodeagent~uds\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"initContainers\": [\n {\n \"name\": \"install-cni\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1\",\n \"command\": [\n \"/install-cni.sh\"\n ],\n \"env\": [\n {\n \"name\": \"CNI_CONF_NAME\",\n \"value\": \"10-calico.conflist\"\n },\n {\n \"name\": \"CNI_NETWORK_CONFIG\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"cni_network_config\"\n }\n }\n },\n {\n \"name\": \"KUBERNETES_NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n },\n {\n \"name\": \"CNI_MTU\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"veth_mtu\"\n }\n }\n },\n {\n \"name\": \"SLEEP\",\n \"value\": \"false\"\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"cni-bin-dir\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": \"cni-net-dir\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n },\n {\n \"name\": \"flexvol-driver\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2\",\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"flexvol-driver-host\",\n \"mountPath\": \"/host/driver\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n 
\"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"containers\": [\n {\n \"name\": \"calico-node\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1\",\n \"env\": [\n {\n \"name\": \"USE_POD_CIDR\",\n \"value\": \"true\"\n },\n {\n \"name\": \"DATASTORE_TYPE\",\n \"value\": \"kubernetes\"\n },\n {\n \"name\": \"FELIX_TYPHAK8SSERVICENAME\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"typha_service_name\"\n }\n }\n },\n {\n \"name\": \"FELIX_LOGSEVERITYSCREEN\",\n \"value\": \"error\"\n },\n {\n \"name\": \"CLUSTER_TYPE\",\n \"value\": \"k8s,bgp\"\n },\n {\n \"name\": \"CALICO_DISABLE_FILE_LOGGING\",\n \"value\": \"true\"\n },\n {\n \"name\": \"FELIX_DEFAULTENDPOINTTOHOSTACTION\",\n \"value\": \"ACCEPT\"\n },\n {\n \"name\": \"IP\",\n \"value\": \"autodetect\"\n },\n {\n \"name\": \"FELIX_IPV6SUPPORT\",\n \"value\": \"false\"\n },\n {\n \"name\": \"FELIX_IPINIPMTU\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"veth_mtu\"\n }\n }\n },\n {\n \"name\": \"WAIT_FOR_DATASTORE\",\n \"value\": \"true\"\n },\n {\n \"name\": \"CALICO_IPV4POOL_CIDR\",\n \"value\": \"100.64.0.0/11\"\n },\n {\n \"name\": \"FELIX_IPINIPENABLED\",\n \"value\": \"true\"\n },\n {\n \"name\": \"CALICO_IPV4POOL_IPIP\",\n \"value\": \"Always\"\n },\n {\n \"name\": \"CALICO_NETWORKING_BACKEND\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"calico_backend\"\n }\n }\n },\n {\n \"name\": \"NODENAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n },\n {\n \"name\": \"FELIX_HEALTHENABLED\",\n \"value\": \"true\"\n },\n {\n \"name\": \"FELIX_NATPORTRANGE\",\n \"value\": \"32768:65535\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"500m\",\n \"memory\": \"700Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"var-run-calico\",\n \"mountPath\": \"/var/run/calico\"\n },\n {\n \"name\": \"var-lib-calico\",\n \"mountPath\": \"/var/lib/calico\"\n },\n {\n \"name\": \"policysync\",\n \"mountPath\": \"/var/run/nodeagent\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/liveness\",\n \"port\": 9099,\n \"host\": \"localhost\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 6\n },\n \"readinessProbe\": {\n \"exec\": {\n \"command\": [\n \"/bin/calico-node\",\n \"-felix-ready\",\n \"-bird-ready\"\n ]\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 0,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"calico-node\",\n \"serviceAccount\": \"calico-node\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n 
\"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\"\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 2,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 2,\n \"numberReady\": 2,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 2,\n \"numberAvailable\": 2\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\",\n \"uid\": \"c35c6d75-67ca-4cb1-bf4d-469bd2412bbb\",\n \"resourceVersion\": \"830\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:40Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"kubernetes\",\n \"role\": \"proxy\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"kubernetes\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"role\": \"proxy\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-componentconfig\": \"af9cca28054c46807a00143b1fe1cdb407f602386417797662022e8c2aea3637\",\n \"checksum/secret-kube-proxy\": \"b2444368a402b867fc3e94db0dd516877e7ff1d724e094d83d9c3ca5b6822b3c\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kubeconfig\",\n \"secret\": {\n \"secretName\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"kube-proxy-config\",\n \"configMap\": {\n \"name\": \"kube-proxy-config\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"ssl-certs-hosts\",\n \"hostPath\": {\n \"path\": \"/usr/share/ca-certificates\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"systembussocket\",\n \"hostPath\": {\n \"path\": \"/var/run/dbus/system_bus_socket\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kernel-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4\",\n \"command\": [\n \"/hyperkube\",\n \"kube-proxy\",\n \"--config=/var/lib/kube-proxy-config/config.yaml\",\n \"--v=2\"\n ],\n \"ports\": [\n {\n \"name\": \"metrics\",\n \"hostPort\": 10249,\n \"containerPort\": 10249,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"20m\",\n \"memory\": \"64Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"kubeconfig\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"kube-proxy-config\",\n \"mountPath\": \"/var/lib/kube-proxy-config\"\n },\n {\n \"name\": \"ssl-certs-hosts\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ssl/certs\"\n },\n {\n \"name\": \"systembussocket\",\n \"mountPath\": \"/var/run/dbus/system_bus_socket\"\n },\n {\n \"name\": \"kernel-modules\",\n \"mountPath\": \"/lib/modules\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n 
\"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"automountServiceAccountToken\": false,\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 2,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 2,\n \"numberReady\": 2,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 2,\n \"numberAvailable\": 2\n }\n },\n {\n \"metadata\": {\n \"name\": \"node-exporter\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/node-exporter\",\n \"uid\": \"37e522a7-69f0-442e-9a91-566375272519\",\n \"resourceVersion\": \"941\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:40Z\",\n \"labels\": {\n \"component\": \"node-exporter\",\n \"garden.sapcloud.io/role\": \"monitoring\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"component\": \"node-exporter\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"component\": \"node-exporter\",\n \"garden.sapcloud.io/role\": \"monitoring\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"proc\",\n \"hostPath\": {\n \"path\": \"/proc\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"rootfs\",\n \"hostPath\": {\n \"path\": \"/\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"node-exporter\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1\",\n \"command\": [\n \"/bin/node_exporter\",\n \"--path.procfs=/host/proc\",\n \"--path.sysfs=/host/sys\",\n \"--collector.filesystem.ignored-fs-types=^(tmpfs|cgroup|nsfs|fuse\\\\.lxcfs|rpc_pipefs)$\",\n \"--collector.filesystem.ignored-mount-points=^/(rootfs/|host/)?(sys|proc|dev|host|etc|var/lib/docker)($|/)\",\n \"--web.listen-address=:16909\",\n \"--log.level=error\",\n \"--no-collector.netclass\"\n ],\n \"ports\": [\n {\n \"name\": \"scrape\",\n \"hostPort\": 16909,\n \"containerPort\": 16909,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"25m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"5m\",\n \"memory\": \"10Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"proc\",\n \"readOnly\": true,\n \"mountPath\": \"/host/proc\"\n },\n {\n \"name\": \"sys\",\n \"readOnly\": true,\n \"mountPath\": \"/host/sys\"\n },\n {\n \"name\": \"rootfs\",\n \"readOnly\": true,\n \"mountPath\": \"/rootfs\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/\",\n 
\"port\": 16909,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 5,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/\",\n \"port\": 16909,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 5,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"node-exporter\",\n \"serviceAccount\": \"node-exporter\",\n \"automountServiceAccountToken\": false,\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"runAsNonRoot\": true\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 2,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 2,\n \"numberReady\": 2,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 2,\n \"numberAvailable\": 2\n }\n },\n {\n \"metadata\": {\n \"name\": \"node-problem-detector\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/node-problem-detector\",\n \"uid\": \"6c84e849-b566-4928-b4e7-19d0b4e45433\",\n \"resourceVersion\": \"949\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:40Z\",\n \"labels\": {\n \"app.kubernetes.io/instance\": \"shoot-core\",\n \"app.kubernetes.io/name\": \"node-problem-detector\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"helm.sh/chart\": \"node-problem-detector-1.6.1\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"node-problem-detector\",\n \"app.kubernetes.io/instance\": \"shoot-core\",\n \"app.kubernetes.io/name\": \"node-problem-detector\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"node-problem-detector\",\n \"app.kubernetes.io/instance\": \"shoot-core\",\n \"app.kubernetes.io/name\": \"node-problem-detector\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/config\": \"4f82034ff1169816c591ccb023d905e3fd124303811ad9fbbc4ac46e116bec88\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"log\",\n \"hostPath\": {\n \"path\": \"/var/log/\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"localtime\",\n \"hostPath\": {\n \"path\": \"/etc/localtime\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"custom-config\",\n \"configMap\": {\n \"name\": \"node-problem-detector-custom-config\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n 
\"name\": \"node-problem-detector\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1\",\n \"command\": [\n \"/bin/sh\",\n \"-c\",\n \"exec /node-problem-detector --logtostderr --config.system-log-monitor=/config/kernel-monitor.json,/config/docker-monitor.json,/config/systemd-monitor.json .. --config.custom-plugin-monitor=/config/kernel-monitor-counter.json,/config/systemd-monitor-counter.json .. --config.system-stats-monitor=/config/system-stats-monitor.json --prometheus-address=0.0.0.0 --prometheus-port=20257\"\n ],\n \"ports\": [\n {\n \"name\": \"exporter\",\n \"containerPort\": 20257,\n \"protocol\": \"TCP\"\n }\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"200m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"20m\",\n \"memory\": \"20Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"log\",\n \"mountPath\": \"/var/log\"\n },\n {\n \"name\": \"localtime\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/localtime\"\n },\n {\n \"name\": \"custom-config\",\n \"readOnly\": true,\n \"mountPath\": \"/custom-config\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"node-problem-detector\",\n \"serviceAccount\": \"node-problem-detector\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 2,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 2,\n \"numberReady\": 2,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 2,\n \"numberAvailable\": 2\n }\n }\n ]\n}\n{\n \"kind\": \"DeploymentList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments\",\n \"resourceVersion\": \"83489\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"addons-kubernetes-dashboard\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/addons-kubernetes-dashboard\",\n \"uid\": \"a81b67fe-cdd4-4ff6-8740-44c5d772b91d\",\n \"resourceVersion\": \"961\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"app\": \"kubernetes-dashboard\",\n \"chart\": \"kubernetes-dashboard-0.2.0\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"heritage\": \"Tiller\",\n \"origin\": \"gardener\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"kubernetes-dashboard\",\n \"chart\": \"kubernetes-dashboard-0.2.0\",\n \"heritage\": \"Tiller\",\n \"release\": \"addons\"\n }\n },\n \"template\": {\n \"metadata\": {\n 
\"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"kubernetes-dashboard\",\n \"chart\": \"kubernetes-dashboard-0.2.0\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"heritage\": \"Tiller\",\n \"origin\": \"gardener\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kubernetes-dashboard-certs\",\n \"secret\": {\n \"secretName\": \"kubernetes-dashboard-certs\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"tmp-volume\",\n \"emptyDir\": {}\n }\n ],\n \"containers\": [\n {\n \"name\": \"kubernetes-dashboard\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1\",\n \"args\": [\n \"--auto-generate-certificates\",\n \"--authentication-mode=token\"\n ],\n \"ports\": [\n {\n \"name\": \"https\",\n \"containerPort\": 8443,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"256Mi\"\n },\n \"requests\": {\n \"cpu\": \"50m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"kubernetes-dashboard-certs\",\n \"mountPath\": \"/certs\"\n },\n {\n \"name\": \"tmp-volume\",\n \"mountPath\": \"/tmp\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/\",\n \"port\": 8443,\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 30,\n \"timeoutSeconds\": 30,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"addons-kubernetes-dashboard\",\n \"serviceAccount\": \"addons-kubernetes-dashboard\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n \"maxSurge\": 0\n }\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 1,\n \"updatedReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:55:18Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:32Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"addons-kubernetes-dashboard-78954cc66b\\\" has successfully progressed.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"addons-nginx-ingress-controller\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/addons-nginx-ingress-controller\",\n \"uid\": \"b3c002e8-e19f-46d9-8672-9e6cfca5c1e4\",\n \"resourceVersion\": \"1066\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"chart\": \"nginx-ingress-0.8.0\",\n \"component\": \"controller\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"heritage\": \"Tiller\",\n 
\"origin\": \"gardener\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"controller\",\n \"release\": \"addons\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"controller\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"origin\": \"gardener\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/config\": \"935e3cf465a66f78c2a14ed288dc13cc30649bd0147ef9707ce3da0fd5306c8c\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"nginx-ingress-controller\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0\",\n \"args\": [\n \"/nginx-ingress-controller\",\n \"--default-backend-service=kube-system/addons-nginx-ingress-nginx-ingress-k8s-backend\",\n \"--enable-ssl-passthrough=true\",\n \"--publish-service=kube-system/addons-nginx-ingress-controller\",\n \"--election-id=ingress-controller-leader\",\n \"--ingress-class=nginx\",\n \"--update-status=true\",\n \"--annotations-prefix=nginx.ingress.kubernetes.io\",\n \"--configmap=kube-system/addons-nginx-ingress-controller\"\n ],\n \"ports\": [\n {\n \"name\": \"http\",\n \"containerPort\": 80,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"https\",\n \"containerPort\": 443,\n \"protocol\": \"TCP\"\n }\n ],\n \"env\": [\n {\n \"name\": \"POD_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.name\"\n }\n }\n },\n {\n \"name\": \"POD_NAMESPACE\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"2\",\n \"memory\": \"1Gi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n }\n },\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10254,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10254,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"ALL\"\n ]\n },\n \"runAsUser\": 33\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 60,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"addons-nginx-ingress\",\n \"serviceAccount\": \"addons-nginx-ingress\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": \"25%\",\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 1,\n \"updatedReplicas\": 
1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:57:02Z\",\n \"lastTransitionTime\": \"2020-01-11T15:57:02Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:57:02Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"addons-nginx-ingress-controller-7c75bb76db\\\" has successfully progressed.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"addons-nginx-ingress-nginx-ingress-k8s-backend\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/addons-nginx-ingress-nginx-ingress-k8s-backend\",\n \"uid\": \"91890f2c-dbd8-45c5-b2c2-d70899c5106f\",\n \"resourceVersion\": \"936\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"chart\": \"nginx-ingress-0.8.0\",\n \"component\": \"nginx-ingress-k8s-backend\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"heritage\": \"Tiller\",\n \"origin\": \"gardener\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"nginx-ingress-k8s-backend\",\n \"release\": \"addons\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"nginx-ingress-k8s-backend\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"origin\": \"gardener\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"nginx-ingress-nginx-ingress-k8s-backend\",\n \"image\": \"eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0\",\n \"ports\": [\n {\n \"containerPort\": 8080,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {},\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthy\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 30,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 60,\n \"dnsPolicy\": \"ClusterFirst\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": \"25%\",\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 1,\n \"updatedReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:24Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:24Z\",\n \"reason\": 
\"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:24Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d\\\" has successfully progressed.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"blackbox-exporter\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/blackbox-exporter\",\n \"uid\": \"6422fbad-4b6a-48a2-8f5e-0c23c1280fc2\",\n \"resourceVersion\": \"1003\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:40Z\",\n \"labels\": {\n \"component\": \"blackbox-exporter\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"component\": \"blackbox-exporter\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"component\": \"blackbox-exporter\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"networking.gardener.cloud/from-seed\": \"allowed\",\n \"networking.gardener.cloud/to-dns\": \"allowed\",\n \"networking.gardener.cloud/to-public-networks\": \"allowed\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-blackbox-exporter-config\": \"837ab259c403ac736c591b304338d575f29e6d82794e00685e39de9b867ddae9\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"blackbox-exporter-config\",\n \"configMap\": {\n \"name\": \"blackbox-exporter-config\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"blackbox-exporter\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0\",\n \"args\": [\n \"--config.file=/etc/blackbox_exporter/blackbox.yaml\"\n ],\n \"ports\": [\n {\n \"name\": \"probe\",\n \"containerPort\": 9115,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"10m\",\n \"memory\": \"35Mi\"\n },\n \"requests\": {\n \"cpu\": \"5m\",\n \"memory\": \"5Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"blackbox-exporter-config\",\n \"mountPath\": \"/etc/blackbox_exporter\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"dnsConfig\": {\n \"options\": [\n {\n \"name\": \"ndots\",\n \"value\": \"3\"\n }\n ]\n }\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": \"25%\",\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 1,\n 
\"updatedReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:40Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:40Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:40Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"blackbox-exporter-54bb5f55cc\\\" has successfully progressed.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-kube-controllers\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/calico-kube-controllers\",\n \"uid\": \"6657786b-d4d3-4abc-9504-0373b2231c32\",\n \"resourceVersion\": \"975\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-kube-controllers\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"calico-kube-controllers\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"name\": \"calico-kube-controllers\",\n \"namespace\": \"kube-system\",\n \"creationTimestamp\": null,\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-kube-controllers\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"calico-kube-controllers\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2\",\n \"env\": [\n {\n \"name\": \"ENABLED_CONTROLLERS\",\n \"value\": \"node\"\n },\n {\n \"name\": \"DATASTORE_TYPE\",\n \"value\": \"kubernetes\"\n }\n ],\n \"resources\": {},\n \"readinessProbe\": {\n \"exec\": {\n \"command\": [\n \"/usr/bin/check-status\",\n \"-r\"\n ]\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"ALL\"\n ]\n },\n \"privileged\": true,\n \"allowPrivilegeEscalation\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"calico-kube-controllers\",\n \"serviceAccount\": \"calico-kube-controllers\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"Recreate\"\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 1,\n \"updatedReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": 
\"2020-01-11T15:56:35Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:35Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:35Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"calico-kube-controllers-79bcd784b6\\\" has successfully progressed.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha-deploy\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/calico-typha-deploy\",\n \"uid\": \"7df27ef2-1d83-41d7-bf68-e62a84e2050d\",\n \"resourceVersion\": \"5071\",\n \"generation\": 3,\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-typha\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"cluster-autoscaler.kubernetes.io/safe-to-evict\": \"true\",\n \"deployment.kubernetes.io/revision\": \"3\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"calico-typha\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-typha\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"calico-typha\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2\",\n \"ports\": [\n {\n \"name\": \"calico-typha\",\n \"hostPort\": 5473,\n \"containerPort\": 5473,\n \"protocol\": \"TCP\"\n }\n ],\n \"env\": [\n {\n \"name\": \"USE_POD_CIDR\",\n \"value\": \"true\"\n },\n {\n \"name\": \"TYPHA_LOGSEVERITYSCREEN\",\n \"value\": \"error\"\n },\n {\n \"name\": \"TYPHA_LOGFILEPATH\",\n \"value\": \"none\"\n },\n {\n \"name\": \"TYPHA_LOGSEVERITYSYS\",\n \"value\": \"none\"\n },\n {\n \"name\": \"TYPHA_CONNECTIONREBALANCINGMODE\",\n \"value\": \"kubernetes\"\n },\n {\n \"name\": \"TYPHA_DATASTORETYPE\",\n \"value\": \"kubernetes\"\n },\n {\n \"name\": \"TYPHA_HEALTHENABLED\",\n \"value\": \"true\"\n }\n ],\n \"resources\": {},\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/liveness\",\n \"port\": 9098,\n \"host\": \"localhost\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 30,\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 30,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/readiness\",\n \"port\": 9098,\n \"host\": \"localhost\",\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"calico-typha\",\n \"serviceAccount\": \"calico-typha\",\n \"hostNetwork\": true,\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n 
{\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"Recreate\"\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 3,\n \"replicas\": 1,\n \"updatedReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T16:01:13Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"calico-typha-deploy-9f6b455c4\\\" has successfully progressed.\"\n },\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T16:21:15Z\",\n \"lastTransitionTime\": \"2020-01-11T16:21:15Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha-horizontal-autoscaler\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/calico-typha-horizontal-autoscaler\",\n \"uid\": \"d70ec6c2-6538-499f-9c55-2d1195d52da3\",\n \"resourceVersion\": \"1061\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"calico-typha-autoscaler\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-calico-typha-horizontal-autoscaler\": \"1a5d7c29390e7895360fe594609b63d25e5c0f738181e178558e481a280cc668\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"autoscaler\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1\",\n \"command\": [\n \"/cluster-proportional-autoscaler\",\n \"--namespace=kube-system\",\n \"--configmap=calico-typha-horizontal-autoscaler\",\n \"--target=deployment/calico-typha-deploy\",\n \"--logtostderr=true\",\n \"--v=2\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"10m\"\n },\n \"requests\": {\n \"cpu\": \"10m\"\n }\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"typha-cpha\",\n \"serviceAccount\": \"typha-cpha\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"supplementalGroups\": [\n 65534\n ],\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": \"25%\",\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 1,\n \"updatedReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:57:01Z\",\n 
\"lastTransitionTime\": \"2020-01-11T15:57:01Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:57:01Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"calico-typha-horizontal-autoscaler-85c99966bb\\\" has successfully progressed.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha-vertical-autoscaler\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/calico-typha-vertical-autoscaler\",\n \"uid\": \"e678b6ca-c63b-457d-b744-d9d746c1872e\",\n \"resourceVersion\": \"1431\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:38Z\",\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"calico-typha-autoscaler\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-calico-typha-vertical-autoscaler\": \"19ab5f175584d9322622fd0316785e275a972fed46f30d9233d4f11c3cf33e91\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config\",\n \"configMap\": {\n \"name\": \"calico-typha-vertical-autoscaler\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"autoscaler\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1\",\n \"command\": [\n \"/cpvpa\",\n \"--target=deployment/calico-typha-deploy\",\n \"--namespace=kube-system\",\n \"--logtostderr=true\",\n \"--poll-period-seconds=30\",\n \"--v=2\",\n \"--config-file=/etc/config/typha-autoscaler\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"config\",\n \"mountPath\": \"/etc/config\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"typha-cpva\",\n \"serviceAccount\": \"typha-cpva\",\n \"securityContext\": {\n \"runAsUser\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": \"25%\",\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 1,\n \"updatedReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:59:49Z\",\n \"lastTransitionTime\": \"2020-01-11T15:59:49Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:59:49Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"calico-typha-vertical-autoscaler-5769b74b58\\\" has 
successfully progressed.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"coredns\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/coredns\",\n \"uid\": \"82c526b8-8ca2-46b5-93a6-c5f80b69ca04\",\n \"resourceVersion\": \"1027\",\n \"generation\": 2,\n \"creationTimestamp\": \"2020-01-11T15:54:40Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"kube-dns\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-dns\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"kube-dns\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"custom-config-volume\",\n \"configMap\": {\n \"name\": \"coredns-custom\",\n \"defaultMode\": 420,\n \"optional\": true\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns-udp\",\n \"containerPort\": 8053,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 8053,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"50m\",\n \"memory\": \"15Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"custom-config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns/custom\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"runAsNonRoot\": true\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n 
\"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 2,\n \"replicas\": 2,\n \"updatedReplicas\": 2,\n \"readyReplicas\": 2,\n \"availableReplicas\": 2,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:38Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:38Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:48Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"coredns-59c969ffb8\\\" has successfully progressed.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"metrics-server\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/metrics-server\",\n \"uid\": \"b17a3423-9a78-49bd-9424-56c313014e69\",\n \"resourceVersion\": \"979\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:40Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"metrics-server\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"metrics-server\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"name\": \"metrics-server\",\n \"creationTimestamp\": null,\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"metrics-server\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/secret-metrics-server\": \"18c8f7329c38af8674a3d83a23d53a96b4277faa82f6ca4a68e362dbe5481736\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"metrics-server\",\n \"secret\": {\n \"secretName\": \"metrics-server\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"metrics-server\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3\",\n \"command\": [\n \"/metrics-server\",\n \"--profiling=false\",\n \"--cert-dir=/home/certdir\",\n \"--secure-port=8443\",\n \"--kubelet-insecure-tls\",\n \"--tls-cert-file=/srv/metrics-server/tls/tls.crt\",\n \"--tls-private-key-file=/srv/metrics-server/tls/tls.key\",\n \"--v=2\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"80m\",\n \"memory\": \"400Mi\"\n },\n \"requests\": {\n \"cpu\": \"20m\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"metrics-server\",\n \"mountPath\": \"/srv/metrics-server/tls\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"Always\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"metrics-server\",\n \"serviceAccount\": \"metrics-server\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n 
\"maxUnavailable\": \"25%\",\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 1,\n \"updatedReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:35Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:35Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:35Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"metrics-server-7c797fd994\\\" has successfully progressed.\"\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"vpn-shoot\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/vpn-shoot\",\n \"uid\": \"6153232a-f0a2-4b15-bea8-69f498daa093\",\n \"resourceVersion\": \"1009\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:54:40Z\",\n \"labels\": {\n \"app\": \"vpn-shoot\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"vpn-shoot\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"vpn-shoot\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/secret-vpn-shoot\": \"4c8c1eea7f8805ec1de63090c4a6e5e8059ad270c4d6ec3de163dc88cbdbc62d\",\n \"checksum/secret-vpn-shoot-dh\": \"c4717efbc25c918c6a4f36a8118a448c861497005ab370cecf56c51216b726d3\",\n \"checksum/secret-vpn-shoot-tlsauth\": \"2845871f4fd567c8a83b7f3c08acdb14e85657e335eb1c1ca952880b8bb6ac28\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"vpn-shoot\",\n \"secret\": {\n \"secretName\": \"vpn-shoot\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"vpn-shoot-tlsauth\",\n \"secret\": {\n \"secretName\": \"vpn-shoot-tlsauth\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"vpn-shoot-dh\",\n \"secret\": {\n \"secretName\": \"vpn-shoot-dh\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"vpn-shoot\",\n \"image\": \"eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0\",\n \"env\": [\n {\n \"name\": \"SERVICE_NETWORK\",\n \"value\": \"100.104.0.0/13\"\n },\n {\n \"name\": \"POD_NETWORK\",\n \"value\": \"100.64.0.0/11\"\n },\n {\n \"name\": \"NODE_NETWORK\",\n \"value\": \"10.250.0.0/16\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"1\",\n \"memory\": \"1000Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"vpn-shoot\",\n \"mountPath\": \"/srv/secrets/vpn-shoot\"\n },\n {\n \"name\": \"vpn-shoot-tlsauth\",\n \"mountPath\": \"/srv/secrets/tlsauth\"\n },\n {\n \"name\": \"vpn-shoot-dh\",\n \"mountPath\": \"/srv/secrets/dh\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n 
\"add\": [\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"vpn-shoot\",\n \"serviceAccount\": \"vpn-shoot\",\n \"automountServiceAccountToken\": false,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": \"25%\",\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 0,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 1,\n \"updatedReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:41Z\",\n \"lastTransitionTime\": \"2020-01-11T15:56:41Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2020-01-11T15:56:41Z\",\n \"lastTransitionTime\": \"2020-01-11T15:55:18Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"vpn-shoot-5d76665b65\\\" has successfully progressed.\"\n }\n ]\n }\n }\n ]\n}\n{\n \"kind\": \"ReplicaSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets\",\n \"resourceVersion\": \"83498\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"addons-kubernetes-dashboard-78954cc66b\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/addons-kubernetes-dashboard-78954cc66b\",\n \"uid\": \"cbf90de6-5a1d-4d2e-b6a5-8c2ea09e8454\",\n \"resourceVersion\": \"960\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"app\": \"kubernetes-dashboard\",\n \"chart\": \"kubernetes-dashboard-0.2.0\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"heritage\": \"Tiller\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"78954cc66b\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"1\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"addons-kubernetes-dashboard\",\n \"uid\": \"a81b67fe-cdd4-4ff6-8740-44c5d772b91d\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"kubernetes-dashboard\",\n \"chart\": \"kubernetes-dashboard-0.2.0\",\n \"heritage\": \"Tiller\",\n \"pod-template-hash\": \"78954cc66b\",\n \"release\": \"addons\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"kubernetes-dashboard\",\n \"chart\": \"kubernetes-dashboard-0.2.0\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"heritage\": \"Tiller\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"78954cc66b\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n 
\"spec\": {\n \"volumes\": [\n {\n \"name\": \"kubernetes-dashboard-certs\",\n \"secret\": {\n \"secretName\": \"kubernetes-dashboard-certs\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"tmp-volume\",\n \"emptyDir\": {}\n }\n ],\n \"containers\": [\n {\n \"name\": \"kubernetes-dashboard\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1\",\n \"args\": [\n \"--auto-generate-certificates\",\n \"--authentication-mode=token\"\n ],\n \"ports\": [\n {\n \"name\": \"https\",\n \"containerPort\": 8443,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"256Mi\"\n },\n \"requests\": {\n \"cpu\": \"50m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"kubernetes-dashboard-certs\",\n \"mountPath\": \"/certs\"\n },\n {\n \"name\": \"tmp-volume\",\n \"mountPath\": \"/tmp\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/\",\n \"port\": 8443,\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 30,\n \"timeoutSeconds\": 30,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"addons-kubernetes-dashboard\",\n \"serviceAccount\": \"addons-kubernetes-dashboard\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\"\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n },\n {\n \"metadata\": {\n \"name\": \"addons-nginx-ingress-controller-7c75bb76db\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/addons-nginx-ingress-controller-7c75bb76db\",\n \"uid\": \"96d45af5-8c2c-4d27-a4ee-80e7b43d48be\",\n \"resourceVersion\": \"1064\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"controller\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"7c75bb76db\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"2\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"addons-nginx-ingress-controller\",\n \"uid\": \"b3c002e8-e19f-46d9-8672-9e6cfca5c1e4\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"controller\",\n \"pod-template-hash\": \"7c75bb76db\",\n \"release\": \"addons\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"controller\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"7c75bb76db\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/config\": 
\"935e3cf465a66f78c2a14ed288dc13cc30649bd0147ef9707ce3da0fd5306c8c\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"nginx-ingress-controller\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0\",\n \"args\": [\n \"/nginx-ingress-controller\",\n \"--default-backend-service=kube-system/addons-nginx-ingress-nginx-ingress-k8s-backend\",\n \"--enable-ssl-passthrough=true\",\n \"--publish-service=kube-system/addons-nginx-ingress-controller\",\n \"--election-id=ingress-controller-leader\",\n \"--ingress-class=nginx\",\n \"--update-status=true\",\n \"--annotations-prefix=nginx.ingress.kubernetes.io\",\n \"--configmap=kube-system/addons-nginx-ingress-controller\"\n ],\n \"ports\": [\n {\n \"name\": \"http\",\n \"containerPort\": 80,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"https\",\n \"containerPort\": 443,\n \"protocol\": \"TCP\"\n }\n ],\n \"env\": [\n {\n \"name\": \"POD_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.name\"\n }\n }\n },\n {\n \"name\": \"POD_NAMESPACE\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"2\",\n \"memory\": \"1Gi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n }\n },\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10254,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10254,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"ALL\"\n ]\n },\n \"runAsUser\": 33\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 60,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"addons-nginx-ingress\",\n \"serviceAccount\": \"addons-nginx-ingress\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n },\n {\n \"metadata\": {\n \"name\": \"addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d\",\n \"uid\": \"51d470b7-1ee4-4bdb-bb32-7926cede460d\",\n \"resourceVersion\": \"935\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"nginx-ingress-k8s-backend\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"95f65778d\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"2\",\n 
\"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"addons-nginx-ingress-nginx-ingress-k8s-backend\",\n \"uid\": \"91890f2c-dbd8-45c5-b2c2-d70899c5106f\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"nginx-ingress-k8s-backend\",\n \"pod-template-hash\": \"95f65778d\",\n \"release\": \"addons\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"nginx-ingress-k8s-backend\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"95f65778d\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"nginx-ingress-nginx-ingress-k8s-backend\",\n \"image\": \"eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0\",\n \"ports\": [\n {\n \"containerPort\": 8080,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {},\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthy\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 30,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 60,\n \"dnsPolicy\": \"ClusterFirst\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n },\n {\n \"metadata\": {\n \"name\": \"blackbox-exporter-54bb5f55cc\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/blackbox-exporter-54bb5f55cc\",\n \"uid\": \"ec0f120e-ef4c-49fc-a018-c9bd4c1e65a5\",\n \"resourceVersion\": \"1002\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"component\": \"blackbox-exporter\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"networking.gardener.cloud/from-seed\": \"allowed\",\n \"networking.gardener.cloud/to-dns\": \"allowed\",\n \"networking.gardener.cloud/to-public-networks\": \"allowed\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"54bb5f55cc\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"2\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"blackbox-exporter\",\n \"uid\": \"6422fbad-4b6a-48a2-8f5e-0c23c1280fc2\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"component\": \"blackbox-exporter\",\n \"pod-template-hash\": \"54bb5f55cc\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n 
\"component\": \"blackbox-exporter\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"networking.gardener.cloud/from-seed\": \"allowed\",\n \"networking.gardener.cloud/to-dns\": \"allowed\",\n \"networking.gardener.cloud/to-public-networks\": \"allowed\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"54bb5f55cc\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-blackbox-exporter-config\": \"837ab259c403ac736c591b304338d575f29e6d82794e00685e39de9b867ddae9\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"blackbox-exporter-config\",\n \"configMap\": {\n \"name\": \"blackbox-exporter-config\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"blackbox-exporter\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0\",\n \"args\": [\n \"--config.file=/etc/blackbox_exporter/blackbox.yaml\"\n ],\n \"ports\": [\n {\n \"name\": \"probe\",\n \"containerPort\": 9115,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"10m\",\n \"memory\": \"35Mi\"\n },\n \"requests\": {\n \"cpu\": \"5m\",\n \"memory\": \"5Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"blackbox-exporter-config\",\n \"mountPath\": \"/etc/blackbox_exporter\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"dnsConfig\": {\n \"options\": [\n {\n \"name\": \"ndots\",\n \"value\": \"3\"\n }\n ]\n }\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-kube-controllers-79bcd784b6\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/calico-kube-controllers-79bcd784b6\",\n \"uid\": \"06bf176b-1a78-4951-85dd-d1e0a0375194\",\n \"resourceVersion\": \"974\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-kube-controllers\",\n \"pod-template-hash\": \"79bcd784b6\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"1\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"calico-kube-controllers\",\n \"uid\": \"6657786b-d4d3-4abc-9504-0373b2231c32\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"calico-kube-controllers\",\n \"pod-template-hash\": \"79bcd784b6\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"name\": \"calico-kube-controllers\",\n \"namespace\": \"kube-system\",\n \"creationTimestamp\": null,\n 
\"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-kube-controllers\",\n \"pod-template-hash\": \"79bcd784b6\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"calico-kube-controllers\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2\",\n \"env\": [\n {\n \"name\": \"ENABLED_CONTROLLERS\",\n \"value\": \"node\"\n },\n {\n \"name\": \"DATASTORE_TYPE\",\n \"value\": \"kubernetes\"\n }\n ],\n \"resources\": {},\n \"readinessProbe\": {\n \"exec\": {\n \"command\": [\n \"/usr/bin/check-status\",\n \"-r\"\n ]\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"ALL\"\n ]\n },\n \"privileged\": true,\n \"allowPrivilegeEscalation\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"calico-kube-controllers\",\n \"serviceAccount\": \"calico-kube-controllers\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha-deploy-9f6b455c4\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/calico-typha-deploy-9f6b455c4\",\n \"uid\": \"0dc78572-7e1f-462f-b6f1-be4c084dc1c9\",\n \"resourceVersion\": \"5069\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T16:01:03Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-typha\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"9f6b455c4\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"cluster-autoscaler.kubernetes.io/safe-to-evict\": \"true\",\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"1\",\n \"deployment.kubernetes.io/revision\": \"3\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"calico-typha-deploy\",\n \"uid\": \"7df27ef2-1d83-41d7-bf68-e62a84e2050d\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"calico-typha\",\n \"pod-template-hash\": \"9f6b455c4\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-typha\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"9f6b455c4\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"calico-typha\",\n \"image\": 
\"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2\",\n \"ports\": [\n {\n \"name\": \"calico-typha\",\n \"hostPort\": 5473,\n \"containerPort\": 5473,\n \"protocol\": \"TCP\"\n }\n ],\n \"env\": [\n {\n \"name\": \"USE_POD_CIDR\",\n \"value\": \"true\"\n },\n {\n \"name\": \"TYPHA_LOGSEVERITYSCREEN\",\n \"value\": \"error\"\n },\n {\n \"name\": \"TYPHA_LOGFILEPATH\",\n \"value\": \"none\"\n },\n {\n \"name\": \"TYPHA_LOGSEVERITYSYS\",\n \"value\": \"none\"\n },\n {\n \"name\": \"TYPHA_CONNECTIONREBALANCINGMODE\",\n \"value\": \"kubernetes\"\n },\n {\n \"name\": \"TYPHA_DATASTORETYPE\",\n \"value\": \"kubernetes\"\n },\n {\n \"name\": \"TYPHA_HEALTHENABLED\",\n \"value\": \"true\"\n }\n ],\n \"resources\": {},\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/liveness\",\n \"port\": 9098,\n \"host\": \"localhost\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 30,\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 30,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/readiness\",\n \"port\": 9098,\n \"host\": \"localhost\",\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"calico-typha\",\n \"serviceAccount\": \"calico-typha\",\n \"hostNetwork\": true,\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha-horizontal-autoscaler-85c99966bb\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/calico-typha-horizontal-autoscaler-85c99966bb\",\n \"uid\": \"b04a0ca0-edf2-4b26-933d-b4d073df4770\",\n \"resourceVersion\": \"1060\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"pod-template-hash\": \"85c99966bb\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"2\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"calico-typha-horizontal-autoscaler\",\n \"uid\": \"d70ec6c2-6538-499f-9c55-2d1195d52da3\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"pod-template-hash\": \"85c99966bb\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"pod-template-hash\": \"85c99966bb\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n 
\"checksum/configmap-calico-typha-horizontal-autoscaler\": \"1a5d7c29390e7895360fe594609b63d25e5c0f738181e178558e481a280cc668\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"name\": \"autoscaler\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1\",\n \"command\": [\n \"/cluster-proportional-autoscaler\",\n \"--namespace=kube-system\",\n \"--configmap=calico-typha-horizontal-autoscaler\",\n \"--target=deployment/calico-typha-deploy\",\n \"--logtostderr=true\",\n \"--v=2\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"10m\"\n },\n \"requests\": {\n \"cpu\": \"10m\"\n }\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"typha-cpha\",\n \"serviceAccount\": \"typha-cpha\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"supplementalGroups\": [\n 65534\n ],\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha-vertical-autoscaler-5769b74b58\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/calico-typha-vertical-autoscaler-5769b74b58\",\n \"uid\": \"be560185-d2f7-4fc9-bfac-8274579be3f5\",\n \"resourceVersion\": \"1430\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"pod-template-hash\": \"5769b74b58\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"2\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"calico-typha-vertical-autoscaler\",\n \"uid\": \"e678b6ca-c63b-457d-b744-d9d746c1872e\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"pod-template-hash\": \"5769b74b58\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"pod-template-hash\": \"5769b74b58\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-calico-typha-vertical-autoscaler\": \"19ab5f175584d9322622fd0316785e275a972fed46f30d9233d4f11c3cf33e91\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config\",\n \"configMap\": {\n \"name\": \"calico-typha-vertical-autoscaler\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"autoscaler\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1\",\n \"command\": [\n \"/cpvpa\",\n \"--target=deployment/calico-typha-deploy\",\n \"--namespace=kube-system\",\n \"--logtostderr=true\",\n \"--poll-period-seconds=30\",\n \"--v=2\",\n \"--config-file=/etc/config/typha-autoscaler\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"config\",\n \"mountPath\": \"/etc/config\"\n }\n ],\n 
\"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"typha-cpva\",\n \"serviceAccount\": \"typha-cpva\",\n \"securityContext\": {\n \"runAsUser\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n },\n {\n \"metadata\": {\n \"name\": \"coredns-59c969ffb8\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/coredns-59c969ffb8\",\n \"uid\": \"6c1e353e-9a6b-454a-a02e-9c3fff1d87c8\",\n \"resourceVersion\": \"1026\",\n \"generation\": 2,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"59c969ffb8\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"2\",\n \"deployment.kubernetes.io/max-replicas\": \"3\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"coredns\",\n \"uid\": \"82c526b8-8ca2-46b5-93a6-c5f80b69ca04\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"59c969ffb8\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"59c969ffb8\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"custom-config-volume\",\n \"configMap\": {\n \"name\": \"coredns-custom\",\n \"defaultMode\": 420,\n \"optional\": true\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns-udp\",\n \"containerPort\": 8053,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 8053,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"50m\",\n \"memory\": \"15Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"custom-config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns/custom\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n 
},\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"runAsNonRoot\": true\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 2,\n \"fullyLabeledReplicas\": 2,\n \"readyReplicas\": 2,\n \"availableReplicas\": 2,\n \"observedGeneration\": 2\n }\n },\n {\n \"metadata\": {\n \"name\": \"metrics-server-7c797fd994\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-7c797fd994\",\n \"uid\": \"7a2f1c80-d0a7-4a83-b4b4-bcded1da8bbc\",\n \"resourceVersion\": \"977\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"metrics-server\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"7c797fd994\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"2\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"metrics-server\",\n \"uid\": \"b17a3423-9a78-49bd-9424-56c313014e69\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"metrics-server\",\n \"pod-template-hash\": \"7c797fd994\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"name\": \"metrics-server\",\n \"creationTimestamp\": null,\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"metrics-server\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"7c797fd994\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/secret-metrics-server\": \"18c8f7329c38af8674a3d83a23d53a96b4277faa82f6ca4a68e362dbe5481736\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"metrics-server\",\n \"secret\": {\n \"secretName\": \"metrics-server\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"metrics-server\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3\",\n \"command\": [\n \"/metrics-server\",\n \"--profiling=false\",\n \"--cert-dir=/home/certdir\",\n \"--secure-port=8443\",\n \"--kubelet-insecure-tls\",\n \"--tls-cert-file=/srv/metrics-server/tls/tls.crt\",\n \"--tls-private-key-file=/srv/metrics-server/tls/tls.key\",\n \"--v=2\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"80m\",\n \"memory\": \"400Mi\"\n },\n \"requests\": {\n \"cpu\": \"20m\",\n \"memory\": 
\"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"metrics-server\",\n \"mountPath\": \"/srv/metrics-server/tls\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"Always\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"metrics-server\",\n \"serviceAccount\": \"metrics-server\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n },\n {\n \"metadata\": {\n \"name\": \"vpn-shoot-5d76665b65\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/vpn-shoot-5d76665b65\",\n \"uid\": \"07a37206-d4cc-45f8-abcf-07a537152cbc\",\n \"resourceVersion\": \"1008\",\n \"generation\": 1,\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"app\": \"vpn-shoot\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"5d76665b65\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"1\",\n \"deployment.kubernetes.io/max-replicas\": \"2\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"vpn-shoot\",\n \"uid\": \"6153232a-f0a2-4b15-bea8-69f498daa093\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 1,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"vpn-shoot\",\n \"pod-template-hash\": \"5d76665b65\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"vpn-shoot\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"5d76665b65\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/secret-vpn-shoot\": \"4c8c1eea7f8805ec1de63090c4a6e5e8059ad270c4d6ec3de163dc88cbdbc62d\",\n \"checksum/secret-vpn-shoot-dh\": \"c4717efbc25c918c6a4f36a8118a448c861497005ab370cecf56c51216b726d3\",\n \"checksum/secret-vpn-shoot-tlsauth\": \"2845871f4fd567c8a83b7f3c08acdb14e85657e335eb1c1ca952880b8bb6ac28\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"vpn-shoot\",\n \"secret\": {\n \"secretName\": \"vpn-shoot\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"vpn-shoot-tlsauth\",\n \"secret\": {\n \"secretName\": \"vpn-shoot-tlsauth\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"vpn-shoot-dh\",\n \"secret\": {\n \"secretName\": \"vpn-shoot-dh\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"vpn-shoot\",\n \"image\": \"eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0\",\n \"env\": [\n {\n \"name\": \"SERVICE_NETWORK\",\n \"value\": \"100.104.0.0/13\"\n },\n {\n \"name\": \"POD_NETWORK\",\n \"value\": \"100.64.0.0/11\"\n },\n {\n \"name\": \"NODE_NETWORK\",\n \"value\": \"10.250.0.0/16\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"1\",\n \"memory\": \"1000Mi\"\n },\n 
\"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"vpn-shoot\",\n \"mountPath\": \"/srv/secrets/vpn-shoot\"\n },\n {\n \"name\": \"vpn-shoot-tlsauth\",\n \"mountPath\": \"/srv/secrets/tlsauth\"\n },\n {\n \"name\": \"vpn-shoot-dh\",\n \"mountPath\": \"/srv/secrets/dh\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"vpn-shoot\",\n \"serviceAccount\": \"vpn-shoot\",\n \"automountServiceAccountToken\": false,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 1,\n \"fullyLabeledReplicas\": 1,\n \"readyReplicas\": 1,\n \"availableReplicas\": 1,\n \"observedGeneration\": 1\n }\n }\n ]\n}\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods\",\n \"resourceVersion\": \"83504\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"addons-kubernetes-dashboard-78954cc66b-69k8m\",\n \"generateName\": \"addons-kubernetes-dashboard-78954cc66b-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/addons-kubernetes-dashboard-78954cc66b-69k8m\",\n \"uid\": \"9a176290-02cb-40fd-9fa9-2dfafa61e279\",\n \"resourceVersion\": \"958\",\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"app\": \"kubernetes-dashboard\",\n \"chart\": \"kubernetes-dashboard-0.2.0\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"heritage\": \"Tiller\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"78954cc66b\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"100.64.0.4/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"addons-kubernetes-dashboard-78954cc66b\",\n \"uid\": \"cbf90de6-5a1d-4d2e-b6a5-8c2ea09e8454\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kubernetes-dashboard-certs\",\n \"secret\": {\n \"secretName\": \"kubernetes-dashboard-certs\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"tmp-volume\",\n \"emptyDir\": {}\n },\n {\n \"name\": \"addons-kubernetes-dashboard-token-n9jkv\",\n \"secret\": {\n \"secretName\": \"addons-kubernetes-dashboard-token-n9jkv\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kubernetes-dashboard\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1\",\n \"args\": [\n \"--auto-generate-certificates\",\n \"--authentication-mode=token\"\n ],\n \"ports\": [\n {\n \"name\": \"https\",\n \"containerPort\": 8443,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"256Mi\"\n },\n \"requests\": {\n \"cpu\": \"50m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": 
\"kubernetes-dashboard-certs\",\n \"mountPath\": \"/certs\"\n },\n {\n \"name\": \"tmp-volume\",\n \"mountPath\": \"/tmp\"\n },\n {\n \"name\": \"addons-kubernetes-dashboard-token-n9jkv\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/\",\n \"port\": 8443,\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 30,\n \"timeoutSeconds\": 30,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"addons-kubernetes-dashboard\",\n \"serviceAccount\": \"addons-kubernetes-dashboard\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:32Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:32Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.4\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:08Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kubernetes-dashboard\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:31Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93\",\n \"containerID\": \"docker://b05ccbc412cc06e0073b0cd512b2e303f97089b4f074c8c6467d33aef55600f5\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"addons-nginx-ingress-controller-7c75bb76db-cd9r9\",\n \"generateName\": \"addons-nginx-ingress-controller-7c75bb76db-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/addons-nginx-ingress-controller-7c75bb76db-cd9r9\",\n \"uid\": \"dadf6972-ee19-49b7-8644-a47b9f8576b7\",\n \"resourceVersion\": \"1063\",\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"controller\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"7c75bb76db\",\n \"release\": \"addons\",\n 
\"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/config\": \"935e3cf465a66f78c2a14ed288dc13cc30649bd0147ef9707ce3da0fd5306c8c\",\n \"cni.projectcalico.org/podIP\": \"100.64.0.12/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"addons-nginx-ingress-controller-7c75bb76db\",\n \"uid\": \"96d45af5-8c2c-4d27-a4ee-80e7b43d48be\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"addons-nginx-ingress-token-lwbgg\",\n \"secret\": {\n \"secretName\": \"addons-nginx-ingress-token-lwbgg\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"nginx-ingress-controller\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0\",\n \"args\": [\n \"/nginx-ingress-controller\",\n \"--default-backend-service=kube-system/addons-nginx-ingress-nginx-ingress-k8s-backend\",\n \"--enable-ssl-passthrough=true\",\n \"--publish-service=kube-system/addons-nginx-ingress-controller\",\n \"--election-id=ingress-controller-leader\",\n \"--ingress-class=nginx\",\n \"--update-status=true\",\n \"--annotations-prefix=nginx.ingress.kubernetes.io\",\n \"--configmap=kube-system/addons-nginx-ingress-controller\"\n ],\n \"ports\": [\n {\n \"name\": \"http\",\n \"containerPort\": 80,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"https\",\n \"containerPort\": 443,\n \"protocol\": \"TCP\"\n }\n ],\n \"env\": [\n {\n \"name\": \"POD_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.name\"\n }\n }\n },\n {\n \"name\": \"POD_NAMESPACE\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"2\",\n \"memory\": \"1Gi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"addons-nginx-ingress-token-lwbgg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10254,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10254,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"ALL\"\n ]\n },\n \"runAsUser\": 33\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 60,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"addons-nginx-ingress\",\n \"serviceAccount\": \"addons-nginx-ingress\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": 
\"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:13Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:57:02Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:57:02Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:13Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.12\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.12\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:13Z\",\n \"containerStatuses\": [\n {\n \"name\": \"nginx-ingress-controller\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:57Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4\",\n \"containerID\": \"docker://a7f6e543392dfc5469cba97f08e626de06720c6b1dc8bd46d4492b74d0576d67\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d\",\n \"generateName\": \"addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d\",\n \"uid\": \"b39937e5-66f3-4f3b-a4de-b85f1b8b8e44\",\n \"resourceVersion\": \"933\",\n \"creationTimestamp\": \"2020-01-11T15:55:19Z\",\n \"labels\": {\n \"app\": \"nginx-ingress\",\n \"component\": \"nginx-ingress-k8s-backend\",\n \"garden.sapcloud.io/role\": \"optional-addon\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"95f65778d\",\n \"release\": \"addons\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"100.64.0.2/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d\",\n \"uid\": \"51d470b7-1ee4-4bdb-bb32-7926cede460d\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"default-token-2dtqk\",\n \"secret\": {\n \"secretName\": \"default-token-2dtqk\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"nginx-ingress-nginx-ingress-k8s-backend\",\n \"image\": \"eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0\",\n \"ports\": [\n {\n \"containerPort\": 8080,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"default-token-2dtqk\",\n \"readOnly\": true,\n \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthy\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 30,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 60,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:24Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:24Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.2\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:08Z\",\n \"containerStatuses\": [\n {\n \"name\": \"nginx-ingress-nginx-ingress-k8s-backend\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:24Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45\",\n \"containerID\": \"docker://22b811d9774b63d7197d845da4ccb617c1a1fbe68cc6304de7cc3a66307a1d8c\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"blackbox-exporter-54bb5f55cc-452fk\",\n \"generateName\": \"blackbox-exporter-54bb5f55cc-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/blackbox-exporter-54bb5f55cc-452fk\",\n \"uid\": \"cc645668-c667-4c27-b56f-d4994b23bc4b\",\n \"resourceVersion\": \"1000\",\n \"creationTimestamp\": \"2020-01-11T15:55:19Z\",\n \"labels\": {\n \"component\": \"blackbox-exporter\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"networking.gardener.cloud/from-seed\": \"allowed\",\n \"networking.gardener.cloud/to-dns\": \"allowed\",\n \"networking.gardener.cloud/to-public-networks\": \"allowed\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"54bb5f55cc\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-blackbox-exporter-config\": 
\"837ab259c403ac736c591b304338d575f29e6d82794e00685e39de9b867ddae9\",\n \"cni.projectcalico.org/podIP\": \"100.64.0.8/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"blackbox-exporter-54bb5f55cc\",\n \"uid\": \"ec0f120e-ef4c-49fc-a018-c9bd4c1e65a5\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"blackbox-exporter-config\",\n \"configMap\": {\n \"name\": \"blackbox-exporter-config\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"default-token-2dtqk\",\n \"secret\": {\n \"secretName\": \"default-token-2dtqk\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"blackbox-exporter\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0\",\n \"args\": [\n \"--config.file=/etc/blackbox_exporter/blackbox.yaml\"\n ],\n \"ports\": [\n {\n \"name\": \"probe\",\n \"containerPort\": 9115,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"10m\",\n \"memory\": \"35Mi\"\n },\n \"requests\": {\n \"cpu\": \"5m\",\n \"memory\": \"5Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"blackbox-exporter-config\",\n \"mountPath\": \"/etc/blackbox_exporter\"\n },\n {\n \"name\": \"default-token-2dtqk\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"dnsConfig\": {\n \"options\": [\n {\n \"name\": \"ndots\",\n \"value\": \"3\"\n }\n ]\n },\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:40Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:40Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.8\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.8\"\n }\n ],\n \"startTime\": \"2020-01-11T15:55:58Z\",\n \"containerStatuses\": [\n {\n \"name\": \"blackbox-exporter\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:39Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0\",\n \"imageID\": 
\"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669\",\n \"containerID\": \"docker://23bf7dd0231556b20b21a79ee8117fd6a051ec4e7649b37a06b234a61586e6ac\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-kube-controllers-79bcd784b6-c46r9\",\n \"generateName\": \"calico-kube-controllers-79bcd784b6-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/calico-kube-controllers-79bcd784b6-c46r9\",\n \"uid\": \"43de3050-5210-4724-9344-779621f5e8b0\",\n \"resourceVersion\": \"36639\",\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-kube-controllers\",\n \"pod-template-hash\": \"79bcd784b6\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"100.64.0.5/32\",\n \"kubernetes.io/psp\": \"gardener.kube-system.calico-kube-controllers\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"calico-kube-controllers-79bcd784b6\",\n \"uid\": \"06bf176b-1a78-4951-85dd-d1e0a0375194\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"calico-kube-controllers-token-dsjs5\",\n \"secret\": {\n \"secretName\": \"calico-kube-controllers-token-dsjs5\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"calico-kube-controllers\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2\",\n \"env\": [\n {\n \"name\": \"ENABLED_CONTROLLERS\",\n \"value\": \"node\"\n },\n {\n \"name\": \"DATASTORE_TYPE\",\n \"value\": \"kubernetes\"\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"calico-kube-controllers-token-dsjs5\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"readinessProbe\": {\n \"exec\": {\n \"command\": [\n \"/usr/bin/check-status\",\n \"-r\"\n ]\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"ALL\"\n ]\n },\n \"privileged\": true,\n \"allowPrivilegeEscalation\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"calico-kube-controllers\",\n \"serviceAccount\": \"calico-kube-controllers\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n 
\"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T19:01:55Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T19:01:55Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.5\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.5\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:08Z\",\n \"containerStatuses\": [\n {\n \"name\": \"calico-kube-controllers\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:33Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290\",\n \"containerID\": \"docker://b61073aae12f61cffdb468edbf2e348ceedf5582a30bfc1b3f5d5d1eda706b35\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-node-dl8nk\",\n \"generateName\": \"calico-node-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/calico-node-dl8nk\",\n \"uid\": \"48ee3577-2045-4919-ae4e-4f82b5633b0c\",\n \"resourceVersion\": \"965\",\n \"creationTimestamp\": \"2020-01-11T15:55:58Z\",\n \"labels\": {\n \"controller-revision-hash\": \"d57f46ddd\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-node\",\n \"origin\": \"gardener\",\n \"pod-template-generation\": \"1\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-calico\": \"3bd46cb7beef613e0b3225b3776526289b7ba8abd2ae8dad55b1451c9465ae06\",\n \"kubernetes.io/psp\": \"gardener.kube-system.calico\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"calico-node\",\n \"uid\": \"91e96e4f-caf1-4864-8705-46792ded2aad\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"var-run-calico\",\n \"hostPath\": {\n \"path\": \"/var/run/calico\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"var-lib-calico\",\n \"hostPath\": {\n \"path\": \"/var/lib/calico\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"cni-bin-dir\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cni-net-dir\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"policysync\",\n \"hostPath\": {\n \"path\": \"/var/run/nodeagent\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"flexvol-driver-host\",\n \"hostPath\": {\n \"path\": \"/var/lib/kubelet/volumeplugins/nodeagent~uds\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": 
\"calico-node-token-nbfh2\",\n \"secret\": {\n \"secretName\": \"calico-node-token-nbfh2\",\n \"defaultMode\": 420\n }\n }\n ],\n \"initContainers\": [\n {\n \"name\": \"install-cni\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1\",\n \"command\": [\n \"/install-cni.sh\"\n ],\n \"env\": [\n {\n \"name\": \"CNI_CONF_NAME\",\n \"value\": \"10-calico.conflist\"\n },\n {\n \"name\": \"CNI_NETWORK_CONFIG\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"cni_network_config\"\n }\n }\n },\n {\n \"name\": \"KUBERNETES_NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n },\n {\n \"name\": \"CNI_MTU\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"veth_mtu\"\n }\n }\n },\n {\n \"name\": \"SLEEP\",\n \"value\": \"false\"\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"cni-bin-dir\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": \"cni-net-dir\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"calico-node-token-nbfh2\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n },\n {\n \"name\": \"flexvol-driver\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2\",\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"flexvol-driver-host\",\n \"mountPath\": \"/host/driver\"\n },\n {\n \"name\": \"calico-node-token-nbfh2\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"containers\": [\n {\n \"name\": \"calico-node\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1\",\n \"env\": [\n {\n \"name\": \"USE_POD_CIDR\",\n \"value\": \"true\"\n },\n {\n \"name\": \"DATASTORE_TYPE\",\n \"value\": \"kubernetes\"\n },\n {\n \"name\": \"FELIX_TYPHAK8SSERVICENAME\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"typha_service_name\"\n }\n }\n },\n {\n \"name\": \"FELIX_LOGSEVERITYSCREEN\",\n \"value\": \"error\"\n },\n {\n \"name\": \"CLUSTER_TYPE\",\n \"value\": \"k8s,bgp\"\n },\n {\n \"name\": \"CALICO_DISABLE_FILE_LOGGING\",\n \"value\": \"true\"\n },\n {\n \"name\": \"FELIX_DEFAULTENDPOINTTOHOSTACTION\",\n \"value\": \"ACCEPT\"\n },\n {\n \"name\": \"IP\",\n \"value\": \"autodetect\"\n },\n {\n \"name\": \"FELIX_IPV6SUPPORT\",\n \"value\": \"false\"\n },\n {\n \"name\": \"FELIX_IPINIPMTU\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"veth_mtu\"\n }\n }\n },\n {\n \"name\": \"WAIT_FOR_DATASTORE\",\n \"value\": \"true\"\n },\n {\n \"name\": \"CALICO_IPV4POOL_CIDR\",\n \"value\": \"100.64.0.0/11\"\n },\n {\n \"name\": \"FELIX_IPINIPENABLED\",\n \"value\": \"true\"\n },\n {\n \"name\": \"CALICO_IPV4POOL_IPIP\",\n \"value\": \"Always\"\n },\n {\n \"name\": \"CALICO_NETWORKING_BACKEND\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"calico_backend\"\n }\n }\n },\n {\n \"name\": \"NODENAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n },\n {\n 
\"name\": \"FELIX_HEALTHENABLED\",\n \"value\": \"true\"\n },\n {\n \"name\": \"FELIX_NATPORTRANGE\",\n \"value\": \"32768:65535\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"500m\",\n \"memory\": \"700Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"var-run-calico\",\n \"mountPath\": \"/var/run/calico\"\n },\n {\n \"name\": \"var-lib-calico\",\n \"mountPath\": \"/var/lib/calico\"\n },\n {\n \"name\": \"policysync\",\n \"mountPath\": \"/var/run/nodeagent\"\n },\n {\n \"name\": \"calico-node-token-nbfh2\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/liveness\",\n \"port\": 9099,\n \"host\": \"localhost\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 6\n },\n \"readinessProbe\": {\n \"exec\": {\n \"command\": [\n \"/bin/calico-node\",\n \"-felix-ready\",\n \"-bird-ready\"\n ]\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 0,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"calico-node\",\n \"serviceAccount\": \"calico-node\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"ip-10-250-7-77.ec2.internal\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:07Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:33Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:33Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"10.250.7.77\",\n \"podIPs\": [\n {\n \"ip\": \"10.250.7.77\"\n }\n ],\n \"startTime\": \"2020-01-11T15:55:58Z\",\n \"initContainerStatuses\": [\n {\n \"name\": \"install-cni\",\n \"state\": {\n \"terminated\": {\n \"exitCode\": 0,\n \"reason\": \"Completed\",\n \"startedAt\": \"2020-01-11T15:56:04Z\",\n \"finishedAt\": \"2020-01-11T15:56:04Z\",\n \"containerID\": \"docker://8a66957f91bf9990b18d8d1e4b46ba7abc897d158f72323b907dc4f66c196451\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d\",\n \"containerID\": \"docker://8a66957f91bf9990b18d8d1e4b46ba7abc897d158f72323b907dc4f66c196451\"\n },\n {\n \"name\": \"flexvol-driver\",\n \"state\": {\n \"terminated\": {\n \"exitCode\": 0,\n \"reason\": \"Completed\",\n \"startedAt\": \"2020-01-11T15:56:07Z\",\n \"finishedAt\": \"2020-01-11T15:56:07Z\",\n \"containerID\": \"docker://7f1422a92414196ddef7da0e3f40d44429c6962f8b095a258594614502b0bb09\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8\",\n \"containerID\": \"docker://7f1422a92414196ddef7da0e3f40d44429c6962f8b095a258594614502b0bb09\"\n }\n ],\n \"containerStatuses\": [\n {\n \"name\": \"calico-node\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:16Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e\",\n \"containerID\": \"docker://b100d259b464b2704bf3bbad28bcd621ffcb5fa68d82b25e54a1af98bb761478\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-node-m8r2d\",\n \"generateName\": \"calico-node-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/calico-node-m8r2d\",\n \"uid\": \"07051280-5955-4674-a80a-990dfe210b2c\",\n \"resourceVersion\": \"956\",\n \"creationTimestamp\": \"2020-01-11T15:56:03Z\",\n \"labels\": {\n \"controller-revision-hash\": \"d57f46ddd\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-node\",\n \"origin\": \"gardener\",\n \"pod-template-generation\": \"1\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-calico\": \"3bd46cb7beef613e0b3225b3776526289b7ba8abd2ae8dad55b1451c9465ae06\",\n \"kubernetes.io/psp\": 
\"gardener.kube-system.calico\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"calico-node\",\n \"uid\": \"91e96e4f-caf1-4864-8705-46792ded2aad\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"var-run-calico\",\n \"hostPath\": {\n \"path\": \"/var/run/calico\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"var-lib-calico\",\n \"hostPath\": {\n \"path\": \"/var/lib/calico\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"cni-bin-dir\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cni-net-dir\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"policysync\",\n \"hostPath\": {\n \"path\": \"/var/run/nodeagent\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"flexvol-driver-host\",\n \"hostPath\": {\n \"path\": \"/var/lib/kubelet/volumeplugins/nodeagent~uds\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"calico-node-token-nbfh2\",\n \"secret\": {\n \"secretName\": \"calico-node-token-nbfh2\",\n \"defaultMode\": 420\n }\n }\n ],\n \"initContainers\": [\n {\n \"name\": \"install-cni\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1\",\n \"command\": [\n \"/install-cni.sh\"\n ],\n \"env\": [\n {\n \"name\": \"CNI_CONF_NAME\",\n \"value\": \"10-calico.conflist\"\n },\n {\n \"name\": \"CNI_NETWORK_CONFIG\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"cni_network_config\"\n }\n }\n },\n {\n \"name\": \"KUBERNETES_NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n },\n {\n \"name\": \"CNI_MTU\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"veth_mtu\"\n }\n }\n },\n {\n \"name\": \"SLEEP\",\n \"value\": \"false\"\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"cni-bin-dir\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": \"cni-net-dir\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"calico-node-token-nbfh2\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n },\n {\n \"name\": \"flexvol-driver\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2\",\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"flexvol-driver-host\",\n \"mountPath\": \"/host/driver\"\n },\n {\n \"name\": \"calico-node-token-nbfh2\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"containers\": [\n {\n \"name\": \"calico-node\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1\",\n \"env\": [\n {\n \"name\": \"USE_POD_CIDR\",\n \"value\": \"true\"\n },\n {\n \"name\": \"DATASTORE_TYPE\",\n \"value\": 
\"kubernetes\"\n },\n {\n \"name\": \"FELIX_TYPHAK8SSERVICENAME\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"typha_service_name\"\n }\n }\n },\n {\n \"name\": \"FELIX_LOGSEVERITYSCREEN\",\n \"value\": \"error\"\n },\n {\n \"name\": \"CLUSTER_TYPE\",\n \"value\": \"k8s,bgp\"\n },\n {\n \"name\": \"CALICO_DISABLE_FILE_LOGGING\",\n \"value\": \"true\"\n },\n {\n \"name\": \"FELIX_DEFAULTENDPOINTTOHOSTACTION\",\n \"value\": \"ACCEPT\"\n },\n {\n \"name\": \"IP\",\n \"value\": \"autodetect\"\n },\n {\n \"name\": \"FELIX_IPV6SUPPORT\",\n \"value\": \"false\"\n },\n {\n \"name\": \"FELIX_IPINIPMTU\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"veth_mtu\"\n }\n }\n },\n {\n \"name\": \"WAIT_FOR_DATASTORE\",\n \"value\": \"true\"\n },\n {\n \"name\": \"CALICO_IPV4POOL_CIDR\",\n \"value\": \"100.64.0.0/11\"\n },\n {\n \"name\": \"FELIX_IPINIPENABLED\",\n \"value\": \"true\"\n },\n {\n \"name\": \"CALICO_IPV4POOL_IPIP\",\n \"value\": \"Always\"\n },\n {\n \"name\": \"CALICO_NETWORKING_BACKEND\",\n \"valueFrom\": {\n \"configMapKeyRef\": {\n \"name\": \"calico-config\",\n \"key\": \"calico_backend\"\n }\n }\n },\n {\n \"name\": \"NODENAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n },\n {\n \"name\": \"FELIX_HEALTHENABLED\",\n \"value\": \"true\"\n },\n {\n \"name\": \"FELIX_NATPORTRANGE\",\n \"value\": \"32768:65535\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"500m\",\n \"memory\": \"700Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"var-run-calico\",\n \"mountPath\": \"/var/run/calico\"\n },\n {\n \"name\": \"var-lib-calico\",\n \"mountPath\": \"/var/lib/calico\"\n },\n {\n \"name\": \"policysync\",\n \"mountPath\": \"/var/run/nodeagent\"\n },\n {\n \"name\": \"calico-node-token-nbfh2\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/liveness\",\n \"port\": 9099,\n \"host\": \"localhost\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 6\n },\n \"readinessProbe\": {\n \"exec\": {\n \"command\": [\n \"/bin/calico-node\",\n \"-felix-ready\",\n \"-bird-ready\"\n ]\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 0,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"calico-node\",\n \"serviceAccount\": \"calico-node\",\n \"nodeName\": \"ip-10-250-27-25.ec2.internal\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"ip-10-250-27-25.ec2.internal\"\n 
]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:13Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:32Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:32Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:03Z\"\n }\n ],\n \"hostIP\": \"10.250.27.25\",\n \"podIP\": \"10.250.27.25\",\n \"podIPs\": [\n {\n \"ip\": \"10.250.27.25\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:04Z\",\n \"initContainerStatuses\": [\n {\n \"name\": \"install-cni\",\n \"state\": {\n \"terminated\": {\n \"exitCode\": 0,\n \"reason\": \"Completed\",\n \"startedAt\": \"2020-01-11T15:56:10Z\",\n \"finishedAt\": \"2020-01-11T15:56:10Z\",\n \"containerID\": \"docker://099b2bdfb4fab33c1650bdc91038876ca8c7371da7152e23e32004baa7fadcce\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d\",\n \"containerID\": \"docker://099b2bdfb4fab33c1650bdc91038876ca8c7371da7152e23e32004baa7fadcce\"\n },\n {\n \"name\": \"flexvol-driver\",\n \"state\": {\n \"terminated\": {\n \"exitCode\": 0,\n \"reason\": \"Completed\",\n \"startedAt\": \"2020-01-11T15:56:13Z\",\n \"finishedAt\": \"2020-01-11T15:56:13Z\",\n \"containerID\": \"docker://3a56edaa4f13aaa8f46e5e4d334a8e3eb8bfe405c2d0c21f90fbe137277745fa\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8\",\n \"containerID\": \"docker://3a56edaa4f13aaa8f46e5e4d334a8e3eb8bfe405c2d0c21f90fbe137277745fa\"\n }\n ],\n \"containerStatuses\": [\n {\n \"name\": 
\"calico-node\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:18Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e\",\n \"containerID\": \"docker://ead4d722c2ad5ea4862b03e91a660e3b6cb8f1ddcb57eac9ceef13a770e7bde8\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha-deploy-9f6b455c4-vdrzx\",\n \"generateName\": \"calico-typha-deploy-9f6b455c4-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/calico-typha-deploy-9f6b455c4-vdrzx\",\n \"uid\": \"642aba6d-35ff-4057-bd34-a693d6f40cce\",\n \"resourceVersion\": \"5067\",\n \"creationTimestamp\": \"2020-01-11T16:21:07Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"calico-typha\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"9f6b455c4\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"kubernetes.io/psp\": \"gardener.privileged\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"calico-typha-deploy-9f6b455c4\",\n \"uid\": \"0dc78572-7e1f-462f-b6f1-be4c084dc1c9\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"calico-typha-token-cf4bv\",\n \"secret\": {\n \"secretName\": \"calico-typha-token-cf4bv\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"calico-typha\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2\",\n \"ports\": [\n {\n \"name\": \"calico-typha\",\n \"hostPort\": 5473,\n \"containerPort\": 5473,\n \"protocol\": \"TCP\"\n }\n ],\n \"env\": [\n {\n \"name\": \"USE_POD_CIDR\",\n \"value\": \"true\"\n },\n {\n \"name\": \"TYPHA_LOGSEVERITYSCREEN\",\n \"value\": \"error\"\n },\n {\n \"name\": \"TYPHA_LOGFILEPATH\",\n \"value\": \"none\"\n },\n {\n \"name\": \"TYPHA_LOGSEVERITYSYS\",\n \"value\": \"none\"\n },\n {\n \"name\": \"TYPHA_CONNECTIONREBALANCINGMODE\",\n \"value\": \"kubernetes\"\n },\n {\n \"name\": \"TYPHA_DATASTORETYPE\",\n \"value\": \"kubernetes\"\n },\n {\n \"name\": \"TYPHA_HEALTHENABLED\",\n \"value\": \"true\"\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"calico-typha-token-cf4bv\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/liveness\",\n \"port\": 9098,\n \"host\": \"localhost\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 30,\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 30,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/readiness\",\n \"port\": 9098,\n \"host\": \"localhost\",\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"beta.kubernetes.io/os\": 
\"linux\"\n },\n \"serviceAccountName\": \"calico-typha\",\n \"serviceAccount\": \"calico-typha\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"hostNetwork\": true,\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T16:21:07Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T16:21:15Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T16:21:15Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T16:21:07Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"10.250.7.77\",\n \"podIPs\": [\n {\n \"ip\": \"10.250.7.77\"\n }\n ],\n \"startTime\": \"2020-01-11T16:21:07Z\",\n \"containerStatuses\": [\n {\n \"name\": \"calico-typha\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T16:21:07Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c\",\n \"containerID\": \"docker://3d17750708badc2c7d02429744b1c9d16b8f97c82b5df97dcd6b6349deb5ec0c\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha-horizontal-autoscaler-85c99966bb-6j6rp\",\n \"generateName\": \"calico-typha-horizontal-autoscaler-85c99966bb-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/calico-typha-horizontal-autoscaler-85c99966bb-6j6rp\",\n \"uid\": \"bbdd8961-5d73-4dbb-ab3e-d553c5871fee\",\n \"resourceVersion\": \"1059\",\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"pod-template-hash\": \"85c99966bb\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-calico-typha-horizontal-autoscaler\": \"1a5d7c29390e7895360fe594609b63d25e5c0f738181e178558e481a280cc668\",\n \"cni.projectcalico.org/podIP\": \"100.64.0.13/32\",\n \"kubernetes.io/psp\": \"gardener.kube-system.typha-cpa\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"calico-typha-horizontal-autoscaler-85c99966bb\",\n \"uid\": \"b04a0ca0-edf2-4b26-933d-b4d073df4770\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"typha-cpha-token-tpktc\",\n \"secret\": {\n \"secretName\": \"typha-cpha-token-tpktc\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"autoscaler\",\n 
\"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1\",\n \"command\": [\n \"/cluster-proportional-autoscaler\",\n \"--namespace=kube-system\",\n \"--configmap=calico-typha-horizontal-autoscaler\",\n \"--target=deployment/calico-typha-deploy\",\n \"--logtostderr=true\",\n \"--v=2\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"10m\"\n },\n \"requests\": {\n \"cpu\": \"10m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"typha-cpha-token-tpktc\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"typha-cpha\",\n \"serviceAccount\": \"typha-cpha\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"supplementalGroups\": [\n 65534\n ],\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:57:01Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:57:01Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.13\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.13\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:08Z\",\n \"containerStatuses\": [\n {\n \"name\": \"autoscaler\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:57:00Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42\",\n \"containerID\": \"docker://3b7a1c32e8f731f538d66c0674fd69ac8280f3e9f4b2fdebaf6e6e2f8fd397e8\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"calico-typha-vertical-autoscaler-5769b74b58-r8t6r\",\n \"generateName\": \"calico-typha-vertical-autoscaler-5769b74b58-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/calico-typha-vertical-autoscaler-5769b74b58-r8t6r\",\n \"uid\": \"31621ad3-c3b3-44c2-82a6-15fae21bacc8\",\n \"resourceVersion\": \"1429\",\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"k8s-app\": \"calico-typha-autoscaler\",\n \"pod-template-hash\": 
\"5769b74b58\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-calico-typha-vertical-autoscaler\": \"19ab5f175584d9322622fd0316785e275a972fed46f30d9233d4f11c3cf33e91\",\n \"cni.projectcalico.org/podIP\": \"100.64.0.10/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"calico-typha-vertical-autoscaler-5769b74b58\",\n \"uid\": \"be560185-d2f7-4fc9-bfac-8274579be3f5\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config\",\n \"configMap\": {\n \"name\": \"calico-typha-vertical-autoscaler\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"typha-cpva-token-4sd27\",\n \"secret\": {\n \"secretName\": \"typha-cpva-token-4sd27\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"autoscaler\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1\",\n \"command\": [\n \"/cpvpa\",\n \"--target=deployment/calico-typha-deploy\",\n \"--namespace=kube-system\",\n \"--logtostderr=true\",\n \"--poll-period-seconds=30\",\n \"--v=2\",\n \"--config-file=/etc/config/typha-autoscaler\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"config\",\n \"mountPath\": \"/etc/config\"\n },\n {\n \"name\": \"typha-cpva-token-4sd27\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"typha-cpva\",\n \"serviceAccount\": \"typha-cpva\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {\n \"runAsUser\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:13Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:59:49Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:59:49Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:13Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.10\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.10\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:13Z\",\n \"containerStatuses\": [\n {\n \"name\": \"autoscaler\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:59:49Z\"\n }\n },\n \"lastState\": {\n \"terminated\": {\n \"exitCode\": 1,\n \"reason\": \"Error\",\n \"startedAt\": \"2020-01-11T15:58:16Z\",\n \"finishedAt\": \"2020-01-11T15:58:16Z\",\n \"containerID\": 
\"docker://7967a477ce96dad4d958bcc2e52bdb2a4e184e62b465973787067dd887bcb7c5\"\n }\n },\n \"ready\": true,\n \"restartCount\": 5,\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d\",\n \"containerID\": \"docker://625d358bc989046ee2ef40fccc47cd9c3d4c01ed43c394af272cfabeab426513\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"coredns-59c969ffb8-57m7v\",\n \"generateName\": \"coredns-59c969ffb8-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-59c969ffb8-57m7v\",\n \"uid\": \"88e1bd7a-99ad-42b6-a2f2-61991d3aae30\",\n \"resourceVersion\": \"1024\",\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"59c969ffb8\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"100.64.0.9/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"coredns-59c969ffb8\",\n \"uid\": \"6c1e353e-9a6b-454a-a02e-9c3fff1d87c8\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"custom-config-volume\",\n \"configMap\": {\n \"name\": \"coredns-custom\",\n \"defaultMode\": 420,\n \"optional\": true\n }\n },\n {\n \"name\": \"coredns-token-7krz2\",\n \"secret\": {\n \"secretName\": \"coredns-token-7krz2\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns-udp\",\n \"containerPort\": 8053,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 8053,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"50m\",\n \"memory\": \"15Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"custom-config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns/custom\"\n },\n {\n \"name\": \"coredns-token-7krz2\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n 
\"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"runAsNonRoot\": true\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:11Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:48Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:48Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:11Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.9\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.9\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:11Z\",\n \"containerStatuses\": [\n {\n \"name\": \"coredns\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:37Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531\",\n \"containerID\": \"docker://7c03f641570b878a75c4f16741ca1d22ba4dfec766a7c4b3769598356c385949\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"coredns-59c969ffb8-fqq79\",\n \"generateName\": \"coredns-59c969ffb8-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-59c969ffb8-fqq79\",\n \"uid\": \"c971f33b-c375-4fe0-92d5-989fccc9d7c5\",\n \"resourceVersion\": \"989\",\n \"creationTimestamp\": \"2020-01-11T15:55:48Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"59c969ffb8\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"cni.projectcalico.org/podIP\": \"100.64.0.7/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"coredns-59c969ffb8\",\n \"uid\": \"6c1e353e-9a6b-454a-a02e-9c3fff1d87c8\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": 
\"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"custom-config-volume\",\n \"configMap\": {\n \"name\": \"coredns-custom\",\n \"defaultMode\": 420,\n \"optional\": true\n }\n },\n {\n \"name\": \"coredns-token-7krz2\",\n \"secret\": {\n \"secretName\": \"coredns-token-7krz2\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns-udp\",\n \"containerPort\": 8053,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 8053,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"50m\",\n \"memory\": \"15Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"custom-config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns/custom\"\n },\n {\n \"name\": \"coredns-token-7krz2\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"runAsNonRoot\": true\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:38Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:38Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.7\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.7\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:08Z\",\n \"containerStatuses\": [\n {\n \"name\": \"coredns\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:36Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531\",\n \"containerID\": \"docker://2cc536069ef9ef5064d44fcaecba3948313d302dc10f97083f2af12614b5709b\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-nn5px\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-nn5px\",\n \"uid\": \"2fa77d2f-fb67-48d3-a194-96dddc594da4\",\n \"resourceVersion\": \"777\",\n \"creationTimestamp\": \"2020-01-11T15:55:58Z\",\n \"labels\": {\n \"app\": \"kubernetes\",\n \"controller-revision-hash\": \"594fb5cfd5\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"pod-template-generation\": \"1\",\n \"role\": \"proxy\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-componentconfig\": \"af9cca28054c46807a00143b1fe1cdb407f602386417797662022e8c2aea3637\",\n \"checksum/secret-kube-proxy\": \"b2444368a402b867fc3e94db0dd516877e7ff1d724e094d83d9c3ca5b6822b3c\",\n \"kubernetes.io/psp\": \"gardener.kube-system.kube-proxy\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"c35c6d75-67ca-4cb1-bf4d-469bd2412bbb\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kubeconfig\",\n \"secret\": {\n \"secretName\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"kube-proxy-config\",\n \"configMap\": {\n \"name\": \"kube-proxy-config\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"ssl-certs-hosts\",\n \"hostPath\": {\n \"path\": \"/usr/share/ca-certificates\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"systembussocket\",\n \"hostPath\": {\n \"path\": \"/var/run/dbus/system_bus_socket\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kernel-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4\",\n \"command\": [\n \"/hyperkube\",\n \"kube-proxy\",\n \"--config=/var/lib/kube-proxy-config/config.yaml\",\n \"--v=2\"\n ],\n \"ports\": [\n {\n \"name\": \"metrics\",\n \"hostPort\": 10249,\n \"containerPort\": 10249,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"20m\",\n \"memory\": \"64Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"kubeconfig\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"kube-proxy-config\",\n \"mountPath\": \"/var/lib/kube-proxy-config\"\n },\n {\n \"name\": \"ssl-certs-hosts\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ssl/certs\"\n },\n {\n \"name\": \"systembussocket\",\n \"mountPath\": \"/var/run/dbus/system_bus_socket\"\n },\n 
{\n \"name\": \"kernel-modules\",\n \"mountPath\": \"/lib/modules\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"automountServiceAccountToken\": false,\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"ip-10-250-7-77.ec2.internal\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:00Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:00Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"10.250.7.77\",\n \"podIPs\": [\n {\n \"ip\": \"10.250.7.77\"\n }\n ],\n \"startTime\": \"2020-01-11T15:55:58Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:55:59Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102\",\n \"containerID\": \"docker://b22dc74fa6beac05e08734dada751ece967f59c3ae18d80c5856798532269987\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-rq4kf\",\n \"generateName\": 
\"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-rq4kf\",\n \"uid\": \"5fced6c3-6024-4686-9fd0-4ad73ada7dd7\",\n \"resourceVersion\": \"828\",\n \"creationTimestamp\": \"2020-01-11T15:56:03Z\",\n \"labels\": {\n \"app\": \"kubernetes\",\n \"controller-revision-hash\": \"594fb5cfd5\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"pod-template-generation\": \"1\",\n \"role\": \"proxy\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/configmap-componentconfig\": \"af9cca28054c46807a00143b1fe1cdb407f602386417797662022e8c2aea3637\",\n \"checksum/secret-kube-proxy\": \"b2444368a402b867fc3e94db0dd516877e7ff1d724e094d83d9c3ca5b6822b3c\",\n \"kubernetes.io/psp\": \"gardener.kube-system.kube-proxy\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"c35c6d75-67ca-4cb1-bf4d-469bd2412bbb\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kubeconfig\",\n \"secret\": {\n \"secretName\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"kube-proxy-config\",\n \"configMap\": {\n \"name\": \"kube-proxy-config\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"ssl-certs-hosts\",\n \"hostPath\": {\n \"path\": \"/usr/share/ca-certificates\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"systembussocket\",\n \"hostPath\": {\n \"path\": \"/var/run/dbus/system_bus_socket\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kernel-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4\",\n \"command\": [\n \"/hyperkube\",\n \"kube-proxy\",\n \"--config=/var/lib/kube-proxy-config/config.yaml\",\n \"--v=2\"\n ],\n \"ports\": [\n {\n \"name\": \"metrics\",\n \"hostPort\": 10249,\n \"containerPort\": 10249,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"20m\",\n \"memory\": \"64Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"kubeconfig\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"kube-proxy-config\",\n \"mountPath\": \"/var/lib/kube-proxy-config\"\n },\n {\n \"name\": \"ssl-certs-hosts\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ssl/certs\"\n },\n {\n \"name\": \"systembussocket\",\n \"mountPath\": \"/var/run/dbus/system_bus_socket\"\n },\n {\n \"name\": \"kernel-modules\",\n \"mountPath\": \"/lib/modules\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"automountServiceAccountToken\": false,\n \"nodeName\": \"ip-10-250-27-25.ec2.internal\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"ip-10-250-27-25.ec2.internal\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n 
\"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:04Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:06Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:06Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:03Z\"\n }\n ],\n \"hostIP\": \"10.250.27.25\",\n \"podIP\": \"10.250.27.25\",\n \"podIPs\": [\n {\n \"ip\": \"10.250.27.25\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:04Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:05Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102\",\n \"containerID\": \"docker://17cb2da566ab2b8434c1fb95e4ba8b919829a9aee07472efb1b2895c24f14e85\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"metrics-server-7c797fd994-4x7v9\",\n \"generateName\": \"metrics-server-7c797fd994-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/metrics-server-7c797fd994-4x7v9\",\n \"uid\": \"1fc0d5a3-c321-4fff-901a-648bee0dc456\",\n \"resourceVersion\": \"976\",\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"garden.sapcloud.io/role\": \"system-component\",\n \"k8s-app\": \"metrics-server\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"7c797fd994\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/secret-metrics-server\": \"18c8f7329c38af8674a3d83a23d53a96b4277faa82f6ca4a68e362dbe5481736\",\n \"cni.projectcalico.org/podIP\": \"100.64.0.6/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": 
\"ReplicaSet\",\n \"name\": \"metrics-server-7c797fd994\",\n \"uid\": \"7a2f1c80-d0a7-4a83-b4b4-bcded1da8bbc\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"metrics-server\",\n \"secret\": {\n \"secretName\": \"metrics-server\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"metrics-server-token-pp46x\",\n \"secret\": {\n \"secretName\": \"metrics-server-token-pp46x\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"metrics-server\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3\",\n \"command\": [\n \"/metrics-server\",\n \"--profiling=false\",\n \"--cert-dir=/home/certdir\",\n \"--secure-port=8443\",\n \"--kubelet-insecure-tls\",\n \"--tls-cert-file=/srv/metrics-server/tls/tls.crt\",\n \"--tls-private-key-file=/srv/metrics-server/tls/tls.key\",\n \"--v=2\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"80m\",\n \"memory\": \"400Mi\"\n },\n \"requests\": {\n \"cpu\": \"20m\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"metrics-server\",\n \"mountPath\": \"/srv/metrics-server/tls\"\n },\n {\n \"name\": \"metrics-server-token-pp46x\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"Always\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"metrics-server\",\n \"serviceAccount\": \"metrics-server\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"fsGroup\": 65534\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:35Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:35Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:08Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.6\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.6\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:08Z\",\n \"containerStatuses\": [\n {\n \"name\": \"metrics-server\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:34Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3\",\n \"imageID\": 
\"docker-pullable://eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1\",\n \"containerID\": \"docker://49b545145f5f81888923ceaaf0021b94310fc74651a20e4452cffb77a8c4cc91\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"node-exporter-gp57h\",\n \"generateName\": \"node-exporter-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/node-exporter-gp57h\",\n \"uid\": \"a90e5c0c-21f4-4434-a662-4c1ba16d8256\",\n \"resourceVersion\": \"877\",\n \"creationTimestamp\": \"2020-01-11T15:55:58Z\",\n \"labels\": {\n \"component\": \"node-exporter\",\n \"controller-revision-hash\": \"7b5bfd5694\",\n \"garden.sapcloud.io/role\": \"monitoring\",\n \"origin\": \"gardener\",\n \"pod-template-generation\": \"1\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"kubernetes.io/psp\": \"gardener.kube-system.node-exporter\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"node-exporter\",\n \"uid\": \"37e522a7-69f0-442e-9a91-566375272519\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"proc\",\n \"hostPath\": {\n \"path\": \"/proc\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"rootfs\",\n \"hostPath\": {\n \"path\": \"/\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"node-exporter\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1\",\n \"command\": [\n \"/bin/node_exporter\",\n \"--path.procfs=/host/proc\",\n \"--path.sysfs=/host/sys\",\n \"--collector.filesystem.ignored-fs-types=^(tmpfs|cgroup|nsfs|fuse\\\\.lxcfs|rpc_pipefs)$\",\n \"--collector.filesystem.ignored-mount-points=^/(rootfs/|host/)?(sys|proc|dev|host|etc|var/lib/docker)($|/)\",\n \"--web.listen-address=:16909\",\n \"--log.level=error\",\n \"--no-collector.netclass\"\n ],\n \"ports\": [\n {\n \"name\": \"scrape\",\n \"hostPort\": 16909,\n \"containerPort\": 16909,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"25m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"5m\",\n \"memory\": \"10Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"proc\",\n \"readOnly\": true,\n \"mountPath\": \"/host/proc\"\n },\n {\n \"name\": \"sys\",\n \"readOnly\": true,\n \"mountPath\": \"/host/sys\"\n },\n {\n \"name\": \"rootfs\",\n \"readOnly\": true,\n \"mountPath\": \"/rootfs\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/\",\n \"port\": 16909,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 5,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/\",\n \"port\": 16909,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 5,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"node-exporter\",\n \"serviceAccount\": 
\"node-exporter\",\n \"automountServiceAccountToken\": false,\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"runAsNonRoot\": true\n },\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"ip-10-250-7-77.ec2.internal\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:14Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:14Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"10.250.7.77\",\n \"podIPs\": [\n {\n \"ip\": \"10.250.7.77\"\n }\n ],\n \"startTime\": \"2020-01-11T15:55:58Z\",\n \"containerStatuses\": [\n {\n \"name\": \"node-exporter\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:06Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7\",\n \"containerID\": \"docker://befa01b9ab3466aca514043b97b8ff6c552002f440fcb4aa2cb5bad52bb91497\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"node-exporter-l6q84\",\n \"generateName\": \"node-exporter-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/node-exporter-l6q84\",\n \"uid\": \"ab132837-e9d6-4f43-9cbd-316dd1baf2ea\",\n \"resourceVersion\": \"939\",\n \"creationTimestamp\": \"2020-01-11T15:56:03Z\",\n \"labels\": {\n \"component\": \"node-exporter\",\n 
\"controller-revision-hash\": \"7b5bfd5694\",\n \"garden.sapcloud.io/role\": \"monitoring\",\n \"origin\": \"gardener\",\n \"pod-template-generation\": \"1\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"kubernetes.io/psp\": \"gardener.kube-system.node-exporter\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"node-exporter\",\n \"uid\": \"37e522a7-69f0-442e-9a91-566375272519\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"proc\",\n \"hostPath\": {\n \"path\": \"/proc\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"rootfs\",\n \"hostPath\": {\n \"path\": \"/\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"node-exporter\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1\",\n \"command\": [\n \"/bin/node_exporter\",\n \"--path.procfs=/host/proc\",\n \"--path.sysfs=/host/sys\",\n \"--collector.filesystem.ignored-fs-types=^(tmpfs|cgroup|nsfs|fuse\\\\.lxcfs|rpc_pipefs)$\",\n \"--collector.filesystem.ignored-mount-points=^/(rootfs/|host/)?(sys|proc|dev|host|etc|var/lib/docker)($|/)\",\n \"--web.listen-address=:16909\",\n \"--log.level=error\",\n \"--no-collector.netclass\"\n ],\n \"ports\": [\n {\n \"name\": \"scrape\",\n \"hostPort\": 16909,\n \"containerPort\": 16909,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"25m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"5m\",\n \"memory\": \"10Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"proc\",\n \"readOnly\": true,\n \"mountPath\": \"/host/proc\"\n },\n {\n \"name\": \"sys\",\n \"readOnly\": true,\n \"mountPath\": \"/host/sys\"\n },\n {\n \"name\": \"rootfs\",\n \"readOnly\": true,\n \"mountPath\": \"/rootfs\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/\",\n \"port\": 16909,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 5,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/\",\n \"port\": 16909,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 5,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"node-exporter\",\n \"serviceAccount\": \"node-exporter\",\n \"automountServiceAccountToken\": false,\n \"nodeName\": \"ip-10-250-27-25.ec2.internal\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"securityContext\": {\n \"runAsUser\": 65534,\n \"runAsNonRoot\": true\n },\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"ip-10-250-27-25.ec2.internal\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": 
\"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:04Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:25Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:25Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:03Z\"\n }\n ],\n \"hostIP\": \"10.250.27.25\",\n \"podIP\": \"10.250.27.25\",\n \"podIPs\": [\n {\n \"ip\": \"10.250.27.25\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:04Z\",\n \"containerStatuses\": [\n {\n \"name\": \"node-exporter\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:12Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7\",\n \"containerID\": \"docker://8505122dde90c5c1b08e759d3f8d1d52973933a96bfd4a5b4f8e3b369d098d88\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"node-problem-detector-9z5sq\",\n \"generateName\": \"node-problem-detector-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/node-problem-detector-9z5sq\",\n \"uid\": \"a66c58c4-5c01-4036-bf14-5f990e451206\",\n \"resourceVersion\": \"929\",\n \"creationTimestamp\": \"2020-01-11T15:56:03Z\",\n \"labels\": {\n \"app\": \"node-problem-detector\",\n \"app.kubernetes.io/instance\": \"shoot-core\",\n \"app.kubernetes.io/name\": \"node-problem-detector\",\n \"controller-revision-hash\": \"5d667cd5c4\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"pod-template-generation\": \"1\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/config\": \"4f82034ff1169816c591ccb023d905e3fd124303811ad9fbbc4ac46e116bec88\",\n \"cni.projectcalico.org/podIP\": \"100.64.1.2/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n 
\"name\": \"node-problem-detector\",\n \"uid\": \"6c84e849-b566-4928-b4e7-19d0b4e45433\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"log\",\n \"hostPath\": {\n \"path\": \"/var/log/\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"localtime\",\n \"hostPath\": {\n \"path\": \"/etc/localtime\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"custom-config\",\n \"configMap\": {\n \"name\": \"node-problem-detector-custom-config\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"node-problem-detector-token-24wwl\",\n \"secret\": {\n \"secretName\": \"node-problem-detector-token-24wwl\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"node-problem-detector\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1\",\n \"command\": [\n \"/bin/sh\",\n \"-c\",\n \"exec /node-problem-detector --logtostderr --config.system-log-monitor=/config/kernel-monitor.json,/config/docker-monitor.json,/config/systemd-monitor.json .. --config.custom-plugin-monitor=/config/kernel-monitor-counter.json,/config/systemd-monitor-counter.json .. --config.system-stats-monitor=/config/system-stats-monitor.json --prometheus-address=0.0.0.0 --prometheus-port=20257\"\n ],\n \"ports\": [\n {\n \"name\": \"exporter\",\n \"containerPort\": 20257,\n \"protocol\": \"TCP\"\n }\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"200m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"20m\",\n \"memory\": \"20Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"log\",\n \"mountPath\": \"/var/log\"\n },\n {\n \"name\": \"localtime\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/localtime\"\n },\n {\n \"name\": \"custom-config\",\n \"readOnly\": true,\n \"mountPath\": \"/custom-config\"\n },\n {\n \"name\": \"node-problem-detector-token-24wwl\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"node-problem-detector\",\n \"serviceAccount\": \"node-problem-detector\",\n \"nodeName\": \"ip-10-250-27-25.ec2.internal\",\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"ip-10-250-27-25.ec2.internal\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": 
\"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:04Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:23Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:23Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:03Z\"\n }\n ],\n \"hostIP\": \"10.250.27.25\",\n \"podIP\": \"100.64.1.2\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.1.2\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:04Z\",\n \"containerStatuses\": [\n {\n \"name\": \"node-problem-detector\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:23Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0\",\n \"containerID\": \"docker://96d6af6c708838cc7babd467accc7c46b8e08abbe5676064b4acd11da2e66147\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"node-problem-detector-jx2p4\",\n \"generateName\": \"node-problem-detector-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/node-problem-detector-jx2p4\",\n \"uid\": \"5673e5c6-cf7a-4123-9eeb-890dbe85064b\",\n \"resourceVersion\": \"948\",\n \"creationTimestamp\": \"2020-01-11T15:55:58Z\",\n \"labels\": {\n \"app\": \"node-problem-detector\",\n \"app.kubernetes.io/instance\": \"shoot-core\",\n \"app.kubernetes.io/name\": \"node-problem-detector\",\n \"controller-revision-hash\": \"5d667cd5c4\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"pod-template-generation\": \"1\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/config\": \"4f82034ff1169816c591ccb023d905e3fd124303811ad9fbbc4ac46e116bec88\",\n \"cni.projectcalico.org/podIP\": \"100.64.0.3/32\",\n \"kubernetes.io/psp\": \"gardener.privileged\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"node-problem-detector\",\n \"uid\": \"6c84e849-b566-4928-b4e7-19d0b4e45433\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"log\",\n \"hostPath\": {\n \"path\": \"/var/log/\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"localtime\",\n \"hostPath\": {\n \"path\": \"/etc/localtime\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"custom-config\",\n \"configMap\": {\n \"name\": \"node-problem-detector-custom-config\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"node-problem-detector-token-24wwl\",\n \"secret\": 
{\n \"secretName\": \"node-problem-detector-token-24wwl\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"node-problem-detector\",\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1\",\n \"command\": [\n \"/bin/sh\",\n \"-c\",\n \"exec /node-problem-detector --logtostderr --config.system-log-monitor=/config/kernel-monitor.json,/config/docker-monitor.json,/config/systemd-monitor.json .. --config.custom-plugin-monitor=/config/kernel-monitor-counter.json,/config/systemd-monitor-counter.json .. --config.system-stats-monitor=/config/system-stats-monitor.json --prometheus-address=0.0.0.0 --prometheus-port=20257\"\n ],\n \"ports\": [\n {\n \"name\": \"exporter\",\n \"containerPort\": 20257,\n \"protocol\": \"TCP\"\n }\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"200m\",\n \"memory\": \"100Mi\"\n },\n \"requests\": {\n \"cpu\": \"20m\",\n \"memory\": \"20Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"log\",\n \"mountPath\": \"/var/log\"\n },\n {\n \"name\": \"localtime\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/localtime\"\n },\n {\n \"name\": \"custom-config\",\n \"readOnly\": true,\n \"mountPath\": \"/custom-config\"\n },\n {\n \"name\": \"node-problem-detector-token-24wwl\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"node-problem-detector\",\n \"serviceAccount\": \"node-problem-detector\",\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"ip-10-250-7-77.ec2.internal\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\"\n },\n {\n \"type\": \"Ready\",\n 
\"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:29Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:29Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:55:58Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.3\"\n }\n ],\n \"startTime\": \"2020-01-11T15:55:58Z\",\n \"containerStatuses\": [\n {\n \"name\": \"node-problem-detector\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:27Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0\",\n \"containerID\": \"docker://382c866331c512713048020363357178aca5edfc173a5dc5ee7bc9a136483b47\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"vpn-shoot-5d76665b65-6rkww\",\n \"generateName\": \"vpn-shoot-5d76665b65-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/vpn-shoot-5d76665b65-6rkww\",\n \"uid\": \"f60c5a3a-1908-408d-b5ef-941aa4008de9\",\n \"resourceVersion\": \"1006\",\n \"creationTimestamp\": \"2020-01-11T15:55:18Z\",\n \"labels\": {\n \"app\": \"vpn-shoot\",\n \"garden.sapcloud.io/role\": \"system-component\",\n \"origin\": \"gardener\",\n \"pod-template-hash\": \"5d76665b65\",\n \"shoot.gardener.cloud/no-cleanup\": \"true\"\n },\n \"annotations\": {\n \"checksum/secret-vpn-shoot\": \"4c8c1eea7f8805ec1de63090c4a6e5e8059ad270c4d6ec3de163dc88cbdbc62d\",\n \"checksum/secret-vpn-shoot-dh\": \"c4717efbc25c918c6a4f36a8118a448c861497005ab370cecf56c51216b726d3\",\n \"checksum/secret-vpn-shoot-tlsauth\": \"2845871f4fd567c8a83b7f3c08acdb14e85657e335eb1c1ca952880b8bb6ac28\",\n \"cni.projectcalico.org/podIP\": \"100.64.0.11/32\",\n \"kubernetes.io/psp\": \"gardener.kube-system.vpn-shoot\",\n \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"vpn-shoot-5d76665b65\",\n \"uid\": \"07a37206-d4cc-45f8-abcf-07a537152cbc\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"vpn-shoot\",\n \"secret\": {\n \"secretName\": \"vpn-shoot\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"vpn-shoot-tlsauth\",\n \"secret\": {\n \"secretName\": \"vpn-shoot-tlsauth\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"vpn-shoot-dh\",\n \"secret\": {\n \"secretName\": \"vpn-shoot-dh\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"vpn-shoot\",\n \"image\": \"eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0\",\n \"env\": [\n {\n \"name\": \"SERVICE_NETWORK\",\n \"value\": \"100.104.0.0/13\"\n },\n {\n \"name\": \"POD_NETWORK\",\n \"value\": \"100.64.0.0/11\"\n },\n {\n \"name\": \"NODE_NETWORK\",\n \"value\": \"10.250.0.0/16\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"1\",\n \"memory\": \"1000Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"vpn-shoot\",\n \"mountPath\": \"/srv/secrets/vpn-shoot\"\n },\n 
{\n \"name\": \"vpn-shoot-tlsauth\",\n \"mountPath\": \"/srv/secrets/tlsauth\"\n },\n {\n \"name\": \"vpn-shoot-dh\",\n \"mountPath\": \"/srv/secrets/dh\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"vpn-shoot\",\n \"serviceAccount\": \"vpn-shoot\",\n \"automountServiceAccountToken\": false,\n \"nodeName\": \"ip-10-250-7-77.ec2.internal\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:13Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:41Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:41Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-11T15:56:13Z\"\n }\n ],\n \"hostIP\": \"10.250.7.77\",\n \"podIP\": \"100.64.0.11\",\n \"podIPs\": [\n {\n \"ip\": \"100.64.0.11\"\n }\n ],\n \"startTime\": \"2020-01-11T15:56:13Z\",\n \"containerStatuses\": [\n {\n \"name\": \"vpn-shoot\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-11T15:56:40Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0\",\n \"imageID\": \"docker-pullable://eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f\",\n \"containerID\": \"docker://dada62da99b4acf0a30f91aa34681b94ccf6c8ee8a49239492519599be4d9a13\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n }\n ]\n}\n==== START logs for container kubernetes-dashboard of pod kube-system/addons-kubernetes-dashboard-78954cc66b-69k8m ====\n2020/01/11 15:56:31 Starting overwatch\n2020/01/11 15:56:31 Using in-cluster config to connect to apiserver\n2020/01/11 15:56:31 Using service account token for csrf signing\n2020/01/11 15:56:31 Successful initial request to the apiserver, version: v1.16.4\n2020/01/11 15:56:31 Generating JWE encryption key\n2020/01/11 15:56:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting\n2020/01/11 15:56:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 15:56:34 Storing encryption key in a secret\n2020/01/11 15:56:34 Creating in-cluster Heapster client\n2020/01/11 15:56:34 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 15:56:34 Auto-generating certificates\n2020/01/11 15:56:35 Successfully created certificates\n2020/01/11 15:56:35 Serving securely on HTTPS port: 8443\n[log condensed: "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." repeated approximately every 30 seconds from 2020/01/11 15:57:04 through 2020/01/11 17:10:35]\n2020/01/11 17:11:05 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:11:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: unexpected object: &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string][]byte{},Type:,StringData:map[string]string{},}\n2020/01/11 17:11:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:11:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:11:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:11:45 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.\n2020/01/11 17:11:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:11:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:11:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:11:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:11:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:11:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:11:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:11:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:11:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:11:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:11:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:11:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:11:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:11:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:11:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:11:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:11:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:11:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:11:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:11:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:11:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with 
error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n[log condensed: the cycle "Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system." / "Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system" / "Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout" repeated every 2 seconds from 2020/01/11 17:11:59 through 2020/01/11 17:15:13, interleaved with "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." approximately every 30 seconds]\n2020/01/11 17:15:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:15:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:15:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:15:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:15:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:15:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:15:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:16:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:16:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:16:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:16:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:16:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:16:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:16:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:17:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:17:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:17:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:17:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:17:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:17:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:17:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:18:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:18:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:18:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:18:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:18:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:18:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:18:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:19:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:19:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:19:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:19:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:19:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:19:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:19:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
[... the same three synchronizer log lines ("Restarting synchronizer", "Starting secret synchronizer", "Synchronizer ... exited with error: ... watch ended with timeout") repeat every 2 seconds from 17:19:47 through 17:26:01; "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." recurs every 30 seconds from 17:20:15 through 17:25:45 ...]
\n2020/01/11 17:26:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:26:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:26:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:26:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:26:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:26:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:26:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:26:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:27:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:27:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:27:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:27:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:27:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:27:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:27:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:28:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:28:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:28:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:28:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:28:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:28:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:28:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:29:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:29:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:29:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:29:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:29:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:29:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:29:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:30:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:30:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:30:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:30:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:30:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:30:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:30:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:30:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:30:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:30:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:30:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:30:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:30:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:30:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:30:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:30:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:30:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:30:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:30:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:30:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:30:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:30:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:30:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:30:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:30:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n[... the same three-message synchronizer cycle (Restarting synchronizer / Starting secret synchronizer / Synchronizer exited with error: watch ended with timeout) repeats roughly every 2 seconds from 17:30:17 through 17:36:32, and "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." recurs every 30 seconds from 17:30:45 through 17:36:15 ...]\n2020/01/11 17:36:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11
17:36:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:36:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:36:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:36:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:36:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:37:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:37:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:37:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:37:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:37:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:37:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:37:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:38:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:38:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:38:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:38:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:38:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:38:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:38:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:39:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:39:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:39:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:39:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:39:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:39:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:39:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:40:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:40:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:40:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:40:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:40:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:40:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:40:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
[... the same restart/start/timeout synchronizer cycle repeats every 2 seconds from 17:40:48 through 17:47:02, and "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." recurs every 30 seconds from 17:41:15 through 17:46:45; duplicate log entries elided ...]
2020/01/11 17:47:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:47:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:47:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:47:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:47:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:47:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:47:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:47:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:48:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:48:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:48:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:48:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:48:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:48:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:48:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:49:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:49:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:49:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:49:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:49:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:49:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:49:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:50:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:50:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:50:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:50:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:50:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:50:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:50:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:51:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:51:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:51:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:51:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:51:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:51:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:51:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:52:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:52:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:52:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:52:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:52:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:52:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:52:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:53:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:53:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:53:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:53:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:53:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:53:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:53:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:54:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:54:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:54:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:54:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:54:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:54:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:54:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:55:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:55:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:55:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:55:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:55:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:55:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:55:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:56:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 17:56:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 17:56:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
17:56:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 17:56:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 17:56:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n[... the restart / start / "watch ended with timeout" cycle for the kubernetes-dashboard-key-holder secret synchronizer repeats every ~2 seconds, and "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." recurs every 30 seconds, from 2020/01/11 17:56:36 through 18:02:44; repeated entries elided ...]\n2020/01/11 18:02:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:02:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:02:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:02:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:02:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:02:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:02:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:02:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:02:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:02:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:02:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:02:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:02:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:02:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:02:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:02:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:02:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:02:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:02:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:02:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:02:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:02:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:03:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:03:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:03:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:03:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:03:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:03:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:03:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:04:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:04:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:04:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:04:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:04:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:04:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:04:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:05:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:05:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:05:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:05:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:05:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:05:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:05:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:06:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:06:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:06:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:06:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:06:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:06:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:06:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:07:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:07:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:07:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:07:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:07:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:07:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:07:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:07:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:07:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:07:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:07:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:07:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n[... the same restart/start/timeout cycle repeats every 2 seconds from 2020/01/11 18:07:08 through 18:13:14, and every 30 seconds (at :15 and :45 past each minute) the log adds "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." ...]\n2020/01/11 18:13:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:13:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:13:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:13:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:13:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:13:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:13:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:14:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:14:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:14:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:14:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:14:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:14:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:15:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:15:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:15:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:15:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:15:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:15:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:15:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:16:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:16:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:16:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:16:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:16:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:16:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:16:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:17:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:17:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:17:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:17:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:17:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:17:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
2020/01/11 18:17:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2020/01/11 18:17:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2020/01/11 18:17:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
[... the same restart / start / "watch ended with timeout" cycle repeats every 2 seconds from 18:17:38 through 18:23:42, with "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." logged every 30 seconds ...]
2020/01/11 18:23:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2020/01/11 18:23:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2020/01/11 18:23:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
2020/01/11 18:23:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:23:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:23:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:23:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:23:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:23:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:23:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:23:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:23:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:23:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:23:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:23:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:23:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:23:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:23:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:23:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:23:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:23:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:23:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:23:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:23:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:23:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:24:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:24:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:24:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:24:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:24:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:24:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:24:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:25:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:25:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:25:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:25:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:25:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:25:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:25:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:26:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:26:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:26:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:26:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:26:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:26:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:26:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:27:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:27:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:27:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:27:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:27:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:27:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:27:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:28:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:28:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:28:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:28:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:28:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:28:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:28:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:28:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:28:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
2020/01/11 18:28:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2020/01/11 18:28:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2020/01/11 18:28:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
[... identical restart/start/watch-timeout cycle repeats every 2 seconds from 18:28:08 through 18:34:14, interleaved every 30 seconds (18:28:15, 18:28:45, 18:29:15, ..., 18:33:45) with "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." ...]
2020/01/11 18:34:15 Metric client health check failed: unknown (get services heapster).
Retrying in 30 seconds.\n2020/01/11 18:34:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:34:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:34:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:34:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:34:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:34:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:35:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:35:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:35:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:35:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:35:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:35:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:35:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:36:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:36:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:36:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:36:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:36:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:36:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:36:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:37:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:37:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:37:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:37:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:37:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:37:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:37:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:38:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:38:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:38:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:38:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:38:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:38:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:38:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:39:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:39:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:39:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:39:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:39:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:39:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:39:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:39:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:44:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:44:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:44:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:44:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:44:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:45:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:45:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:45:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:45:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:45:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:45:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:45:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:46:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:46:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:46:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:46:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:46:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:46:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:46:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:47:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:47:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:47:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:47:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:47:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:47:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:47:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:48:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:48:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:48:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:48:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:48:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:48:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:48:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:49:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:49:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:49:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:49:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:49:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:49:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:49:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:49:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:49:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
2020/01/11 18:49:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2020/01/11 18:49:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2020/01/11 18:49:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
[... the same restart / start / "watch ended with timeout" cycle repeats every 2 seconds from 2020/01/11 18:49:08 through 2020/01/11 18:55:14 ...]
2020/01/11 18:49:15 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
[... the same health-check failure repeats every 30 seconds from 2020/01/11 18:49:45 through 2020/01/11 18:54:45 ...]
2020/01/11 18:55:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:55:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:55:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:55:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:55:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:55:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:55:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:56:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:56:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:56:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:56:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:56:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:56:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:56:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:57:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:57:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:57:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:57:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:57:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:57:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:57:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:58:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:58:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:58:34 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:36 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:36 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:38 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:38 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:40 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:40 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:42 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:42 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:44 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:44 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:45 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:58:46 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:46 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:48 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:48 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:50 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:50 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:52 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:52 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:54 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:54 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:56 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:56 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:58:58 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:58:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:58:58 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:00 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:00 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:02 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:02 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:04 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:59:04 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:06 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:06 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:08 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:08 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:10 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:10 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:12 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:12 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:14 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:14 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:15 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 18:59:16 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:16 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:18 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:18 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:20 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:22 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:22 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:24 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:24 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:26 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:26 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:28 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:28 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:30 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:30 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:32 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 18:59:32 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 18:59:34 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 18:59:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
18:59:34 onwards the kubernetes-dashboard container log repeats the same cycle roughly every two seconds through 19:05:41 (repeated entries condensed): "Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system." / "Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system" / "Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout", interleaved roughly every 30 seconds with "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds."\n2020/01/11 
19:05:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:05:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:05:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:05:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:05:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:05:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:05:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:05:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:05:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:05:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:05:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:05:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:05:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:05:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:05:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:05:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:05:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:05:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:05:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:05:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:05:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:05:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:05:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:05:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:05:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:05:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:05:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:05:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:05:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:06:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:06:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:06:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:06:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:06:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:06:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:06:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:07:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:07:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:07:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:07:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:07:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:07:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:07:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:08:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:08:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:08:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:08:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:08:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:08:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:08:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:09:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:09:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:09:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:09:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:09:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:09:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n[2020/01/11 19:09:53 through 19:16:11: the kubernetes-dashboard container log repeats the same cycle every ~2 seconds: "Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system." / "Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system" / "Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout"; every 30 seconds (19:10:21, 19:10:51, 19:11:21, ..., 19:15:51) it additionally logs "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds."]\n2020/01/11 
19:16:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:16:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:16:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:16:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:16:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:16:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:16:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:17:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:17:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:17:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:17:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:17:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:17:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:17:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:18:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:18:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:18:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:18:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:18:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:18:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:18:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:19:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:19:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:19:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:19:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:19:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:19:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:19:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:20:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:20:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:20:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:20:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:20:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:20:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:20:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:21:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:21:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:21:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:21:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:21:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:21:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:21:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:22:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:22:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:22:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:22:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:22:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:22:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:22:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:23:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:23:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:23:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:23:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:23:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:23:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:23:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:24:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:24:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:24:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:24:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:24:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:24:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:24:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:25:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:25:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:25:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:25:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:25:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:25:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n[... this restart/start/timeout cycle repeats every 2 seconds, and the metric client health check failure "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." recurs every 30 seconds, through 2020/01/11 19:31:51 ...]\n2020/01/11 19:31:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:31:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:31:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:31:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:31:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:31:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:31:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:31:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:31:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:31:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:31:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:31:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:31:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:31:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:32:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:32:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:32:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:32:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:32:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:32:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:32:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:33:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:33:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:33:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:33:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:33:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:33:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:33:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:34:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:34:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:34:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:34:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:34:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:34:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:34:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:35:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:35:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:35:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:35:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:35:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:35:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:35:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:36:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:36:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:36:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:36:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:36:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:36:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:36:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:36:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:36:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:36:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:36:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:36:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:36:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:36:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:36:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:36:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:36:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:36:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:36:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:36:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:36:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n
[... condensed: the same three-entry cycle ("Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system." / "Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system" / "Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout") repeats every 2 seconds from 19:36:15 through 19:42:21. In addition, the entry "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." recurs every 30 seconds, at 19:36:21, 19:36:51, 19:37:21, 19:37:51, 19:38:21, 19:38:51, 19:39:21, 19:39:51, 19:40:21, 19:40:51, 19:41:21, and 19:41:51 ...]
\n2020/01/11 19:42:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:42:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:42:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:42:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:42:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:42:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:42:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:43:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:43:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:43:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:43:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:43:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:43:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:43:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:44:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:44:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:44:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:44:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:44:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:44:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:44:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:45:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:45:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:45:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:45:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:45:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:45:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:45:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:46:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:46:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:46:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:46:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:46:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:46:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
2020/01/11 19:46:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2020/01/11 19:46:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2020/01/11 19:46:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
[... the restart / start / "watch ended with timeout" cycle above repeats every 2 seconds from 19:46:45 through 19:52:51; every 30 seconds (at :21 and :51) the log additionally records "Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds." ...]
2020/01/11 19:52:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:52:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:52:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:52:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:52:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:52:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:52:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:52:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:52:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:52:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:52:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:52:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:52:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:53:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:53:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:53:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:53:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:53:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:53:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:53:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:54:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:54:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:54:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:54:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:54:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:54:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:54:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:55:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:55:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:55:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:55:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:55:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:55:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:55:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:56:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:56:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:56:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 19:56:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:56:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:56:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:56:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:57:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:57:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:57:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:57:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:57:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:57:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:57:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:57:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:57:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:57:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:57:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:57:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:57:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:57:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:57:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:57:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:57:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
19:57:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:57:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 19:57:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 19:57:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 19:57:21 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.\n[... identical restart/start/watch-timeout entries repeat every 2 seconds, and the heapster metric client health-check failure repeats every 30 seconds, from 19:57:23 through 20:03:21 ...]\n2020/01/11 20:03:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:03:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:03:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:03:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:03:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:03:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:03:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:04:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:04:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:04:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:04:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:04:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:04:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:04:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:05:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:05:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:05:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:05:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:05:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:05:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:05:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:06:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:06:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:06:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:06:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:06:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:06:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:06:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:07:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:07:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:07:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:07:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:07:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:07:41 through 20:13:51 (kubernetes-dashboard pod log; repetitive entries condensed): the secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system exits roughly every 2 seconds with "kubernetes-dashboard-key-holder-kube-system watch ended with timeout" and is immediately restarted ("Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system." followed by "Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system"). In the same window the metric client health check fails every 30 seconds, from 20:07:51 through 20:13:51, each time with "unknown (get services heapster)" and "Retrying in 30 seconds." Final entry in this span, 2020/01/11 20:13:51: Metric client health check failed: unknown (get services heapster).
Retrying in 30 seconds.\n2020/01/11 20:13:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:13:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:13:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:13:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:13:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:13:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:13:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:13:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:13:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:13:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:13:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:13:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:14:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:14:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:14:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:14:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:14:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:14:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:14:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:15:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:15:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:15:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:15:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:15:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:15:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:15:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:16:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:16:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:16:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:16:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:16:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:16:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:16:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:17:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:17:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:17:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:17:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:17:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:17:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:17:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:18:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:18:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:18:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:18:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:18:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:18:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:18:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:18:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:18:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:18:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:18:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:18:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:18:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:18:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:18:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:18:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:18:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
2020/01/11 20:18:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
2020/01/11 20:18:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2020/01/11 20:18:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2020/01/11 20:18:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
[... the same restart / start / exit-with-timeout cycle repeats every 2 seconds from 20:18:15 through 20:24:21 ...]
2020/01/11 20:18:21 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
[... the same health-check failure repeats every 30 seconds at 20:18:51, 20:19:21, 20:19:51, 20:20:21, 20:20:51, 20:21:21, 20:21:51, 20:22:21, 20:22:51, 20:23:21, 20:23:51, and 20:24:21 ...]
Retrying in 30 seconds.\n2020/01/11 20:24:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:24:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:24:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:24:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:24:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:24:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:25:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:25:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:25:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:25:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:25:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:25:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:25:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:26:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:26:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:26:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:26:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:26:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:26:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:26:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:27:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:27:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:27:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:27:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:27:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:27:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:27:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:28:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:21 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:21 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:21 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:28:23 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:23 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:25 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:25 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:27 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:27 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:29 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:29 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:31 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:31 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:33 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:33 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:35 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:35 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:37 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:37 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:39 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:39 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:41 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:28:41 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:43 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:43 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:45 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:45 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:47 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:47 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:49 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:49 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:51 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:51 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:51 Metric client health check failed: unknown (get services heapster). 
Retrying in 30 seconds.\n2020/01/11 20:28:53 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:53 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:55 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:28:59 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:28:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:28:59 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:01 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:29:01 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:03 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:29:03 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:05 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:29:05 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:07 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:29:07 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:09 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:29:09 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:11 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 
20:29:11 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:13 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:29:13 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:15 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:29:15 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:17 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:29:17 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n2020/01/11 20:29:19 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.\n2020/01/11 20:29:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system\n2020/01/11 20:29:19 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout\n==== END logs for container kubernetes-dashboard of pod kube-system/addons-kubernetes-dashboard-78954cc66b-69k8m ====\n==== START logs for container nginx-ingress-controller of pod kube-system/addons-nginx-ingress-controller-7c75bb76db-cd9r9 ====\n-------------------------------------------------------------------------------\nNGINX Ingress controller\n Release: 0.22.0\n Build: git-f7c42b78a\n Repository: https://github.com/kubernetes/ingress-nginx\n-------------------------------------------------------------------------------\n\nI0111 15:56:57.351450 7 flags.go:183] Watching for Ingress class: nginx\nW0111 15:56:57.351823 7 flags.go:216] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)\nnginx version: nginx/1.15.8\nW0111 15:56:57.355466 7 client_config.go:548] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. 
This might not work.\nI0111 15:56:57.355611 7 main.go:200] Creating API client for https://100.104.0.1:443\nI0111 15:56:57.369753 7 main.go:244] Running in Kubernetes cluster version v1.16 (v1.16.4) - git (clean) commit 224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba - platform linux/amd64\nI0111 15:56:57.373313 7 main.go:102] Validated kube-system/addons-nginx-ingress-nginx-ingress-k8s-backend as the default backend.\nI0111 15:56:57.562830 7 nginx.go:267] Starting NGINX Ingress controller\nI0111 15:56:57.571716 7 event.go:221] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"kube-system\", Name:\"addons-nginx-ingress-controller\", UID:\"95abbbf4-f64d-4f3d-b475-ab952183ae80\", APIVersion:\"v1\", ResourceVersion:\"199\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap kube-system/addons-nginx-ingress-controller\nI0111 15:56:58.763320 7 nginx.go:700] Starting TLS proxy for SSL Passthrough\nI0111 15:56:58.763368 7 leaderelection.go:205] attempting to acquire leader lease kube-system/ingress-controller-leader-nginx...\nI0111 15:56:58.763395 7 nginx.go:288] Starting NGINX process\nI0111 15:56:58.764043 7 controller.go:172] Configuration changes detected, backend reload required.\nI0111 15:56:58.771697 7 leaderelection.go:214] successfully acquired lease kube-system/ingress-controller-leader-nginx\nI0111 15:56:58.771872 7 status.go:148] new leader elected: addons-nginx-ingress-controller-7c75bb76db-cd9r9\nI0111 15:56:58.883292 7 controller.go:190] Backend successfully reloaded.\nI0111 15:56:58.883446 7 controller.go:202] Initial sync, sleeping for 1 second.\n[11/Jan/2020:15:56:59 +0000]TCP200000.000\nW0111 16:02:27.622554 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:153: watch of *v1.Service ended with: too old resource version: 478 (1668)\nW0111 16:02:27.628972 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 1063 (1668)\nW0111 16:02:27.637350 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:176: watch of *v1beta1.Ingress ended with: too old resource version: 1 (1669)\nW0111 16:56:55.655164 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 1670 (4164)\nI0111 17:10:49.126891 7 leaderelection.go:249] failed to renew lease kube-system/ingress-controller-leader-nginx: failed to tryAcquireOrRenew context deadline exceeded\nI0111 17:10:49.127015 7 leaderelection.go:205] attempting to acquire leader lease kube-system/ingress-controller-leader-nginx...\nE0111 17:11:23.409749 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2449, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:23.409999 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2449, ErrCode=NO_ERROR, debug=\"\"\nW0111 17:11:23.410174 7 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err Get https://100.104.0.1:443/api/v1/namespaces/kube-system/services/addons-nginx-ingress-controller: http2: server sent GOAWAY 
and closed the connection; LastStreamID=2449, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:23.410288 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2449, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:23.410486 7 leaderelection.go:270] error retrieving resource lock kube-system/ingress-controller-leader-nginx: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx: http2: server sent GOAWAY and closed the connection; LastStreamID=2449, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:23.410509 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2449, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:23.410740 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2449, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:23.410922 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2449, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:23.411087 7 leaderelection.go:270] error retrieving resource lock kube-system/ingress-controller-leader-nginx: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx: http2: server sent GOAWAY and closed the connection; LastStreamID=2449, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:33.413588 7 reflector.go:251] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:153: Failed to watch *v1.Service: Get https://100.104.0.1:443/api/v1/services?resourceVersion=14276&timeout=9m53s&timeoutSeconds=593&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.413587 7 reflector.go:251] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: Failed to watch *v1.Pod: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dnginx-ingress%2Ccomponent%3Dcontroller%2Cgarden.sapcloud.io%2Frole%3Doptional-addon%2Corigin%3Dgardener%2Cpod-template-hash%3D7c75bb76db%2Crelease%3Daddons%2Cshoot.gardener.cloud%2Fno-cleanup%3Dtrue&resourceVersion=11934&timeout=6m2s&timeoutSeconds=362&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.413686 7 reflector.go:251] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:154: Failed to watch *v1.Secret: Get https://100.104.0.1:443/api/v1/secrets?resourceVersion=14292&timeout=7m15s&timeoutSeconds=435&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.413691 7 reflector.go:251] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:176: Failed to watch *v1beta1.Ingress: Get https://100.104.0.1:443/apis/extensions/v1beta1/ingresses?resourceVersion=1669&timeout=5m0s&timeoutSeconds=300&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.413740 7 reflector.go:251] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:155: Failed to watch *v1.ConfigMap: Get https://100.104.0.1:443/api/v1/configmaps?resourceVersion=14359&timeout=9m23s&timeoutSeconds=563&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.413792 7 reflector.go:251] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:152: Failed to watch *v1.Endpoints: Get https://100.104.0.1:443/api/v1/endpoints?resourceVersion=14365&timeout=5m58s&timeoutSeconds=358&watch=true: net/http: TLS handshake timeout\nW0111 17:11:33.419259 7 queue.go:130] requeuing &ObjectMeta{Name:sync 
status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err Get https://100.104.0.1:443/api/v1/namespaces/kube-system/services/addons-nginx-ingress-controller: net/http: TLS handshake timeout\nW0111 17:11:43.427051 7 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err Get https://100.104.0.1:443/api/v1/namespaces/kube-system/services/addons-nginx-ingress-controller: net/http: TLS handshake timeout\nE0111 17:11:44.420101 7 reflector.go:134] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:176: Failed to list *v1beta1.Ingress: Get https://100.104.0.1:443/apis/extensions/v1beta1/ingresses?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nE0111 17:11:44.420743 7 reflector.go:134] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:155: Failed to list *v1.ConfigMap: Get https://100.104.0.1:443/api/v1/configmaps?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nE0111 17:11:44.421970 7 reflector.go:134] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:152: Failed to list *v1.Endpoints: Get https://100.104.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:45.444096 7 leaderelection.go:214] successfully acquired lease kube-system/ingress-controller-leader-nginx\nW0111 17:11:53.434821 7 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err Get https://100.104.0.1:443/api/v1/namespaces/kube-system/services/addons-nginx-ingress-controller: net/http: TLS handshake timeout\n24.161.90.163 - [24.161.90.163] - - [11/Jan/2020:17:39:36 +0000] \"GET / HTTP/1.1\" 400 157 \"-\" \"-\" 18 0.029 [] - - - - 7d95794fdb0c288f66dec68d545a112e\nE0111 17:48:58.025481 7 leaderelection.go:270] error retrieving resource lock kube-system/ingress-controller-leader-nginx: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx: unexpected EOF\nW0111 17:48:58.040183 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 14365 (14371)\nW0111 17:48:58.040379 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:176: watch of *v1beta1.Ingress ended with: too old resource version: 14365 (14371)\nW0111 18:31:45.063680 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 20629 (23201)\nW0111 18:50:37.074783 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 29875 (33759)\nI0111 19:01:17.889569 7 leaderelection.go:249] failed to renew lease 
kube-system/ingress-controller-leader-nginx: failed to tryAcquireOrRenew context deadline exceeded\nI0111 19:01:17.889915 7 leaderelection.go:205] attempting to acquire leader lease kube-system/ingress-controller-leader-nginx...\nE0111 19:01:47.236931 7 leaderelection.go:304] Failed to update lock: Operation cannot be fulfilled on configmaps \"ingress-controller-leader-nginx\": the object has been modified; please apply your changes to the latest version and try again\nW0111 19:01:48.698053 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:176: watch of *v1beta1.Ingress ended with: too old resource version: 14371 (36630)\nW0111 19:01:58.335488 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:153: watch of *v1.Service ended with: too old resource version: 36201 (36628)\nW0111 19:01:58.335566 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:154: watch of *v1.Secret ended with: too old resource version: 36587 (36628)\nI0111 19:02:00.549728 7 leaderelection.go:214] successfully acquired lease kube-system/ingress-controller-leader-nginx\nW0111 19:02:01.494840 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 34948 (36628)\nE0111 19:06:26.671521 7 leaderelection.go:270] error retrieving resource lock kube-system/ingress-controller-leader-nginx: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/configmaps/ingress-controller-leader-nginx: unexpected EOF\nW0111 19:36:46.701120 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 36639 (44188)\nW0111 19:41:51.709555 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 46879 (49374)\nW0111 19:48:12.718229 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 51959 (53026)\nW0111 19:57:41.726750 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 56046 (60041)\nW0111 20:03:58.736055 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 63548 (63746)\nW0111 20:10:01.744784 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 67094 (68498)\nW0111 20:19:15.759942 7 reflector.go:270] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: watch of *v1.Pod ended with: too old resource version: 71316 (75261)\n==== END logs for container nginx-ingress-controller of pod kube-system/addons-nginx-ingress-controller-7c75bb76db-cd9r9 ====\n==== START logs for container nginx-ingress-nginx-ingress-k8s-backend of pod kube-system/addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d ====\n\n> ingress-default-backend@0.1.0 start /usr/src/ingress-default-backend\n> node ./server.js\n\nKubernetes backend started on port 8080...\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m7.098 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.806 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.446 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.321 ms 
- 2\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m359.992 ms - 1050\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m21.185 ms - 1050\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m26.287 ms - 1050\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m17.056 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.315 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.397 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.367 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.313 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.318 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m18.572 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.331 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.313 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.321 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.316 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.697 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.318 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.477 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.352 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.321 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.956 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.392 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.523 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.327 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.332 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.398 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.313 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.559 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.341 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.753 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.536 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.314 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.398 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.372 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m2.327 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.193 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.789 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.788 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.824 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.283 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.283 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.285 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.322 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.285 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.843 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.281 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.287 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.403 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.285 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.729 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.345 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.281 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.285 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.391 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.282 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.286 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.285 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.285 ms - 2\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m31.744 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.287 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.286 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.001 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.280 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.287 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.287 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.282 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.426 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.377 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.417 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.282 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.951 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.282 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.285 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.283 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.513 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.435 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.413 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.527 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.438 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.285 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.447 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.417 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.455 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.287 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m21.288 ms - 1050\x1b[0m\n\x1b[0mGET /robots.txt \x1b[33m404 \x1b[0m15.083 ms - 1050\x1b[0m\n\x1b[0mPOST /Adminac2c1434/Login.php \x1b[33m404 \x1b[0m13.578 ms - 163\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m15.734 ms - 1050\x1b[0m\n\x1b[0mGET /l.php \x1b[33m404 \x1b[0m16.120 ms - 1050\x1b[0m\n\x1b[0mGET /phpinfo.php \x1b[33m404 \x1b[0m12.649 ms - 1050\x1b[0m\n\x1b[0mGET /test.php \x1b[33m404 \x1b[0m21.780 ms - 1050\x1b[0m\n\x1b[0mPOST /index.php \x1b[33m404 \x1b[0m0.841 ms - 149\x1b[0m\n\x1b[0mPOST /bbs.php 
\x1b[33m404 \x1b[0m0.413 ms - 147\x1b[0m\n\x1b[0mPOST /forum.php \x1b[33m404 \x1b[0m0.829 ms - 149\x1b[0m\n\x1b[0mPOST /forums.php \x1b[33m404 \x1b[0m0.370 ms - 150\x1b[0m\n\x1b[0mPOST /bbs/index.php \x1b[33m404 \x1b[0m0.361 ms - 153\x1b[0m\n\x1b[0mPOST /forum/index.php \x1b[33m404 \x1b[0m0.347 ms - 155\x1b[0m\n\x1b[0mPOST /forums/index.php \x1b[33m404 \x1b[0m0.339 ms - 156\x1b[0m\n\x1b[0mGET /webdav/ \x1b[33m404 \x1b[0m9.913 ms - 1050\x1b[0m\n\x1b[0mGET /help.php \x1b[33m404 \x1b[0m12.930 ms - 1050\x1b[0m\n\x1b[0mGET /java.php \x1b[33m404 \x1b[0m10.437 ms - 1050\x1b[0m\n\x1b[0mGET /_query.php \x1b[33m404 \x1b[0m13.683 ms - 1050\x1b[0m\n\x1b[0mGET /test.php \x1b[33m404 \x1b[0m13.529 ms - 1050\x1b[0m\n\x1b[0mGET /db_cts.php \x1b[33m404 \x1b[0m10.837 ms - 1050\x1b[0m\n\x1b[0mGET /db_pma.php \x1b[33m404 \x1b[0m10.047 ms - 1050\x1b[0m\n\x1b[0mGET /logon.php \x1b[33m404 \x1b[0m17.974 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.335 ms - 2\x1b[0m\n\x1b[0mGET /license.php \x1b[33m404 \x1b[0m9.438 ms - 1050\x1b[0m\n\x1b[0mGET /log.php \x1b[33m404 \x1b[0m12.796 ms - 1050\x1b[0m\n\x1b[0mGET /hell.php \x1b[33m404 \x1b[0m17.035 ms - 1050\x1b[0m\n\x1b[0mGET /pmd_online.php \x1b[33m404 \x1b[0m8.862 ms - 1050\x1b[0m\n\x1b[0mGET /x.php \x1b[33m404 \x1b[0m10.087 ms - 1050\x1b[0m\n\x1b[0mGET /shell.php \x1b[33m404 \x1b[0m9.055 ms - 1050\x1b[0m\n\x1b[0mGET /htdocs.php \x1b[33m404 \x1b[0m20.227 ms - 1050\x1b[0m\n\x1b[0mGET /b.php \x1b[33m404 \x1b[0m9.425 ms - 1050\x1b[0m\n\x1b[0mGET /sane.php \x1b[33m404 \x1b[0m10.374 ms - 1050\x1b[0m\n\x1b[0mGET /desktop.ini.php \x1b[33m404 \x1b[0m20.779 ms - 1050\x1b[0m\n\x1b[0mGET /z.php \x1b[33m404 \x1b[0m8.777 ms - 1050\x1b[0m\n\x1b[0mGET /lala-dpr.php \x1b[33m404 \x1b[0m9.134 ms - 1050\x1b[0m\n\x1b[0mGET /wpc.php \x1b[33m404 \x1b[0m8.002 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.663 ms - 2\x1b[0m\n\x1b[0mGET /wpo.php \x1b[33m404 \x1b[0m10.379 ms - 1050\x1b[0m\n\x1b[0mGET /t6nv.php \x1b[33m404 \x1b[0m9.279 ms - 1050\x1b[0m\n\x1b[0mGET /muhstik.php \x1b[33m404 \x1b[0m8.716 ms - 1050\x1b[0m\n\x1b[0mGET /text.php \x1b[33m404 \x1b[0m9.228 ms - 1050\x1b[0m\n\x1b[0mGET /muhstik.php \x1b[33m404 \x1b[0m10.086 ms - 1050\x1b[0m\n\x1b[0mGET /muhstik2.php \x1b[33m404 \x1b[0m9.106 ms - 1050\x1b[0m\n\x1b[0mGET /muhstiks.php \x1b[33m404 \x1b[0m9.343 ms - 1050\x1b[0m\n\x1b[0mGET /muhstik-dpr.php \x1b[33m404 \x1b[0m17.379 ms - 1050\x1b[0m\n\x1b[0mGET /lol.php \x1b[33m404 \x1b[0m8.121 ms - 1050\x1b[0m\n\x1b[0mGET /uploader.php \x1b[33m404 \x1b[0m10.407 ms - 1050\x1b[0m\n\x1b[0mGET /cmv.php \x1b[33m404 \x1b[0m11.024 ms - 1050\x1b[0m\n\x1b[0mGET /cmdd.php \x1b[33m404 \x1b[0m8.123 ms - 1050\x1b[0m\n\x1b[0mGET /knal.php \x1b[33m404 \x1b[0m7.568 ms - 1050\x1b[0m\n\x1b[0mGET /cmd.php \x1b[33m404 \x1b[0m7.889 ms - 1050\x1b[0m\n\x1b[0mGET /shell.php \x1b[33m404 \x1b[0m7.922 ms - 1050\x1b[0m\n\x1b[0mGET /appserv.php \x1b[33m404 \x1b[0m10.880 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.277 ms - 2\x1b[0m\n\x1b[0mGET /phpmyadmin/scripts/setup.php \x1b[33m404 \x1b[0m15.009 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin/scripts/setup.php \x1b[33m404 \x1b[0m8.568 ms - 1050\x1b[0m\n\x1b[0mGET /scripts/db___.init.php \x1b[33m404 \x1b[0m7.933 ms - 1050\x1b[0m\n\x1b[0mGET /phpmyadmin/scripts/db___.init.php \x1b[33m404 \x1b[0m8.680 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin/scripts/db___.init.php \x1b[33m404 \x1b[0m9.924 ms - 1050\x1b[0m\n\x1b[0mGET /pma/scripts/setup.php \x1b[33m404 \x1b[0m8.081 ms - 1050\x1b[0m\n\x1b[0mGET /myadmin/scripts/setup.php \x1b[33m404 
\x1b[0m7.935 ms - 1050\x1b[0m\n\x1b[0mGET /MyAdmin/scripts/setup.php \x1b[33m404 \x1b[0m7.535 ms - 1050\x1b[0m\n\x1b[0mGET /pma/scripts/db___.init.php \x1b[33m404 \x1b[0m7.986 ms - 1050\x1b[0m\n\x1b[0mGET /PMA/scripts/db___.init.php \x1b[33m404 \x1b[0m7.517 ms - 1050\x1b[0m\n\x1b[0mGET /myadmin/scripts/db___.init.php \x1b[33m404 \x1b[0m8.421 ms - 1050\x1b[0m\n\x1b[0mGET /MyAdmin/scripts/db___.init.php \x1b[33m404 \x1b[0m8.456 ms - 1050\x1b[0m\n\x1b[0mGET /cacti/plugins/weathermap/editor.php \x1b[33m404 \x1b[0m10.430 ms - 1050\x1b[0m\n\x1b[0mGET /weathermap/editor.php \x1b[33m404 \x1b[0m7.645 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.281 ms - 2\x1b[0m\n\x1b[0mGET /index.php?s=%2f%69%6e%64%65%78%2f%5c%74%68%69%6e%6b%5c%61%70%70%2f%69%6e%76%6f%6b%65%66%75%6e%63%74%69%6f%6e&function=%63%61%6c%6c%5f%75%73%65%72%5f%66%75%6e%63%5f%61%72%72%61%79&vars[0]=%6d%645&vars[1][]=%48%65%6c%6c%6f%54%68%69%6e%6b%50%48%50 \x1b[33m404 \x1b[0m12.935 ms - 1050\x1b[0m\n\x1b[0mGET /elrekt.php?s=%2f%69%6e%64%65%78%2f%5c%74%68%69%6e%6b%5c%61%70%70%2f%69%6e%76%6f%6b%65%66%75%6e%63%74%69%6f%6e&function=%63%61%6c%6c%5f%75%73%65%72%5f%66%75%6e%63%5f%61%72%72%61%79&vars[0]=%6d%645&vars[1][]=%48%65%6c%6c%6f%54%68%69%6e%6b%50%48%50 \x1b[33m404 \x1b[0m7.396 ms - 1050\x1b[0m\n\x1b[0mGET /App/?content=die(md5(HelloThinkPHP)) \x1b[33m404 \x1b[0m8.723 ms - 1050\x1b[0m\n\x1b[0mGET /index.php/module/action/param1/${@die(md5(HelloThinkPHP))} \x1b[33m404 \x1b[0m13.417 ms - 1050\x1b[0m\n\x1b[0mGET /?a=fetch&content=die(@md5(HelloThinkCMF)) \x1b[33m404 \x1b[0m7.284 ms - 1050\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m10.494 ms - 1050\x1b[0m\n\x1b[0mGET /joomla/ \x1b[33m404 \x1b[0m8.064 ms - 1050\x1b[0m\n\x1b[0mGET /Joomla/ \x1b[33m404 \x1b[0m10.850 ms - 1050\x1b[0m\n\x1b[0mGET /?a=echo%20-n%20HelloNginx%7Cmd5sum \x1b[33m404 \x1b[0m8.772 ms - 1050\x1b[0m\n\x1b[0mGET /d7.php \x1b[33m404 \x1b[0m8.596 ms - 1050\x1b[0m\n\x1b[0mGET /1x.php \x1b[33m404 \x1b[0m7.152 ms - 1050\x1b[0m\n\x1b[0mGET /home.php \x1b[33m404 \x1b[0m7.836 ms - 1050\x1b[0m\n\x1b[0mGET /undx.php \x1b[33m404 \x1b[0m9.656 ms - 1050\x1b[0m\n\x1b[0mGET /spider.php \x1b[33m404 \x1b[0m7.391 ms - 1050\x1b[0m\n\x1b[0mGET /payload.php \x1b[33m404 \x1b[0m7.701 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /izom.php \x1b[33m404 \x1b[0m8.740 ms - 1050\x1b[0m\n\x1b[0mGET /composer.php \x1b[33m404 \x1b[0m8.559 ms - 1050\x1b[0m\n\x1b[0mGET /hue2.php \x1b[33m404 \x1b[0m7.282 ms - 1050\x1b[0m\n\x1b[0mGET /Drupal.php \x1b[33m404 \x1b[0m7.821 ms - 1050\x1b[0m\n\x1b[0mGET /lang.php?f=1 \x1b[33m404 \x1b[0m10.444 ms - 1050\x1b[0m\n\x1b[0mGET /izom.php \x1b[33m404 \x1b[0m7.722 ms - 1050\x1b[0m\n\x1b[0mGET /new_license.php \x1b[33m404 \x1b[0m13.635 ms - 1050\x1b[0m\n\x1b[0mGET /images/!.php \x1b[33m404 \x1b[0m7.154 ms - 1050\x1b[0m\n\x1b[0mGET /images/vuln.php \x1b[33m404 \x1b[0m11.725 ms - 1050\x1b[0m\n\x1b[0mGET /hd.php \x1b[33m404 \x1b[0m7.016 ms - 1050\x1b[0m\n\x1b[0mGET /images/up.php \x1b[33m404 \x1b[0m7.447 ms - 1050\x1b[0m\n\x1b[0mGET /images/attari.php \x1b[33m404 \x1b[0m8.338 ms - 1050\x1b[0m\n\x1b[0mGET /images/stories/cmd.php \x1b[33m404 \x1b[0m8.203 ms - 1050\x1b[0m\n\x1b[0mGET /images/stories/filemga.php?ssp=RfVbHu \x1b[33m404 \x1b[0m7.223 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /up.php \x1b[33m404 \x1b[0m7.774 ms - 1050\x1b[0m\n\x1b[0mGET /laravel.php \x1b[33m404 \x1b[0m7.153 ms - 1050\x1b[0m\n\x1b[0mGET /huoshan.php \x1b[33m404 \x1b[0m7.706 ms - 1050\x1b[0m\n\x1b[0mGET 
/yu.php \x1b[33m404 \x1b[0m9.543 ms - 1050\x1b[0m\n\x1b[0mGET /ftmabc.php \x1b[33m404 \x1b[0m7.052 ms - 1050\x1b[0m\n\x1b[0mGET /doudou.php \x1b[33m404 \x1b[0m7.508 ms - 1050\x1b[0m\n\x1b[0mGET /mjx.php \x1b[33m404 \x1b[0m7.335 ms - 1050\x1b[0m\n\x1b[0mGET /xiaoxia.php \x1b[33m404 \x1b[0m7.485 ms - 1050\x1b[0m\n\x1b[0mGET /yuyang.php \x1b[33m404 \x1b[0m7.290 ms - 1050\x1b[0m\n\x1b[0mGET /zz.php \x1b[33m404 \x1b[0m7.534 ms - 1050\x1b[0m\n\x1b[0mGET /ak.php \x1b[33m404 \x1b[0m7.130 ms - 1050\x1b[0m\n\x1b[0mGET /baidoubi.php \x1b[33m404 \x1b[0m12.635 ms - 1050\x1b[0m\n\x1b[0mGET /hhhhhh.php \x1b[33m404 \x1b[0m7.141 ms - 1050\x1b[0m\n\x1b[0mGET /meijianxue.php \x1b[33m404 \x1b[0m7.503 ms - 1050\x1b[0m\n\x1b[0mGET /no1.php \x1b[33m404 \x1b[0m7.619 ms - 1050\x1b[0m\n\x1b[0mGET /python.php \x1b[33m404 \x1b[0m8.339 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /indea.php \x1b[33m404 \x1b[0m7.694 ms - 1050\x1b[0m\n\x1b[0mGET /taisui.php \x1b[33m404 \x1b[0m10.351 ms - 1050\x1b[0m\n\x1b[0mGET /xiaxia.php \x1b[33m404 \x1b[0m7.642 ms - 1050\x1b[0m\n\x1b[0mGET /kk.php \x1b[33m404 \x1b[0m7.093 ms - 1050\x1b[0m\n\x1b[0mGET /xsser.php \x1b[33m404 \x1b[0m11.379 ms - 1050\x1b[0m\n\x1b[0mGET /99.php \x1b[33m404 \x1b[0m7.263 ms - 1050\x1b[0m\n\x1b[0mGET /dp.php \x1b[33m404 \x1b[0m10.208 ms - 1050\x1b[0m\n\x1b[0mGET /hs.php \x1b[33m404 \x1b[0m9.444 ms - 1050\x1b[0m\n\x1b[0mGET /1ts.php \x1b[33m404 \x1b[0m12.184 ms - 1050\x1b[0m\n\x1b[0mGET /haiyan.php \x1b[33m404 \x1b[0m8.778 ms - 1050\x1b[0m\n\x1b[0mGET /phpdm.php \x1b[33m404 \x1b[0m12.766 ms - 1050\x1b[0m\n\x1b[0mGET /5678.php \x1b[33m404 \x1b[0m7.561 ms - 1050\x1b[0m\n\x1b[0mGET /root11.php \x1b[33m404 \x1b[0m6.905 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /xiu.php \x1b[33m404 \x1b[0m7.530 ms - 1050\x1b[0m\n\x1b[0mPOST /wuwu11.php \x1b[33m404 \x1b[0m0.332 ms - 150\x1b[0m\n\x1b[0mPOST /xw.php \x1b[33m404 \x1b[0m0.341 ms - 146\x1b[0m\n\x1b[0mPOST /xw1.php \x1b[33m404 \x1b[0m0.370 ms - 147\x1b[0m\n\x1b[0mPOST /9678.php \x1b[33m404 \x1b[0m0.341 ms - 148\x1b[0m\n\x1b[0mPOST /wc.php \x1b[33m404 \x1b[0m0.318 ms - 146\x1b[0m\n\x1b[0mPOST /xx.php \x1b[33m404 \x1b[0m0.326 ms - 146\x1b[0m\n\x1b[0mPOST /xx.php \x1b[33m404 \x1b[0m0.368 ms - 146\x1b[0m\n\x1b[0mPOST /s.php \x1b[33m404 \x1b[0m0.365 ms - 145\x1b[0m\n\x1b[0mPOST /w.php \x1b[33m404 \x1b[0m0.305 ms - 145\x1b[0m\n\x1b[0mPOST /sheep.php \x1b[33m404 \x1b[0m0.359 ms - 149\x1b[0m\n\x1b[0mPOST /qaq.php \x1b[33m404 \x1b[0m0.309 ms - 147\x1b[0m\n\x1b[0mPOST /my.php \x1b[33m404 \x1b[0m0.326 ms - 146\x1b[0m\n\x1b[0mPOST /qq.php \x1b[33m404 \x1b[0m0.335 ms - 146\x1b[0m\n\x1b[0mPOST /aaa.php \x1b[33m404 \x1b[0m0.340 ms - 147\x1b[0m\n\x1b[0mPOST /hhh.php \x1b[33m404 \x1b[0m0.443 ms - 147\x1b[0m\n\x1b[0mPOST /jjj.php \x1b[33m404 \x1b[0m0.308 ms - 147\x1b[0m\n\x1b[0mPOST /vvv.php \x1b[33m404 \x1b[0m0.425 ms - 147\x1b[0m\n\x1b[0mPOST /www.php \x1b[33m404 \x1b[0m0.314 ms - 147\x1b[0m\n\x1b[0mPOST /ffr.php \x1b[33m404 \x1b[0m0.330 ms - 147\x1b[0m\n\x1b[0mPOST /411.php \x1b[33m404 \x1b[0m0.308 ms - 147\x1b[0m\n\x1b[0mPOST /415.php \x1b[33m404 \x1b[0m0.314 ms - 147\x1b[0m\n\x1b[0mPOST /421.php \x1b[33m404 \x1b[0m0.308 ms - 147\x1b[0m\n\x1b[0mPOST /444.php \x1b[33m404 \x1b[0m0.319 ms - 147\x1b[0m\n\x1b[0mPOST /a411.php \x1b[33m404 \x1b[0m0.311 ms - 148\x1b[0m\n\x1b[0mPOST /whoami.php \x1b[33m404 \x1b[0m0.310 ms - 150\x1b[0m\n\x1b[0mPOST /whoami.php.php \x1b[33m404 \x1b[0m0.310 ms - 154\x1b[0m\n\x1b[0mPOST /9.php \x1b[33m404 \x1b[0m0.307 ms - 
145\x1b[0m\n\x1b[0mPOST /98k.php \x1b[33m404 \x1b[0m0.314 ms - 147\x1b[0m\n\x1b[0mPOST /981.php \x1b[33m404 \x1b[0m0.386 ms - 147\x1b[0m\n\x1b[0mPOST /887.php \x1b[33m404 \x1b[0m0.316 ms - 147\x1b[0m\n\x1b[0mPOST /888.php \x1b[33m404 \x1b[0m0.337 ms - 147\x1b[0m\n\x1b[0mPOST /aa.php \x1b[33m404 \x1b[0m0.346 ms - 146\x1b[0m\n\x1b[0mPOST /bb.php \x1b[33m404 \x1b[0m1.915 ms - 146\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.271 ms - 2\x1b[0m\n\x1b[0mPOST /pp.php \x1b[33m404 \x1b[0m0.294 ms - 146\x1b[0m\n\x1b[0mPOST /tt.php \x1b[33m404 \x1b[0m0.335 ms - 146\x1b[0m\n\x1b[0mPOST /bbq.php \x1b[33m404 \x1b[0m0.305 ms - 147\x1b[0m\n\x1b[0mPOST /jj1.php \x1b[33m404 \x1b[0m0.304 ms - 147\x1b[0m\n\x1b[0mPOST /jbb.php \x1b[33m404 \x1b[0m0.307 ms - 147\x1b[0m\n\x1b[0mPOST /7o.php \x1b[33m404 \x1b[0m0.303 ms - 146\x1b[0m\n\x1b[0mPOST /qwq.php \x1b[33m404 \x1b[0m0.299 ms - 147\x1b[0m\n\x1b[0mPOST /nb.php \x1b[33m404 \x1b[0m0.303 ms - 146\x1b[0m\n\x1b[0mPOST /kpl.php \x1b[33m404 \x1b[0m0.950 ms - 147\x1b[0m\n\x1b[0mPOST /hgx.php \x1b[33m404 \x1b[0m0.441 ms - 147\x1b[0m\n\x1b[0mPOST /ppl.php \x1b[33m404 \x1b[0m0.323 ms - 147\x1b[0m\n\x1b[0mPOST /tty.php \x1b[33m404 \x1b[0m0.304 ms - 147\x1b[0m\n\x1b[0mPOST /ooi.php \x1b[33m404 \x1b[0m0.304 ms - 147\x1b[0m\n\x1b[0mPOST /aap.php \x1b[33m404 \x1b[0m0.351 ms - 147\x1b[0m\n\x1b[0mPOST /app.php \x1b[33m404 \x1b[0m0.299 ms - 147\x1b[0m\n\x1b[0mPOST /bbr.php \x1b[33m404 \x1b[0m0.308 ms - 147\x1b[0m\n\x1b[0mPOST /ioi.php \x1b[33m404 \x1b[0m0.322 ms - 147\x1b[0m\n\x1b[0mPOST /uuu.php \x1b[33m404 \x1b[0m0.302 ms - 147\x1b[0m\n\x1b[0mPOST /yyy.php \x1b[33m404 \x1b[0m0.340 ms - 147\x1b[0m\n\x1b[0mPOST /ack.php \x1b[33m404 \x1b[0m0.333 ms - 147\x1b[0m\n\x1b[0mPOST /shh.php \x1b[33m404 \x1b[0m0.360 ms - 147\x1b[0m\n\x1b[0mPOST /ddd.php \x1b[33m404 \x1b[0m0.324 ms - 147\x1b[0m\n\x1b[0mPOST /nnn.php \x1b[33m404 \x1b[0m0.327 ms - 147\x1b[0m\n\x1b[0mPOST /rrr.php \x1b[33m404 \x1b[0m0.321 ms - 147\x1b[0m\n\x1b[0mPOST /ttt.php \x1b[33m404 \x1b[0m0.300 ms - 147\x1b[0m\n\x1b[0mPOST /bbqq.php \x1b[33m404 \x1b[0m0.352 ms - 148\x1b[0m\n\x1b[0mPOST /tyrant.php \x1b[33m404 \x1b[0m0.320 ms - 150\x1b[0m\n\x1b[0mPOST /qiqi.php \x1b[33m404 \x1b[0m0.298 ms - 148\x1b[0m\n\x1b[0mPOST /qiqi1.php \x1b[33m404 \x1b[0m0.299 ms - 149\x1b[0m\n\x1b[0mPOST /zhk.php \x1b[33m404 \x1b[0m0.293 ms - 147\x1b[0m\n\x1b[0mPOST /bbv.php \x1b[33m404 \x1b[0m0.297 ms - 147\x1b[0m\n\x1b[0mPOST /605.php \x1b[33m404 \x1b[0m0.327 ms - 147\x1b[0m\n\x1b[0mPOST /admin1.php \x1b[33m404 \x1b[0m0.299 ms - 150\x1b[0m\n\x1b[0mPOST /xi.php \x1b[33m404 \x1b[0m0.301 ms - 146\x1b[0m\n\x1b[0mPOST /999.php \x1b[33m404 \x1b[0m0.322 ms - 147\x1b[0m\n\x1b[0mPOST /jsc.php \x1b[33m404 \x1b[0m0.327 ms - 147\x1b[0m\n\x1b[0mPOST /jsc.php.php \x1b[33m404 \x1b[0m0.371 ms - 151\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.274 ms - 2\x1b[0m\n\x1b[0mPOST /jsc.php \x1b[33m404 \x1b[0m0.354 ms - 147\x1b[0m\n\x1b[0mPOST /11a.php \x1b[33m404 \x1b[0m0.299 ms - 147\x1b[0m\n\x1b[0mPOST /kkl.php \x1b[33m404 \x1b[0m0.294 ms - 147\x1b[0m\n\x1b[0mPOST /ks1.php \x1b[33m404 \x1b[0m0.297 ms - 147\x1b[0m\n\x1b[0mPOST /ooo.php \x1b[33m404 \x1b[0m0.354 ms - 147\x1b[0m\n\x1b[0mPOST /wsx.php \x1b[33m404 \x1b[0m0.295 ms - 147\x1b[0m\n\x1b[0mPOST /lz.php \x1b[33m404 \x1b[0m0.565 ms - 146\x1b[0m\n\x1b[0mPOST /zmp.php \x1b[33m404 \x1b[0m0.324 ms - 147\x1b[0m\n\x1b[0mPOST /803.php \x1b[33m404 \x1b[0m0.330 ms - 147\x1b[0m\n\x1b[0mPOST /zzz.php \x1b[33m404 \x1b[0m0.514 ms - 147\x1b[0m\n\x1b[0mPOST /ze.php \x1b[33m404 \x1b[0m0.329 ms - 146\x1b[0m\n\x1b[0mPOST 
/nnb.php \x1b[33m404 \x1b[0m0.294 ms - 147\x1b[0m\n\x1b[0mPOST /lkio.php \x1b[33m404 \x1b[0m0.678 ms - 148\x1b[0m\n\x1b[0mPOST /mm.php \x1b[33m404 \x1b[0m0.331 ms - 146\x1b[0m\n\x1b[0mPOST /mmp.php \x1b[33m404 \x1b[0m0.295 ms - 147\x1b[0m\n\x1b[0mPOST /hades.php \x1b[33m404 \x1b[0m0.294 ms - 149\x1b[0m\n\x1b[0mPOST /muma.php \x1b[33m404 \x1b[0m0.298 ms - 148\x1b[0m\n\x1b[0mPOST /shell.php \x1b[33m404 \x1b[0m0.301 ms - 149\x1b[0m\n\x1b[0mPOST /zza.php \x1b[33m404 \x1b[0m0.344 ms - 147\x1b[0m\n\x1b[0mPOST /ag.php \x1b[33m404 \x1b[0m0.293 ms - 146\x1b[0m\n\x1b[0mPOST /2ndex.php \x1b[33m404 \x1b[0m0.297 ms - 149\x1b[0m\n\x1b[0mPOST /my.php \x1b[33m404 \x1b[0m0.331 ms - 146\x1b[0m\n\x1b[0mPOST /aa.php \x1b[33m404 \x1b[0m0.295 ms - 146\x1b[0m\n\x1b[0mPOST /qq.php \x1b[33m404 \x1b[0m0.899 ms - 146\x1b[0m\n\x1b[0mPOST /config.php \x1b[33m404 \x1b[0m0.298 ms - 150\x1b[0m\n\x1b[0mPOST /1.php \x1b[33m404 \x1b[0m0.289 ms - 145\x1b[0m\n\x1b[0mPOST /1.php \x1b[33m404 \x1b[0m0.320 ms - 145\x1b[0m\n\x1b[0mPOST /miao.php \x1b[33m404 \x1b[0m0.298 ms - 148\x1b[0m\n\x1b[0mPOST /j.php \x1b[33m404 \x1b[0m0.331 ms - 145\x1b[0m\n\x1b[0mPOST /cc.php \x1b[33m404 \x1b[0m0.308 ms - 146\x1b[0m\n\x1b[0mPOST /xiaodai.php \x1b[33m404 \x1b[0m0.345 ms - 151\x1b[0m\n\x1b[0mPOST /abak.php \x1b[33m404 \x1b[0m0.323 ms - 148\x1b[0m\n\x1b[0mPOST /pass.php \x1b[33m404 \x1b[0m0.352 ms - 148\x1b[0m\n\x1b[0mPOST /olelist.php \x1b[33m404 \x1b[0m0.366 ms - 151\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.279 ms - 2\x1b[0m\n\x1b[0mPOST /a.php \x1b[33m404 \x1b[0m0.276 ms - 145\x1b[0m\n\x1b[0mPOST /t00ls.php \x1b[33m404 \x1b[0m0.323 ms - 149\x1b[0m\n\x1b[0mPOST /about_ver.php \x1b[33m404 \x1b[0m0.325 ms - 153\x1b[0m\n\x1b[0mPOST /edmin.php \x1b[33m404 \x1b[0m0.297 ms - 149\x1b[0m\n\x1b[0mPOST /sconfig.php \x1b[33m404 \x1b[0m0.389 ms - 151\x1b[0m\n\x1b[0mPOST /indax.php \x1b[33m404 \x1b[0m0.332 ms - 149\x1b[0m\n\x1b[0mPOST /logo.php \x1b[33m404 \x1b[0m0.385 ms - 148\x1b[0m\n\x1b[0mPOST /o.php \x1b[33m404 \x1b[0m0.791 ms - 145\x1b[0m\n\x1b[0mPOST /shell.php \x1b[33m404 \x1b[0m0.309 ms - 149\x1b[0m\n\x1b[0mPOST /tools.php \x1b[33m404 \x1b[0m0.354 ms - 149\x1b[0m\n\x1b[0mPOST /asjc.php \x1b[33m404 \x1b[0m0.454 ms - 148\x1b[0m\n\x1b[0mPOST /test.php \x1b[33m404 \x1b[0m0.343 ms - 148\x1b[0m\n\x1b[0mPOST /fuck.php \x1b[33m404 \x1b[0m0.304 ms - 148\x1b[0m\n\x1b[0mPOST /freebook.php \x1b[33m404 \x1b[0m0.305 ms - 152\x1b[0m\n\x1b[0mPOST /goodbook.php \x1b[33m404 \x1b[0m0.347 ms - 152\x1b[0m\n\x1b[0mPOST /tools.php \x1b[33m404 \x1b[0m0.352 ms - 149\x1b[0m\n\x1b[0mPOST /indexl.php \x1b[33m404 \x1b[0m0.320 ms - 150\x1b[0m\n\x1b[0mPOST /gotemp.php \x1b[33m404 \x1b[0m0.375 ms - 150\x1b[0m\n\x1b[0mPOST /sql.php \x1b[33m404 \x1b[0m0.334 ms - 147\x1b[0m\n\x1b[0mPOST /conf.php \x1b[33m404 \x1b[0m0.343 ms - 148\x1b[0m\n\x1b[0mPOST /pagefile.php \x1b[33m404 \x1b[0m0.782 ms - 152\x1b[0m\n\x1b[0mPOST /settings.php \x1b[33m404 \x1b[0m0.447 ms - 152\x1b[0m\n\x1b[0mPOST /system.php \x1b[33m404 \x1b[0m0.357 ms - 150\x1b[0m\n\x1b[0mPOST /test123.php \x1b[33m404 \x1b[0m0.404 ms - 151\x1b[0m\n\x1b[0mPOST /think.php \x1b[33m404 \x1b[0m0.315 ms - 149\x1b[0m\n\x1b[0mPOST /db.init.php \x1b[33m404 \x1b[0m0.334 ms - 151\x1b[0m\n\x1b[0mPOST /db_session.init.php \x1b[33m404 \x1b[0m0.312 ms - 159\x1b[0m\n\x1b[0mPOST /db__.init.php \x1b[33m404 \x1b[0m0.521 ms - 153\x1b[0m\n\x1b[0mPOST /wp-admins.php \x1b[33m404 \x1b[0m0.373 ms - 153\x1b[0m\n\x1b[0mPOST /m.php?pbid=open \x1b[33m404 \x1b[0m0.315 ms - 145\x1b[0m\n\x1b[0mPOST /error.php \x1b[33m404 \x1b[0m0.334 ms - 
149\x1b[0m\n\x1b[0mPOST /099.php \x1b[33m404 \x1b[0m0.337 ms - 147\x1b[0m\n\x1b[0mPOST /_404.php \x1b[33m404 \x1b[0m0.328 ms - 148\x1b[0m\n\x1b[0mPOST /Alarg53.php \x1b[33m404 \x1b[0m0.320 ms - 151\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.279 ms - 2\x1b[0m\n\x1b[0mPOST /lapan.php \x1b[33m404 \x1b[0m0.319 ms - 149\x1b[0m\n\x1b[0mPOST /p34ky1337.php \x1b[33m404 \x1b[0m0.325 ms - 153\x1b[0m\n\x1b[0mPOST /pk1914.php \x1b[33m404 \x1b[0m0.322 ms - 150\x1b[0m\n\x1b[0mPOST /sllolx.php \x1b[33m404 \x1b[0m0.321 ms - 150\x1b[0m\n\x1b[0mPOST /Skri.php \x1b[33m404 \x1b[0m0.431 ms - 148\x1b[0m\n\x1b[0mPOST /db_dataml.php \x1b[33m404 \x1b[0m0.325 ms - 153\x1b[0m\n\x1b[0mPOST /db_desql.php \x1b[33m404 \x1b[0m0.322 ms - 152\x1b[0m\n\x1b[0mPOST /mx.php \x1b[33m404 \x1b[0m0.393 ms - 146\x1b[0m\n\x1b[0mPOST /wshell.php \x1b[33m404 \x1b[0m0.339 ms - 150\x1b[0m\n\x1b[0mPOST /xshell.php \x1b[33m404 \x1b[0m0.530 ms - 150\x1b[0m\n\x1b[0mPOST /qq.php \x1b[33m404 \x1b[0m0.335 ms - 146\x1b[0m\n\x1b[0mPOST /conflg.php \x1b[33m404 \x1b[0m0.357 ms - 150\x1b[0m\n\x1b[0mPOST /conflg.php \x1b[33m404 \x1b[0m0.367 ms - 150\x1b[0m\n\x1b[0mPOST /lindex.php \x1b[33m404 \x1b[0m0.351 ms - 150\x1b[0m\n\x1b[0mPOST /phpstudy.php \x1b[33m404 \x1b[0m0.347 ms - 152\x1b[0m\n\x1b[0mPOST /phpStudy.php \x1b[33m404 \x1b[0m0.362 ms - 152\x1b[0m\n\x1b[0mPOST /weixiao.php \x1b[33m404 \x1b[0m0.454 ms - 151\x1b[0m\n\x1b[0mPOST /feixiang.php \x1b[33m404 \x1b[0m0.333 ms - 152\x1b[0m\n\x1b[0mPOST /ak47.php \x1b[33m404 \x1b[0m0.333 ms - 148\x1b[0m\n\x1b[0mPOST /ak48.php \x1b[33m404 \x1b[0m0.318 ms - 148\x1b[0m\n\x1b[0mPOST /xiao.php \x1b[33m404 \x1b[0m0.313 ms - 148\x1b[0m\n\x1b[0mPOST /yao.php \x1b[33m404 \x1b[0m0.387 ms - 147\x1b[0m\n\x1b[0mPOST /defect.php \x1b[33m404 \x1b[0m0.327 ms - 150\x1b[0m\n\x1b[0mPOST /webslee.php \x1b[33m404 \x1b[0m0.342 ms - 151\x1b[0m\n\x1b[0mPOST /q.php \x1b[33m404 \x1b[0m0.334 ms - 145\x1b[0m\n\x1b[0mPOST /pe.php \x1b[33m404 \x1b[0m0.329 ms - 146\x1b[0m\n\x1b[0mPOST /hm.php \x1b[33m404 \x1b[0m0.399 ms - 146\x1b[0m\n\x1b[0mPOST /sz.php \x1b[33m404 \x1b[0m0.306 ms - 146\x1b[0m\n\x1b[0mPOST /cainiao.php \x1b[33m404 \x1b[0m0.352 ms - 151\x1b[0m\n\x1b[0mPOST /zuoshou.php \x1b[33m404 \x1b[0m0.322 ms - 151\x1b[0m\n\x1b[0mPOST /zuo.php \x1b[33m404 \x1b[0m0.325 ms - 147\x1b[0m\n\x1b[0mPOST /aotu.php \x1b[33m404 \x1b[0m0.341 ms - 148\x1b[0m\n\x1b[0mPOST /aotu7.php \x1b[33m404 \x1b[0m0.347 ms - 149\x1b[0m\n\x1b[0mPOST /cmd.php \x1b[33m404 \x1b[0m0.428 ms - 147\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.287 ms - 2\x1b[0m\n\x1b[0mPOST /cmd.php \x1b[33m404 \x1b[0m0.300 ms - 147\x1b[0m\n\x1b[0mPOST /bak.php \x1b[33m404 \x1b[0m0.335 ms - 147\x1b[0m\n\x1b[0mPOST /system.php \x1b[33m404 \x1b[0m0.309 ms - 150\x1b[0m\n\x1b[0mPOST /l6.php \x1b[33m404 \x1b[0m0.334 ms - 146\x1b[0m\n\x1b[0mPOST /l7.php \x1b[33m404 \x1b[0m0.299 ms - 146\x1b[0m\n\x1b[0mPOST /l8.php \x1b[33m404 \x1b[0m0.441 ms - 146\x1b[0m\n\x1b[0mPOST /q.php \x1b[33m404 \x1b[0m0.326 ms - 145\x1b[0m\n\x1b[0mPOST /56.php \x1b[33m404 \x1b[0m0.337 ms - 146\x1b[0m\n\x1b[0mPOST /mz.php \x1b[33m404 \x1b[0m0.369 ms - 146\x1b[0m\n\x1b[0mPOST /yumo.php \x1b[33m404 \x1b[0m0.317 ms - 148\x1b[0m\n\x1b[0mPOST /min.php \x1b[33m404 \x1b[0m0.350 ms - 147\x1b[0m\n\x1b[0mPOST /wan.php \x1b[33m404 \x1b[0m0.313 ms - 147\x1b[0m\n\x1b[0mPOST /wanan.php \x1b[33m404 \x1b[0m0.306 ms - 149\x1b[0m\n\x1b[0mPOST /ssaa.php \x1b[33m404 \x1b[0m0.339 ms - 148\x1b[0m\n\x1b[0mPOST /ssaa.php \x1b[33m404 \x1b[0m0.331 ms - 148\x1b[0m\n\x1b[0mPOST /qq.php \x1b[33m404 \x1b[0m0.397 ms - 
146\x1b[0m\n\x1b[0mPOST /aw.php \x1b[33m404 \x1b[0m0.522 ms - 146\x1b[0m\n\x1b[0mPOST /12.php \x1b[33m404 \x1b[0m0.348 ms - 146\x1b[0m\n\x1b[0mPOST /hh.php \x1b[33m404 \x1b[0m0.337 ms - 146\x1b[0m\n\x1b[0mPOST /ak.php \x1b[33m404 \x1b[0m0.321 ms - 146\x1b[0m\n\x1b[0mPOST /ip.php \x1b[33m404 \x1b[0m0.325 ms - 146\x1b[0m\n\x1b[0mPOST /infoo.php \x1b[33m404 \x1b[0m0.326 ms - 149\x1b[0m\n\x1b[0mPOST /qwe.php \x1b[33m404 \x1b[0m0.306 ms - 147\x1b[0m\n\x1b[0mPOST /1213.php \x1b[33m404 \x1b[0m0.307 ms - 148\x1b[0m\n\x1b[0mPOST /post.php \x1b[33m404 \x1b[0m0.329 ms - 148\x1b[0m\n\x1b[0mPOST /aaaa.php \x1b[33m404 \x1b[0m0.351 ms - 148\x1b[0m\n\x1b[0mPOST /h1.php \x1b[33m404 \x1b[0m0.351 ms - 146\x1b[0m\n\x1b[0mPOST /test.php \x1b[33m404 \x1b[0m0.389 ms - 148\x1b[0m\n\x1b[0mPOST /3.php \x1b[33m404 \x1b[0m0.325 ms - 145\x1b[0m\n\x1b[0mPOST /4.php \x1b[33m404 \x1b[0m0.356 ms - 145\x1b[0m\n\x1b[0mPOST /phpinfi.php \x1b[33m404 \x1b[0m0.325 ms - 151\x1b[0m\n\x1b[0mPOST /9510.php \x1b[33m404 \x1b[0m0.305 ms - 148\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.262 ms - 2\x1b[0m\n\x1b[0mPOST /python.php \x1b[33m404 \x1b[0m0.308 ms - 150\x1b[0m\n\x1b[0mPOST /default.php \x1b[33m404 \x1b[0m0.304 ms - 151\x1b[0m\n\x1b[0mPOST /sean.php \x1b[33m404 \x1b[0m0.333 ms - 148\x1b[0m\n\x1b[0mPOST /app.php \x1b[33m404 \x1b[0m0.346 ms - 147\x1b[0m\n\x1b[0mPOST /help.php \x1b[33m404 \x1b[0m0.338 ms - 148\x1b[0m\n\x1b[0mPOST /tiandi.php \x1b[33m404 \x1b[0m0.317 ms - 150\x1b[0m\n\x1b[0mPOST /xz.php \x1b[33m404 \x1b[0m0.325 ms - 146\x1b[0m\n\x1b[0mPOST /beimeng.php \x1b[33m404 \x1b[0m0.331 ms - 151\x1b[0m\n\x1b[0mPOST /linuxse.php \x1b[33m404 \x1b[0m0.302 ms - 151\x1b[0m\n\x1b[0mPOST /zuoindex.php \x1b[33m404 \x1b[0m0.306 ms - 152\x1b[0m\n\x1b[0mPOST /zshmindex.php \x1b[33m404 \x1b[0m0.419 ms - 153\x1b[0m\n\x1b[0mPOST /tomcat.php \x1b[33m404 \x1b[0m0.313 ms - 150\x1b[0m\n\x1b[0mPOST /ceshi.php \x1b[33m404 \x1b[0m0.315 ms - 149\x1b[0m\n\x1b[0mPOST /1hou.php \x1b[33m404 \x1b[0m0.309 ms - 148\x1b[0m\n\x1b[0mPOST /ou2.php \x1b[33m404 \x1b[0m0.340 ms - 147\x1b[0m\n\x1b[0mPOST /zuos.php \x1b[33m404 \x1b[0m0.304 ms - 148\x1b[0m\n\x1b[0mPOST /zuoss.php \x1b[33m404 \x1b[0m0.305 ms - 149\x1b[0m\n\x1b[0mPOST /zuoshss.php \x1b[33m404 \x1b[0m0.309 ms - 151\x1b[0m\n\x1b[0mPOST /789056.php \x1b[33m404 \x1b[0m0.389 ms - 150\x1b[0m\n\x1b[0mPOST /abc776.php \x1b[33m404 \x1b[0m0.343 ms - 150\x1b[0m\n\x1b[0mPOST /afafaf.php \x1b[33m404 \x1b[0m0.575 ms - 150\x1b[0m\n\x1b[0mPOST /jyyy.php \x1b[33m404 \x1b[0m0.305 ms - 148\x1b[0m\n\x1b[0mPOST /ooo23.php \x1b[33m404 \x1b[0m0.343 ms - 149\x1b[0m\n\x1b[0mPOST /htfr.php \x1b[33m404 \x1b[0m0.306 ms - 148\x1b[0m\n\x1b[0mPOST /boots.php \x1b[33m404 \x1b[0m0.918 ms - 149\x1b[0m\n\x1b[0mPOST /she.php \x1b[33m404 \x1b[0m0.313 ms - 147\x1b[0m\n\x1b[0mPOST /s.php \x1b[33m404 \x1b[0m0.300 ms - 145\x1b[0m\n\x1b[0mPOST /qw.php \x1b[33m404 \x1b[0m0.359 ms - 146\x1b[0m\n\x1b[0mPOST /test.php \x1b[33m404 \x1b[0m0.310 ms - 148\x1b[0m\n\x1b[0mPOST /caonma.php \x1b[33m404 \x1b[0m0.333 ms - 150\x1b[0m\n\x1b[0mPOST /wcp.php \x1b[33m404 \x1b[0m0.309 ms - 147\x1b[0m\n\x1b[0mPOST /u.php \x1b[33m404 \x1b[0m0.339 ms - 145\x1b[0m\n\x1b[0mPOST /uu.php \x1b[33m404 \x1b[0m0.311 ms - 146\x1b[0m\n\x1b[0mPOST /uuu.php \x1b[33m404 \x1b[0m0.317 ms - 147\x1b[0m\n\x1b[0mPOST /sss.php \x1b[33m404 \x1b[0m0.356 ms - 147\x1b[0m\n\x1b[0mPOST /ooo.php \x1b[33m404 \x1b[0m0.317 ms - 147\x1b[0m\n\x1b[0mPOST /ss.php \x1b[33m404 \x1b[0m0.449 ms - 146\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mPOST /ss.php 
\x1b[33m404 \x1b[0m0.322 ms - 146\x1b[0m\n\x1b[0mPOST /sss.php \x1b[33m404 \x1b[0m0.311 ms - 147\x1b[0m\n\x1b[0mPOST /mazi.php \x1b[33m404 \x1b[0m0.312 ms - 148\x1b[0m\n\x1b[0mPOST /phpini.php \x1b[33m404 \x1b[0m0.309 ms - 150\x1b[0m\n\x1b[0mPOST /1.php \x1b[33m404 \x1b[0m0.323 ms - 145\x1b[0m\n\x1b[0mPOST /2.php \x1b[33m404 \x1b[0m0.321 ms - 145\x1b[0m\n\x1b[0mPOST /core.php \x1b[33m404 \x1b[0m0.320 ms - 148\x1b[0m\n\x1b[0mPOST /qaz.php \x1b[33m404 \x1b[0m0.513 ms - 147\x1b[0m\n\x1b[0mPOST /sha.php \x1b[33m404 \x1b[0m0.323 ms - 147\x1b[0m\n\x1b[0mPOST /ppx.php \x1b[33m404 \x1b[0m0.322 ms - 147\x1b[0m\n\x1b[0mPOST /confg.php \x1b[33m404 \x1b[0m0.367 ms - 149\x1b[0m\n\x1b[0mPOST /conf1g.php \x1b[33m404 \x1b[0m0.344 ms - 150\x1b[0m\n\x1b[0mPOST /confg.php \x1b[33m404 \x1b[0m0.347 ms - 149\x1b[0m\n\x1b[0mPOST /confg.php \x1b[33m404 \x1b[0m0.344 ms - 149\x1b[0m\n\x1b[0mPOST /confg.php \x1b[33m404 \x1b[0m0.323 ms - 149\x1b[0m\n\x1b[0mPOST /ver.php \x1b[33m404 \x1b[0m0.754 ms - 147\x1b[0m\n\x1b[0mPOST /hack.php \x1b[33m404 \x1b[0m0.334 ms - 148\x1b[0m\n\x1b[0mPOST /hack.php \x1b[33m404 \x1b[0m0.360 ms - 148\x1b[0m\n\x1b[0mPOST /qa.php \x1b[33m404 \x1b[0m0.307 ms - 146\x1b[0m\n\x1b[0mPOST /Ss.php \x1b[33m404 \x1b[0m0.336 ms - 146\x1b[0m\n\x1b[0mPOST /xxx.php \x1b[33m404 \x1b[0m0.310 ms - 147\x1b[0m\n\x1b[0mPOST /92.php \x1b[33m404 \x1b[0m0.324 ms - 146\x1b[0m\n\x1b[0mPOST /z.php \x1b[33m404 \x1b[0m0.324 ms - 145\x1b[0m\n\x1b[0mPOST /x.php \x1b[33m404 \x1b[0m0.326 ms - 145\x1b[0m\n\x1b[0mPOST /dexgp.php \x1b[33m404 \x1b[0m0.347 ms - 149\x1b[0m\n\x1b[0mPOST /nuoxi.php \x1b[33m404 \x1b[0m0.391 ms - 149\x1b[0m\n\x1b[0mPOST /godkey.php \x1b[33m404 \x1b[0m0.335 ms - 150\x1b[0m\n\x1b[0mPOST /okokok.php \x1b[33m404 \x1b[0m0.304 ms - 150\x1b[0m\n\x1b[0mPOST /erwa.php \x1b[33m404 \x1b[0m0.328 ms - 148\x1b[0m\n\x1b[0mPOST /pma.php \x1b[33m404 \x1b[0m0.334 ms - 147\x1b[0m\n\x1b[0mPOST /ruyi.php \x1b[33m404 \x1b[0m0.423 ms - 148\x1b[0m\n\x1b[0mPOST /51314.php \x1b[33m404 \x1b[0m0.520 ms - 149\x1b[0m\n\x1b[0mPOST /5201314.php \x1b[33m404 \x1b[0m0.311 ms - 151\x1b[0m\n\x1b[0mPOST /fusheng.php \x1b[33m404 \x1b[0m0.369 ms - 151\x1b[0m\n\x1b[0mPOST /general.php \x1b[33m404 \x1b[0m0.359 ms - 151\x1b[0m\n\x1b[0mPOST /repeat.php \x1b[33m404 \x1b[0m0.331 ms - 150\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.280 ms - 2\x1b[0m\n\x1b[0mPOST /ldw.php \x1b[33m404 \x1b[0m0.323 ms - 147\x1b[0m\n\x1b[0mPOST /api.php \x1b[33m404 \x1b[0m0.337 ms - 147\x1b[0m\n\x1b[0mPOST /s1.php \x1b[33m404 \x1b[0m0.336 ms - 146\x1b[0m\n\x1b[0mPOST /hello.php \x1b[33m404 \x1b[0m0.357 ms - 149\x1b[0m\n\x1b[0mPOST /hello.php \x1b[33m404 \x1b[0m0.304 ms - 149\x1b[0m\n\x1b[0mPOST /admn.php \x1b[33m404 \x1b[0m0.336 ms - 148\x1b[0m\n\x1b[0mPOST /hell.php \x1b[33m404 \x1b[0m0.316 ms - 148\x1b[0m\n\x1b[0mPOST /hell.php \x1b[33m404 \x1b[0m0.307 ms - 148\x1b[0m\n\x1b[0mPOST /xp.php \x1b[33m404 \x1b[0m0.302 ms - 146\x1b[0m\n\x1b[0mPOST /1.php \x1b[33m404 \x1b[0m0.395 ms - 145\x1b[0m\n\x1b[0mPOST /2.php \x1b[33m404 \x1b[0m0.326 ms - 145\x1b[0m\n\x1b[0mPOST /p.php \x1b[33m404 \x1b[0m0.302 ms - 145\x1b[0m\n\x1b[0mPOST /1.php \x1b[33m404 \x1b[0m0.307 ms - 145\x1b[0m\n\x1b[0mPOST /a.php \x1b[33m404 \x1b[0m0.334 ms - 145\x1b[0m\n\x1b[0mPOST /m.php \x1b[33m404 \x1b[0m0.364 ms - 145\x1b[0m\n\x1b[0mPOST /conf.php \x1b[33m404 \x1b[0m0.328 ms - 148\x1b[0m\n\x1b[0mPOST /123.php \x1b[33m404 \x1b[0m0.314 ms - 147\x1b[0m\n\x1b[0mPOST /1234.php \x1b[33m404 \x1b[0m0.390 ms - 148\x1b[0m\n\x1b[0mPOST /HX.php \x1b[33m404 \x1b[0m0.312 ms - 146\x1b[0m\n\x1b[0mPOST /diy.php 
\x1b[33m404 \x1b[0m0.332 ms - 147\x1b[0m\n\x1b[0mPOST /666.php \x1b[33m404 \x1b[0m0.337 ms - 147\x1b[0m\n\x1b[0mPOST /777.php \x1b[33m404 \x1b[0m0.334 ms - 147\x1b[0m\n\x1b[0mPOST /qwq.php \x1b[33m404 \x1b[0m0.339 ms - 147\x1b[0m\n\x1b[0mPOST /qwqw.php \x1b[33m404 \x1b[0m0.388 ms - 148\x1b[0m\n\x1b[0mPOST /.php \x1b[33m404 \x1b[0m0.310 ms - 144\x1b[0m\n\x1b[0mPOST /infos.php \x1b[33m404 \x1b[0m0.315 ms - 149\x1b[0m\n\x1b[0mPOST /x.php \x1b[33m404 \x1b[0m0.305 ms - 145\x1b[0m\n\x1b[0mPOST /lucky.php \x1b[33m404 \x1b[0m0.335 ms - 149\x1b[0m\n\x1b[0mPOST /zzk.php \x1b[33m404 \x1b[0m0.350 ms - 147\x1b[0m\n\x1b[0mPOST /toor.php \x1b[33m404 \x1b[0m0.334 ms - 148\x1b[0m\n\x1b[0mPOST /uu.php \x1b[33m404 \x1b[0m0.316 ms - 146\x1b[0m\n\x1b[0mPOST /a.php \x1b[33m404 \x1b[0m0.849 ms - 145\x1b[0m\n\x1b[0mPOST /aaa.php \x1b[33m404 \x1b[0m0.416 ms - 147\x1b[0m\n\x1b[0mPOST /wb.php \x1b[33m404 \x1b[0m0.318 ms - 146\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.044 ms - 2\x1b[0m\n\x1b[0mPOST /yj.php \x1b[33m404 \x1b[0m0.321 ms - 146\x1b[0m\n\x1b[0mPOST /z.php \x1b[33m404 \x1b[0m0.345 ms - 145\x1b[0m\n\x1b[0mPOST /7.php \x1b[33m404 \x1b[0m0.365 ms - 145\x1b[0m\n\x1b[0mPOST /xiaoma.php \x1b[33m404 \x1b[0m0.312 ms - 150\x1b[0m\n\x1b[0mPOST /xiaomae.php \x1b[33m404 \x1b[0m0.308 ms - 151\x1b[0m\n\x1b[0mPOST /xiaomar.php \x1b[33m404 \x1b[0m0.315 ms - 151\x1b[0m\n\x1b[0mPOST /qq.php \x1b[33m404 \x1b[0m0.307 ms - 146\x1b[0m\n\x1b[0mPOST /data.php \x1b[33m404 \x1b[0m0.314 ms - 148\x1b[0m\n\x1b[0mPOST /log.php \x1b[33m404 \x1b[0m0.324 ms - 147\x1b[0m\n\x1b[0mPOST /fack.php \x1b[33m404 \x1b[0m0.391 ms - 148\x1b[0m\n\x1b[0mPOST /angge.php \x1b[33m404 \x1b[0m0.441 ms - 149\x1b[0m\n\x1b[0mPOST /cxfm666.php \x1b[33m404 \x1b[0m0.332 ms - 151\x1b[0m\n\x1b[0mPOST /db.php \x1b[33m404 \x1b[0m0.447 ms - 146\x1b[0m\n\x1b[0mPOST /hacly.php \x1b[33m404 \x1b[0m0.438 ms - 149\x1b[0m\n\x1b[0mPOST /xiaomo.php \x1b[33m404 \x1b[0m0.333 ms - 150\x1b[0m\n\x1b[0mPOST /xiaoyu.php \x1b[33m404 \x1b[0m0.353 ms - 150\x1b[0m\n\x1b[0mPOST /xiaohei.php \x1b[33m404 \x1b[0m0.437 ms - 151\x1b[0m\n\x1b[0mPOST /qq5262.php \x1b[33m404 \x1b[0m0.329 ms - 150\x1b[0m\n\x1b[0mPOST /lost.php \x1b[33m404 \x1b[0m0.322 ms - 148\x1b[0m\n\x1b[0mPOST /php.php \x1b[33m404 \x1b[0m0.372 ms - 147\x1b[0m\n\x1b[0mPOST /win.php \x1b[33m404 \x1b[0m0.331 ms - 147\x1b[0m\n\x1b[0mPOST /win1.php \x1b[33m404 \x1b[0m0.338 ms - 148\x1b[0m\n\x1b[0mPOST /linux.php \x1b[33m404 \x1b[0m0.430 ms - 149\x1b[0m\n\x1b[0mPOST /linux1.php \x1b[33m404 \x1b[0m0.309 ms - 150\x1b[0m\n\x1b[0mPOST /CC.php \x1b[33m404 \x1b[0m0.338 ms - 146\x1b[0m\n\x1b[0mPOST /x.php \x1b[33m404 \x1b[0m0.314 ms - 145\x1b[0m\n\x1b[0mPOST /lanke.php \x1b[33m404 \x1b[0m0.335 ms - 149\x1b[0m\n\x1b[0mPOST /neko.php \x1b[33m404 \x1b[0m0.304 ms - 148\x1b[0m\n\x1b[0mPOST /super.php \x1b[33m404 \x1b[0m0.348 ms - 149\x1b[0m\n\x1b[0mPOST /cer.php \x1b[33m404 \x1b[0m0.303 ms - 147\x1b[0m\n\x1b[0mPOST /cere.php \x1b[33m404 \x1b[0m0.311 ms - 148\x1b[0m\n\x1b[0mPOST /aaa.php \x1b[33m404 \x1b[0m0.336 ms - 147\x1b[0m\n\x1b[0mPOST /Administrator.php \x1b[33m404 \x1b[0m0.334 ms - 157\x1b[0m\n\x1b[0mPOST /liangchen.php \x1b[33m404 \x1b[0m0.331 ms - 153\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.278 ms - 2\x1b[0m\n\x1b[0mPOST /lucky.php \x1b[33m404 \x1b[0m0.286 ms - 149\x1b[0m\n\x1b[0mPOST /meng.php \x1b[33m404 \x1b[0m0.448 ms - 148\x1b[0m\n\x1b[0mPOST /no.php \x1b[33m404 \x1b[0m0.383 ms - 146\x1b[0m\n\x1b[0mPOST /mysql.php \x1b[33m404 \x1b[0m0.304 ms - 149\x1b[0m\n\x1b[0mPOST /Updata.php \x1b[33m404 \x1b[0m0.305 ms - 
150\x1b[0m\n\x1b[0mPOST /xxxx.php \x1b[33m404 \x1b[0m0.401 ms - 148\x1b[0m\n\x1b[0mPOST /guai.php \x1b[33m404 \x1b[0m0.603 ms - 148\x1b[0m\n\x1b[0mPOST /ljb.php \x1b[33m404 \x1b[0m0.388 ms - 147\x1b[0m\n\x1b[0mPOST /www.php \x1b[33m404 \x1b[0m0.351 ms - 147\x1b[0m\n\x1b[0mPOST /1.php \x1b[33m404 \x1b[0m0.305 ms - 145\x1b[0m\n\x1b[0mPOST /chaoda.php \x1b[33m404 \x1b[0m0.459 ms - 150\x1b[0m\n\x1b[0mPOST /qq.php \x1b[33m404 \x1b[0m0.685 ms - 146\x1b[0m\n\x1b[0mPOST /vuln.php \x1b[33m404 \x1b[0m0.296 ms - 148\x1b[0m\n\x1b[0mPOST /vuln1.php \x1b[33m404 \x1b[0m0.301 ms - 149\x1b[0m\n\x1b[0mPOST /orange.php \x1b[33m404 \x1b[0m0.369 ms - 150\x1b[0m\n\x1b[0mPOST /erba.php \x1b[33m404 \x1b[0m0.340 ms - 148\x1b[0m\n\x1b[0mPOST /link.php \x1b[33m404 \x1b[0m0.837 ms - 148\x1b[0m\n\x1b[0mPOST /linkr.php \x1b[33m404 \x1b[0m0.493 ms - 149\x1b[0m\n\x1b[0mPOST /linkx.php \x1b[33m404 \x1b[0m0.334 ms - 149\x1b[0m\n\x1b[0mPOST /kvast.php \x1b[33m404 \x1b[0m0.355 ms - 149\x1b[0m\n\x1b[0mPOST /xiaobin.php \x1b[33m404 \x1b[0m0.307 ms - 151\x1b[0m\n\x1b[0mPOST /ppp.php \x1b[33m404 \x1b[0m0.308 ms - 147\x1b[0m\n\x1b[0mPOST /ppp.php \x1b[33m404 \x1b[0m0.367 ms - 147\x1b[0m\n\x1b[0mPOST /lm.php \x1b[33m404 \x1b[0m0.306 ms - 146\x1b[0m\n\x1b[0mPOST /zzz.php \x1b[33m404 \x1b[0m0.359 ms - 147\x1b[0m\n\x1b[0mPOST /520.php \x1b[33m404 \x1b[0m0.354 ms - 147\x1b[0m\n\x1b[0mPOST /jkl.php \x1b[33m404 \x1b[0m0.386 ms - 147\x1b[0m\n\x1b[0mPOST /lmn.php \x1b[33m404 \x1b[0m0.413 ms - 147\x1b[0m\n\x1b[0mPOST /bx.php \x1b[33m404 \x1b[0m0.370 ms - 146\x1b[0m\n\x1b[0mPOST /Moxin.PHP \x1b[33m404 \x1b[0m0.328 ms - 149\x1b[0m\n\x1b[0mPOST /g.php \x1b[33m404 \x1b[0m0.334 ms - 145\x1b[0m\n\x1b[0mPOST /CCC.PHP \x1b[33m404 \x1b[0m0.302 ms - 147\x1b[0m\n\x1b[0mPOST /CCCC.PHP \x1b[33m404 \x1b[0m0.312 ms - 148\x1b[0m\n\x1b[0mPOST /mobai.PHP \x1b[33m404 \x1b[0m0.468 ms - 149\x1b[0m\n\x1b[0mPOST /avast.php \x1b[33m404 \x1b[0m0.428 ms - 149\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.268 ms - 2\x1b[0m\n\x1b[0mPOST /abc.php \x1b[33m404 \x1b[0m0.366 ms - 147\x1b[0m\n\x1b[0mPOST /Pings.php \x1b[33m404 \x1b[0m0.337 ms - 149\x1b[0m\n\x1b[0mPOST /123.php \x1b[33m404 \x1b[0m0.297 ms - 147\x1b[0m\n\x1b[0mPOST /log.php \x1b[33m404 \x1b[0m0.299 ms - 147\x1b[0m\n\x1b[0mPOST /log.php \x1b[33m404 \x1b[0m0.351 ms - 147\x1b[0m\n\x1b[0mPOST /log1.php \x1b[33m404 \x1b[0m0.311 ms - 148\x1b[0m\n\x1b[0mPOST /alipay.php \x1b[33m404 \x1b[0m0.302 ms - 150\x1b[0m\n\x1b[0mPOST /vf.php \x1b[33m404 \x1b[0m0.769 ms - 146\x1b[0m\n\x1b[0mPOST /tianqi.php \x1b[33m404 \x1b[0m0.311 ms - 150\x1b[0m\n\x1b[0mPOST /can.php \x1b[33m404 \x1b[0m0.395 ms - 147\x1b[0m\n\x1b[0mPOST /can.php \x1b[33m404 \x1b[0m0.355 ms - 147\x1b[0m\n\x1b[0mPOST /dns.php \x1b[33m404 \x1b[0m0.315 ms - 147\x1b[0m\n\x1b[0mPOST /dns.php \x1b[33m404 \x1b[0m0.323 ms - 147\x1b[0m\n\x1b[0mPOST /cmd.php \x1b[33m404 \x1b[0m0.342 ms - 147\x1b[0m\n\x1b[0mPOST /juji.php \x1b[33m404 \x1b[0m0.563 ms - 148\x1b[0m\n\x1b[0mPOST /n24.php \x1b[33m404 \x1b[0m0.395 ms - 147\x1b[0m\n\x1b[0mPOST /temp.php \x1b[33m404 \x1b[0m0.302 ms - 148\x1b[0m\n\x1b[0mPOST /jiaochi.php \x1b[33m404 \x1b[0m0.374 ms - 151\x1b[0m\n\x1b[0mPOST /ganzhuolang.php \x1b[33m404 \x1b[0m0.323 ms - 155\x1b[0m\n\x1b[0mPOST /987.php \x1b[33m404 \x1b[0m0.306 ms - 147\x1b[0m\n\x1b[0mPOST /h156.php \x1b[33m404 \x1b[0m0.303 ms - 148\x1b[0m\n\x1b[0mPOST /666666.php \x1b[33m404 \x1b[0m0.319 ms - 150\x1b[0m\n\x1b[0mPOST /xh.php \x1b[33m404 \x1b[0m0.410 ms - 146\x1b[0m\n\x1b[0mPOST /key.php \x1b[33m404 \x1b[0m0.386 ms - 147\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 
\x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mPOST /jb.php \x1b[33m404 \x1b[0m0.380 ms - 146\x1b[0m\n\x1b[0mPOST /duke.php \x1b[33m404 \x1b[0m0.362 ms - 148\x1b[0m\n\x1b[0mPOST /llld.php \x1b[33m404 \x1b[0m0.329 ms - 148\x1b[0m\n\x1b[0mPOST /404.php \x1b[33m404 \x1b[0m0.365 ms - 147\x1b[0m\n\x1b[0mPOST /jy.php \x1b[33m404 \x1b[0m0.356 ms - 146\x1b[0m\n\x1b[0mPOST /123.php \x1b[33m404 \x1b[0m0.346 ms - 147\x1b[0m\n\x1b[0mPOST /v.php \x1b[33m404 \x1b[0m0.333 ms - 145\x1b[0m\n\x1b[0mPOST /luoke.php \x1b[33m404 \x1b[0m0.370 ms - 149\x1b[0m\n\x1b[0mPOST /nidage.php \x1b[33m404 \x1b[0m0.303 ms - 150\x1b[0m\n\x1b[0mPOST /sanan.php \x1b[33m404 \x1b[0m1.742 ms - 149\x1b[0m\n\x1b[0mPOST /02.php \x1b[33m404 \x1b[0m0.307 ms - 146\x1b[0m\n\x1b[0mPOST /ddd.php \x1b[33m404 \x1b[0m0.298 ms - 147\x1b[0m\n\x1b[0mPOST /mo.php \x1b[33m404 \x1b[0m0.303 ms - 146\x1b[0m\n\x1b[0mPOST /sbkc.php \x1b[33m404 \x1b[0m0.378 ms - 148\x1b[0m\n\x1b[0mPOST /sbkcb.php \x1b[33m404 \x1b[0m0.374 ms - 149\x1b[0m\n\x1b[0mPOST /cnm.php \x1b[33m404 \x1b[0m0.356 ms - 147\x1b[0m\n\x1b[0mPOST /tests.php \x1b[33m404 \x1b[0m0.429 ms - 149\x1b[0m\n\x1b[0mPOST /luoran.php \x1b[33m404 \x1b[0m0.345 ms - 150\x1b[0m\n\x1b[0mPOST /luoran6.php \x1b[33m404 \x1b[0m2.179 ms - 151\x1b[0m\n\x1b[0mPOST /asen.php \x1b[33m404 \x1b[0m0.329 ms - 148\x1b[0m\n\x1b[0mPOST /fx.php \x1b[33m404 \x1b[0m0.350 ms - 146\x1b[0m\n\x1b[0mPOST /hl.php \x1b[33m404 \x1b[0m0.316 ms - 146\x1b[0m\n\x1b[0mPOST /1556189185.php \x1b[33m404 \x1b[0m0.308 ms - 154\x1b[0m\n\x1b[0mPOST /que.php \x1b[33m404 \x1b[0m0.361 ms - 147\x1b[0m\n\x1b[0mPOST /shanzhi.php \x1b[33m404 \x1b[0m0.307 ms - 151\x1b[0m\n\x1b[0mPOST /yc.php \x1b[33m404 \x1b[0m0.434 ms - 146\x1b[0m\n\x1b[0mPOST /ycc.php \x1b[33m404 \x1b[0m0.338 ms - 147\x1b[0m\n\x1b[0mPOST /yccc.php \x1b[33m404 \x1b[0m0.297 ms - 148\x1b[0m\n\x1b[0mPOST /lr.php \x1b[33m404 \x1b[0m0.327 ms - 146\x1b[0m\n\x1b[0mPOST /lr.php \x1b[33m404 \x1b[0m0.311 ms - 146\x1b[0m\n\x1b[0mPOST /2.php \x1b[33m404 \x1b[0m0.309 ms - 145\x1b[0m\n\x1b[0mPOST /xixi.php \x1b[33m404 \x1b[0m0.353 ms - 148\x1b[0m\n\x1b[0mPOST /qiqi.php \x1b[33m404 \x1b[0m0.388 ms - 148\x1b[0m\n\x1b[0mPOST /qiqi11.php \x1b[33m404 \x1b[0m0.344 ms - 150\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.265 ms - 2\x1b[0m\n\x1b[0mPOST /ruii.php \x1b[33m404 \x1b[0m0.324 ms - 148\x1b[0m\n\x1b[0mPOST /ci.php \x1b[33m404 \x1b[0m0.351 ms - 146\x1b[0m\n\x1b[0mPOST /mutuba.php \x1b[33m404 \x1b[0m0.321 ms - 150\x1b[0m\n\x1b[0mPOST /taocishun.php \x1b[33m404 \x1b[0m0.328 ms - 153\x1b[0m\n\x1b[0mPOST /gg.php \x1b[33m404 \x1b[0m0.312 ms - 146\x1b[0m\n\x1b[0mPOST /xiong.php \x1b[33m404 \x1b[0m0.300 ms - 149\x1b[0m\n\x1b[0mPOST /jing.php \x1b[33m404 \x1b[0m0.299 ms - 148\x1b[0m\n\x1b[0mPOST /ganshiqiang.php \x1b[33m404 \x1b[0m0.392 ms - 155\x1b[0m\n\x1b[0mPOST /n23.php \x1b[33m404 \x1b[0m0.289 ms - 147\x1b[0m\n\x1b[0mPOST /infos.php \x1b[33m404 \x1b[0m0.432 ms - 149\x1b[0m\n\x1b[0mPOST /api.php \x1b[33m404 \x1b[0m0.446 ms - 147\x1b[0m\n\x1b[0mPOST /zxc.php \x1b[33m404 \x1b[0m0.328 ms - 147\x1b[0m\n\x1b[0mPOST /sqlk.php \x1b[33m404 \x1b[0m0.327 ms - 148\x1b[0m\n\x1b[0mPOST /xx33.php \x1b[33m404 \x1b[0m0.322 ms - 148\x1b[0m\n\x1b[0mPOST /aotian.php \x1b[33m404 \x1b[0m0.419 ms - 150\x1b[0m\n\x1b[0mPOST /buluya.php \x1b[33m404 \x1b[0m0.300 ms - 150\x1b[0m\n\x1b[0mPOST /oumi.php \x1b[33m404 \x1b[0m0.319 ms - 148\x1b[0m\n\x1b[0mPOST /qiangkezhi.php \x1b[33m404 \x1b[0m0.287 ms - 154\x1b[0m\n\x1b[0mPOST /ce.PHP \x1b[33m404 \x1b[0m0.406 ms - 146\x1b[0m\n\x1b[0mPOST /cs.php \x1b[33m404 \x1b[0m0.308 ms - 
146\x1b[0m\n\x1b[0mPOST /ww.php \x1b[33m404 \x1b[0m0.312 ms - 146\x1b[0m\n\x1b[0mPOST /zyc.php \x1b[33m404 \x1b[0m0.316 ms - 147\x1b[0m\n\x1b[0mPOST /inde.php \x1b[33m404 \x1b[0m0.364 ms - 148\x1b[0m\n\x1b[0mPOST /1.php \x1b[33m404 \x1b[0m0.288 ms - 145\x1b[0m\n\x1b[0mPOST /info8.php \x1b[33m404 \x1b[0m0.318 ms - 149\x1b[0m\n\x1b[0mPOST /qqq.php \x1b[33m404 \x1b[0m0.352 ms - 147\x1b[0m\n\x1b[0mPOST /lequ.php \x1b[33m404 \x1b[0m0.325 ms - 148\x1b[0m\n\x1b[0mPOST /anyi.php \x1b[33m404 \x1b[0m0.293 ms - 148\x1b[0m\n\x1b[0mPOST /user.php \x1b[33m404 \x1b[0m0.869 ms - 148\x1b[0m\n\x1b[0mPOST /xiao.php \x1b[33m404 \x1b[0m0.293 ms - 148\x1b[0m\n\x1b[0mPOST /wanmei.php \x1b[33m404 \x1b[0m0.294 ms - 150\x1b[0m\n\x1b[0mPOST /wuwu.php \x1b[33m404 \x1b[0m0.313 ms - 148\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.255 ms - 2\x1b[0m\n\x1b[0mPOST /bf.php \x1b[33m404 \x1b[0m0.303 ms - 146\x1b[0m\n\x1b[0mPOST /bf.php \x1b[33m404 \x1b[0m0.310 ms - 146\x1b[0m\n\x1b[0mPOST /bn.php \x1b[33m404 \x1b[0m0.296 ms - 146\x1b[0m\n\x1b[0mPOST /gsy.php \x1b[33m404 \x1b[0m0.293 ms - 147\x1b[0m\n\x1b[0mPOST /iis.php \x1b[33m404 \x1b[0m0.286 ms - 147\x1b[0m\n\x1b[0mPOST /zxy.php \x1b[33m404 \x1b[0m0.308 ms - 147\x1b[0m\n\x1b[0mPOST /zxy.php \x1b[33m404 \x1b[0m0.377 ms - 147\x1b[0m\n\x1b[0mPOST /zxy.php \x1b[33m404 \x1b[0m0.486 ms - 147\x1b[0m\n\x1b[0mPOST /yyx.php \x1b[33m404 \x1b[0m0.313 ms - 147\x1b[0m\n\x1b[0mPOST /ml.php \x1b[33m404 \x1b[0m0.316 ms - 146\x1b[0m\n\x1b[0mPOST /xs.php \x1b[33m404 \x1b[0m0.341 ms - 146\x1b[0m\n\x1b[0mPOST /phplil.php \x1b[33m404 \x1b[0m0.289 ms - 150\x1b[0m\n\x1b[0mPOST /config.inc.php \x1b[33m404 \x1b[0m0.416 ms - 154\x1b[0m\n\x1b[0mPOST /ss.php \x1b[33m404 \x1b[0m0.318 ms - 146\x1b[0m\n\x1b[0mPOST /ll.php \x1b[33m404 \x1b[0m0.409 ms - 146\x1b[0m\n\x1b[0mPOST /secure.php \x1b[33m404 \x1b[0m0.326 ms - 150\x1b[0m\n\x1b[0mPOST /secure.php \x1b[33m404 \x1b[0m0.322 ms - 150\x1b[0m\n\x1b[0mPOST /secure1.php \x1b[33m404 \x1b[0m0.288 ms - 151\x1b[0m\n\x1b[0mPOST /7.php \x1b[33m404 \x1b[0m0.302 ms - 145\x1b[0m\n\x1b[0mPOST /go.php \x1b[33m404 \x1b[0m0.759 ms - 146\x1b[0m\n\x1b[0mPOST /web.php \x1b[33m404 \x1b[0m0.326 ms - 147\x1b[0m\n\x1b[0mPOST /wulv.php \x1b[33m404 \x1b[0m0.322 ms - 148\x1b[0m\n\x1b[0mPOST /xiaomi.php \x1b[33m404 \x1b[0m0.414 ms - 150\x1b[0m\n\x1b[0mPOST /fans.php \x1b[33m404 \x1b[0m0.294 ms - 148\x1b[0m\n\x1b[0mPOST /infos.php \x1b[33m404 \x1b[0m0.314 ms - 149\x1b[0m\n\x1b[0mPOST /phpinf.php \x1b[33m404 \x1b[0m0.345 ms - 150\x1b[0m\n\x1b[0mPOST /MCLi.php \x1b[33m404 \x1b[0m0.306 ms - 148\x1b[0m\n\x1b[0mPOST /MCLi.php \x1b[33m404 \x1b[0m0.320 ms - 148\x1b[0m\n\x1b[0mPOST /coon.php \x1b[33m404 \x1b[0m0.344 ms - 148\x1b[0m\n\x1b[0mPOST /1.php \x1b[33m404 \x1b[0m0.339 ms - 145\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.275 ms - 2\x1b[0m\n\x1b[0mPOST /6.php \x1b[33m404 \x1b[0m0.326 ms - 145\x1b[0m\n\x1b[0mPOST /d.php \x1b[33m404 \x1b[0m0.316 ms - 145\x1b[0m\n\x1b[0mPOST /function.inc.php \x1b[33m404 \x1b[0m0.433 ms - 156\x1b[0m\n\x1b[0mPOST /userr.php \x1b[33m404 \x1b[0m0.336 ms - 149\x1b[0m\n\x1b[0mPOST /ysy.php \x1b[33m404 \x1b[0m0.350 ms - 147\x1b[0m\n\x1b[0mPOST /3.php \x1b[33m404 \x1b[0m0.348 ms - 145\x1b[0m\n\x1b[0mPOST /zxc.php \x1b[33m404 \x1b[0m0.316 ms - 147\x1b[0m\n\x1b[0mPOST /Hzllaga.php \x1b[33m404 \x1b[0m0.327 ms - 151\x1b[0m\n\x1b[0mPOST /inc.php \x1b[33m404 \x1b[0m0.299 ms - 147\x1b[0m\n\x1b[0mPOST /webconfig.php \x1b[33m404 \x1b[0m0.339 ms - 153\x1b[0m\n\x1b[0mPOST /code.php \x1b[33m404 \x1b[0m0.403 ms - 148\x1b[0m\n\x1b[0mPOST /temtel.php \x1b[33m404 
\x1b[0m0.370 ms - 150\x1b[0m\n\x1b[0mPOST /data.php \x1b[33m404 \x1b[0m0.314 ms - 148\x1b[0m\n\x1b[0mPOST /fuck.php \x1b[33m404 \x1b[0m0.341 ms - 148\x1b[0m\n\x1b[0mPOST /.config.php \x1b[33m404 \x1b[0m0.302 ms - 151\x1b[0m\n\x1b[0mPOST /test.php \x1b[33m404 \x1b[0m0.340 ms - 148\x1b[0m\n\x1b[0mPOST /cron.php \x1b[33m404 \x1b[0m0.321 ms - 148\x1b[0m\n\x1b[0mPOST /v.php \x1b[33m404 \x1b[0m0.825 ms - 145\x1b[0m\n\x1b[0mPOST /vulnspy.php \x1b[33m404 \x1b[0m0.321 ms - 151\x1b[0m\n\x1b[0mPOST /jsc.php \x1b[33m404 \x1b[0m0.302 ms - 147\x1b[0m\n\x1b[0mPOST /soga.php \x1b[33m404 \x1b[0m0.320 ms - 148\x1b[0m\n\x1b[0mPOST /in.php \x1b[33m404 \x1b[0m0.318 ms - 146\x1b[0m\n\x1b[0mPOST /zxc1.php \x1b[33m404 \x1b[0m0.311 ms - 148\x1b[0m\n\x1b[0mPOST /zxc0.php \x1b[33m404 \x1b[0m0.322 ms - 148\x1b[0m\n\x1b[0mPOST /zxc1.php \x1b[33m404 \x1b[0m0.319 ms - 148\x1b[0m\n\x1b[0mPOST /zxc2.php \x1b[33m404 \x1b[0m0.356 ms - 148\x1b[0m\n\x1b[0mPOST /indexa.php \x1b[33m404 \x1b[0m0.421 ms - 150\x1b[0m\n\x1b[0mPOST /lx.php \x1b[33m404 \x1b[0m0.293 ms - 146\x1b[0m\n\x1b[0mPOST /cn.php \x1b[33m404 \x1b[0m0.316 ms - 146\x1b[0m\n\x1b[0mPOST /api.php \x1b[33m404 \x1b[0m0.323 ms - 147\x1b[0m\n\x1b[0mPOST /index1.php \x1b[33m404 \x1b[0m0.322 ms - 150\x1b[0m\n\x1b[0mPOST /info.php \x1b[33m404 \x1b[0m0.298 ms - 148\x1b[0m\n\x1b[0mPOST /info1.php \x1b[33m404 \x1b[0m0.302 ms - 149\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.285 ms - 2\x1b[0m\n\x1b[0mPOST /aaaaaa1.php \x1b[33m404 \x1b[0m0.305 ms - 151\x1b[0m\n\x1b[0mPOST /up.php \x1b[33m404 \x1b[0m0.295 ms - 146\x1b[0m\n\x1b[0mPOST /test123.php \x1b[33m404 \x1b[0m0.340 ms - 151\x1b[0m\n\x1b[0mPOST /test123.php \x1b[33m404 \x1b[0m0.320 ms - 151\x1b[0m\n\x1b[0mPOST /fb.php \x1b[33m404 \x1b[0m0.317 ms - 146\x1b[0m\n\x1b[0mPOST /paylog.php \x1b[33m404 \x1b[0m0.303 ms - 150\x1b[0m\n\x1b[0mPOST /paylog.php \x1b[33m404 \x1b[0m0.337 ms - 150\x1b[0m\n\x1b[0mPOST /x.php \x1b[33m404 \x1b[0m0.290 ms - 145\x1b[0m\n\x1b[0mPOST /cnm.php \x1b[33m404 \x1b[0m0.371 ms - 147\x1b[0m\n\x1b[0mPOST /test404.php \x1b[33m404 \x1b[0m0.295 ms - 151\x1b[0m\n\x1b[0mPOST /test.php \x1b[33m404 \x1b[0m0.415 ms - 148\x1b[0m\n\x1b[0mPOST /phpinf0.php \x1b[33m404 \x1b[0m0.309 ms - 151\x1b[0m\n\x1b[0mPOST /1ndex.php \x1b[33m404 \x1b[0m0.323 ms - 149\x1b[0m\n\x1b[0mPOST /autoloader.php \x1b[33m404 \x1b[0m0.743 ms - 154\x1b[0m\n\x1b[0mPOST /class1.php \x1b[33m404 \x1b[0m0.311 ms - 150\x1b[0m\n\x1b[0mPOST /test404.php \x1b[33m404 \x1b[0m0.295 ms - 151\x1b[0m\n\x1b[0mPOST /shi.php \x1b[33m404 \x1b[0m0.313 ms - 147\x1b[0m\n\x1b[0mPOST /think.php \x1b[33m404 \x1b[0m0.317 ms - 149\x1b[0m\n\x1b[0mPOST /back.php \x1b[33m404 \x1b[0m0.329 ms - 148\x1b[0m\n\x1b[0mPOST /DJ.php \x1b[33m404 \x1b[0m0.297 ms - 146\x1b[0m\n\x1b[0mPOST /.git.php \x1b[33m404 \x1b[0m0.289 ms - 148\x1b[0m\n\x1b[0mPOST /shipu.php \x1b[33m404 \x1b[0m0.310 ms - 149\x1b[0m\n\x1b[0mPOST /fantao.php \x1b[33m404 \x1b[0m0.298 ms - 150\x1b[0m\n\x1b[0mPOST /config.php \x1b[33m404 \x1b[0m0.325 ms - 150\x1b[0m\n\x1b[0mPOST /Config_Shell.php \x1b[33m404 \x1b[0m0.346 ms - 156\x1b[0m\n\x1b[0mPOST /fdgq.php \x1b[33m404 \x1b[0m0.320 ms - 148\x1b[0m\n\x1b[0mPOST /info.php \x1b[33m404 \x1b[0m0.289 ms - 148\x1b[0m\n\x1b[0mPOST /51.php \x1b[33m404 \x1b[0m0.324 ms - 146\x1b[0m\n\x1b[0mPOST /cadre.php \x1b[33m404 \x1b[0m0.286 ms - 149\x1b[0m\n\x1b[0mPOST /mm.php \x1b[33m404 \x1b[0m0.367 ms - 146\x1b[0m\n\x1b[0mPOST /test.php \x1b[33m404 \x1b[0m0.332 ms - 148\x1b[0m\n\x1b[0mPOST /1q.php \x1b[33m404 \x1b[0m0.289 ms - 146\x1b[0m\n\x1b[0mPOST /1111.php \x1b[33m404 
\x1b[0m0.304 ms - 148\x1b[0m\n\x1b[0mPOST /errors.php \x1b[33m404 \x1b[0m0.312 ms - 150\x1b[0m\n\x1b[0mPOST /q.php \x1b[33m404 \x1b[0m0.289 ms - 145\x1b[0m\n\x1b[0mPOST /lanyecn.php \x1b[33m404 \x1b[0m0.321 ms - 151\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.279 ms - 2\x1b[0m\n\x1b[0mPOST /lanyecn.php \x1b[33m404 \x1b[0m0.291 ms - 151\x1b[0m\n\x1b[0mPOST /mybestloves.php \x1b[33m404 \x1b[0m0.469 ms - 155\x1b[0m\n\x1b[0mPOST /xiaoxi.php \x1b[33m404 \x1b[0m0.365 ms - 150\x1b[0m\n\x1b[0mPOST /xiaoxi.php \x1b[33m404 \x1b[0m0.294 ms - 150\x1b[0m\n\x1b[0mPOST /ww.php \x1b[33m404 \x1b[0m0.292 ms - 146\x1b[0m\n\x1b[0mPOST /pop.php \x1b[33m404 \x1b[0m0.727 ms - 147\x1b[0m\n\x1b[0mPOST /ok.php \x1b[33m404 \x1b[0m0.312 ms - 146\x1b[0m\n\x1b[0mPOST /test.php \x1b[33m404 \x1b[0m0.385 ms - 148\x1b[0m\n\x1b[0mPOST /conf.php \x1b[33m404 \x1b[0m0.288 ms - 148\x1b[0m\n\x1b[0mPOST /dashu.php \x1b[33m404 \x1b[0m0.286 ms - 149\x1b[0m\n\x1b[0mPOST /shell.php \x1b[33m404 \x1b[0m0.295 ms - 149\x1b[0m\n\x1b[0mPOST /queqiao.php \x1b[33m404 \x1b[0m0.320 ms - 151\x1b[0m\n\x1b[0mPOST /12345.php \x1b[33m404 \x1b[0m0.386 ms - 149\x1b[0m\n\x1b[0mPOST /qqq.php \x1b[33m404 \x1b[0m0.310 ms - 147\x1b[0m\n\x1b[0mPOST /15.php \x1b[33m404 \x1b[0m0.298 ms - 146\x1b[0m\n\x1b[0mPOST /slider.php \x1b[33m404 \x1b[0m0.347 ms - 150\x1b[0m\n\x1b[0mPOST /qunhuang.php \x1b[33m404 \x1b[0m0.313 ms - 152\x1b[0m\n\x1b[0mPOST /hannan.php \x1b[33m404 \x1b[0m0.389 ms - 150\x1b[0m\n\x1b[0mPOST /confie.php \x1b[33m404 \x1b[0m0.336 ms - 150\x1b[0m\n\x1b[0mPOST /igo.php \x1b[33m404 \x1b[0m0.320 ms - 147\x1b[0m\n\x1b[0mPOST /code.php \x1b[33m404 \x1b[0m0.298 ms - 148\x1b[0m\n\x1b[0mPOST /ss.php \x1b[33m404 \x1b[0m0.322 ms - 146\x1b[0m\n\x1b[0mPOST /php.php \x1b[33m404 \x1b[0m0.313 ms - 147\x1b[0m\n\x1b[0mPOST /about.php \x1b[33m404 \x1b[0m0.317 ms - 149\x1b[0m\n\x1b[0mPOST /incs.php \x1b[33m404 \x1b[0m0.324 ms - 148\x1b[0m\n\x1b[0mPOST /159.php \x1b[33m404 \x1b[0m0.292 ms - 147\x1b[0m\n\x1b[0mPOST /test.php \x1b[33m404 \x1b[0m0.360 ms - 148\x1b[0m\n\x1b[0mPOST /test1.php \x1b[33m404 \x1b[0m0.289 ms - 149\x1b[0m\n\x1b[0mPOST /images/1.php \x1b[33m404 \x1b[0m0.432 ms - 152\x1b[0m\n\x1b[0mPOST /images/asp.php \x1b[33m404 \x1b[0m0.302 ms - 154\x1b[0m\n\x1b[0mPOST /images/entyy.php \x1b[33m404 \x1b[0m0.401 ms - 156\x1b[0m\n\x1b[0mPOST /images/1ndex.php \x1b[33m404 \x1b[0m0.333 ms - 156\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.278 ms - 2\x1b[0m\n\x1b[0mPOST /images/defau1t.php \x1b[33m404 \x1b[0m0.294 ms - 158\x1b[0m\n\x1b[0mPOST /webconfig.txt.php \x1b[33m404 \x1b[0m0.299 ms - 157\x1b[0m\n\x1b[0mPOST /administrator/webconfig.txt.php \x1b[33m404 \x1b[0m0.809 ms - 171\x1b[0m\n\x1b[0mPOST /api.php \x1b[33m404 \x1b[0m0.333 ms - 147\x1b[0m\n\x1b[0mPOST /luso.php \x1b[33m404 \x1b[0m0.340 ms - 148\x1b[0m\n\x1b[0mPOST /1ndex.php \x1b[33m404 \x1b[0m0.291 ms - 149\x1b[0m\n\x1b[0mPOST /indexbak.php \x1b[33m404 \x1b[0m0.305 ms - 152\x1b[0m\n\x1b[0mPOST /4o4.php \x1b[33m404 \x1b[0m0.336 ms - 147\x1b[0m\n\x1b[0mPOST /xmlrpc.php \x1b[33m404 \x1b[0m0.312 ms - 150\x1b[0m\n\x1b[0mPOST /blog/xmlrpc.php \x1b[33m404 \x1b[0m0.405 ms - 155\x1b[0m\n\x1b[0mPOST /errors/processor.php \x1b[33m404 \x1b[0m0.329 ms - 160\x1b[0m\n\x1b[0mPOST /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php \x1b[33m404 \x1b[0m0.434 ms - 190\x1b[0m\n\x1b[0mPOST /plus/90sec.php \x1b[33m404 \x1b[0m0.318 ms - 154\x1b[0m\n\x1b[0mPOST /plus/read.php \x1b[33m404 \x1b[0m0.324 ms - 153\x1b[0m\n\x1b[0mPOST /plus/moon.php \x1b[33m404 \x1b[0m0.298 ms - 153\x1b[0m\n\x1b[0mPOST /plus/laobiao.php 
\x1b[33m404 \x1b[0m0.284 ms - 156\x1b[0m\n\x1b[0mPOST /plus/laobiaoaien.php \x1b[33m404 \x1b[0m0.411 ms - 160\x1b[0m\n\x1b[0mPOST /plus/e7xue.php \x1b[33m404 \x1b[0m0.350 ms - 154\x1b[0m\n\x1b[0mPOST /plus/mybak.php \x1b[33m404 \x1b[0m0.351 ms - 154\x1b[0m\n\x1b[0mPOST /plus/service.php \x1b[33m404 \x1b[0m0.321 ms - 156\x1b[0m\n\x1b[0mPOST /plus/xsvip.php \x1b[33m404 \x1b[0m0.441 ms - 154\x1b[0m\n\x1b[0mPOST /plus/bakup.php \x1b[33m404 \x1b[0m0.417 ms - 154\x1b[0m\n\x1b[0mPOST /include/tags.php \x1b[33m404 \x1b[0m0.369 ms - 156\x1b[0m\n\x1b[0mPOST /include/data/tags.php \x1b[33m404 \x1b[0m0.317 ms - 161\x1b[0m\n\x1b[0mPOST /images/swfupload/tags.php \x1b[33m404 \x1b[0m0.300 ms - 165\x1b[0m\n\x1b[0mPOST /dong.php \x1b[33m404 \x1b[0m0.290 ms - 148\x1b[0m\n\x1b[0mPOST /xun.php \x1b[33m404 \x1b[0m0.291 ms - 147\x1b[0m\n\x1b[0mPOST /plus/gu.php \x1b[33m404 \x1b[0m0.302 ms - 151\x1b[0m\n\x1b[0mPOST /plus/tou.php \x1b[33m404 \x1b[0m0.293 ms - 152\x1b[0m\n\x1b[0mPOST /plus/ma.php \x1b[33m404 \x1b[0m0.319 ms - 151\x1b[0m\n\x1b[0mPOST /plus/mytag.php \x1b[33m404 \x1b[0m0.289 ms - 154\x1b[0m\n\x1b[0mPOST /plus/dajihi.php \x1b[33m404 \x1b[0m0.295 ms - 155\x1b[0m\n\x1b[0mPOST /plus/shaoyong.php \x1b[33m404 \x1b[0m0.329 ms - 157\x1b[0m\n\x1b[0mPOST /datas.php \x1b[33m404 \x1b[0m0.325 ms - 149\x1b[0m\n\x1b[0mPOST /aojiao.php \x1b[33m404 \x1b[0m0.298 ms - 150\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.286 ms - 2\x1b[0m\n\x1b[0mPOST /guipu.php \x1b[33m404 \x1b[0m0.382 ms - 149\x1b[0m\n\x1b[0mPOST /zhui.php \x1b[33m404 \x1b[0m0.318 ms - 148\x1b[0m\n\x1b[0mPOST /plus/lucas.php \x1b[33m404 \x1b[0m0.324 ms - 154\x1b[0m\n\x1b[0mPOST /plus/canshi.php \x1b[33m404 \x1b[0m0.343 ms - 155\x1b[0m\n\x1b[0mPOST /plus/yunjitan.php \x1b[33m404 \x1b[0m0.288 ms - 157\x1b[0m\n\x1b[0mPOST /ji.php \x1b[33m404 \x1b[0m0.310 ms - 146\x1b[0m\n\x1b[0mPOST /xing.php \x1b[33m404 \x1b[0m0.287 ms - 148\x1b[0m\n\x1b[0mPOST /plus/huai.php \x1b[33m404 \x1b[0m0.303 ms - 153\x1b[0m\n\x1b[0mPOST /plus/qiang.php \x1b[33m404 \x1b[0m0.316 ms - 154\x1b[0m\n\x1b[0mPOST /plus/result.php \x1b[33m404 \x1b[0m0.331 ms - 155\x1b[0m\n\x1b[0mPOST /c.php \x1b[33m404 \x1b[0m0.437 ms - 145\x1b[0m\n\x1b[0mPOST /c.php \x1b[33m404 \x1b[0m0.302 ms - 145\x1b[0m\n\x1b[0mPOST /test.php \x1b[33m404 \x1b[0m0.324 ms - 148\x1b[0m\n\x1b[0mPOST /laobiao.php \x1b[33m404 \x1b[0m0.289 ms - 151\x1b[0m\n\x1b[0mPOST /sample.php \x1b[33m404 \x1b[0m0.327 ms - 150\x1b[0m\n\x1b[0mPOST /wp-includes/css/modules.php \x1b[33m404 \x1b[0m0.304 ms - 167\x1b[0m\n\x1b[0mPOST /wp-includes/css/wp-config.php \x1b[33m404 \x1b[0m0.310 ms - 169\x1b[0m\n\x1b[0mPOST /wp-includes/css/wp-login.php \x1b[33m404 \x1b[0m0.311 ms - 168\x1b[0m\n\x1b[0mPOST /wp-includes/fonts/modules.php \x1b[33m404 \x1b[0m0.326 ms - 169\x1b[0m\n\x1b[0mPOST /wp-includes/fonts/wp-config.php \x1b[33m404 \x1b[0m0.410 ms - 171\x1b[0m\n\x1b[0mPOST /wp-includes/fonts/wp-login.php \x1b[33m404 \x1b[0m0.298 ms - 170\x1b[0m\n\x1b[0mPOST /wp-includes/modules/modules.php \x1b[33m404 \x1b[0m0.294 ms - 171\x1b[0m\n\x1b[0mPOST /wp-includes/modules/wp-config.php \x1b[33m404 \x1b[0m0.331 ms - 173\x1b[0m\n\x1b[0mPOST /wp-includes/modules/wp-login.php \x1b[33m404 \x1b[0m0.317 ms - 172\x1b[0m\n\x1b[0mPOST /shell.php \x1b[33m404 \x1b[0m0.344 ms - 149\x1b[0m\n\x1b[0mPOST /data/admin/help.php \x1b[33m404 \x1b[0m0.287 ms - 159\x1b[0m\n\x1b[0mPOST /12.php \x1b[33m404 \x1b[0m0.288 ms - 146\x1b[0m\n\x1b[0mPOST /ecmsmod.php \x1b[33m404 \x1b[0m0.321 ms - 151\x1b[0m\n\x1b[0mGET 
/%73%65%65%79%6F%6E/%68%74%6D%6C%6F%66%66%69%63%65%73%65%72%76%6C%65%74 \x1b[33m404 \x1b[0m8.284 ms - 1050\x1b[0m\n\x1b[0mGET /secure/ContactAdministrators!default.jspa \x1b[33m404 \x1b[0m8.717 ms - 1050\x1b[0m\n\x1b[0mGET /weaver/bsh.servlet.BshServlet \x1b[33m404 \x1b[0m9.499 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.282 ms - 2\x1b[0m\n\x1b[0mGET /solr/ \x1b[33m404 \x1b[0m11.988 ms - 1050\x1b[0m\n\x1b[0mPOST /index.php \x1b[33m404 \x1b[0m0.498 ms - 149\x1b[0m\n\x1b[0mPOST /%75%73%65%72/%72%65%67%69%73%74%65%72?%65%6c%65%6d%65%6e%74%5f%70%61%72%65%6e%74%73=%74%69%6d%65%7a%6f%6e%65%2f%74%69%6d%65%7a%6f%6e%65%2f%23%76%61%6c%75%65&%61%6a%61%78%5f%66%6f%72%6d=1&%5f%77%72%61%70%70%65%72%5f%66%6f%72%6d%61%74=%64%72%75%70%61%6c%5f%61%6a%61%78 \x1b[33m404 \x1b[0m0.548 ms - 177\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m9.168 ms - 1050\x1b[0m\n\x1b[0mPOST /%75%73%65%72%2e%70%68%70 \x1b[33m404 \x1b[0m0.982 ms - 164\x1b[0m\n\x1b[0mGET /phpmyadmin/index.php \x1b[33m404 \x1b[0m9.417 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin/index.php \x1b[33m404 \x1b[0m7.489 ms - 1050\x1b[0m\n\x1b[0mGET /pmd/index.php \x1b[33m404 \x1b[0m7.566 ms - 1050\x1b[0m\n\x1b[0mGET /pma/index.php \x1b[33m404 \x1b[0m8.667 ms - 1050\x1b[0m\n\x1b[0mGET /PMA/index.php \x1b[33m404 \x1b[0m7.694 ms - 1050\x1b[0m\n\x1b[0mGET /PMA2/index.php \x1b[33m404 \x1b[0m12.864 ms - 1050\x1b[0m\n\x1b[0mGET /pmamy2/index.php \x1b[33m404 \x1b[0m8.154 ms - 1050\x1b[0m\n\x1b[0mGET /mysql/index.php \x1b[33m404 \x1b[0m8.099 ms - 1050\x1b[0m\n\x1b[0mGET /admin/index.php \x1b[33m404 \x1b[0m7.604 ms - 1050\x1b[0m\n\x1b[0mGET /db/index.php \x1b[33m404 \x1b[0m7.693 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.284 ms - 2\x1b[0m\n\x1b[0mGET /web/phpMyAdmin/index.php \x1b[33m404 \x1b[0m8.428 ms - 1050\x1b[0m\n\x1b[0mGET /admin/pma/index.php \x1b[33m404 \x1b[0m7.906 ms - 1050\x1b[0m\n\x1b[0mGET /admin/PMA/index.php \x1b[33m404 \x1b[0m8.477 ms - 1050\x1b[0m\n\x1b[0mGET /admin/mysql/index.php \x1b[33m404 \x1b[0m7.650 ms - 1050\x1b[0m\n\x1b[0mGET /admin/mysql2/index.php \x1b[33m404 \x1b[0m8.344 ms - 1050\x1b[0m\n\x1b[0mGET /admin/phpmyadmin/index.php \x1b[33m404 \x1b[0m7.271 ms - 1050\x1b[0m\n\x1b[0mGET /admin/phpmyadmin2/index.php \x1b[33m404 \x1b[0m7.724 ms - 1050\x1b[0m\n\x1b[0mGET /mysqladmin/index.php \x1b[33m404 \x1b[0m7.642 ms - 1050\x1b[0m\n\x1b[0mGET /mysql-admin/index.php \x1b[33m404 \x1b[0m7.316 ms - 1050\x1b[0m\n\x1b[0mGET /mysql_admin/index.php \x1b[33m404 \x1b[0m7.300 ms - 1050\x1b[0m\n\x1b[0mGET /phpadmin/index.php \x1b[33m404 \x1b[0m9.290 ms - 1050\x1b[0m\n\x1b[0mGET /phpAdmin/index.php \x1b[33m404 \x1b[0m7.516 ms - 1050\x1b[0m\n\x1b[0mGET /phpmyadmin1/index.php \x1b[33m404 \x1b[0m7.333 ms - 1050\x1b[0m\n\x1b[0mGET /phpmyadmin2/index.php \x1b[33m404 \x1b[0m7.411 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.280 ms - 2\x1b[0m\n\x1b[0mGET /phpMyAdmin-4.4.0/index.php \x1b[33m404 \x1b[0m8.593 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin4.8.0/index.php \x1b[33m404 \x1b[0m7.545 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin4.8.1/index.php \x1b[33m404 \x1b[0m6.966 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin4.8.2/index.php \x1b[33m404 \x1b[0m11.813 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin4.8.4/index.php \x1b[33m404 \x1b[0m9.029 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin4.8.5/index.php \x1b[33m404 \x1b[0m7.325 ms - 1050\x1b[0m\n\x1b[0mGET /myadmin/index.php \x1b[33m404 \x1b[0m7.325 ms - 1050\x1b[0m\n\x1b[0mGET /myadmin2/index.php \x1b[33m404 \x1b[0m12.581 ms - 1050\x1b[0m\n\x1b[0mGET /xampp/phpmyadmin/index.php \x1b[33m404 
\x1b[0m8.418 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyadmin_bak/index.php \x1b[33m404 \x1b[0m7.840 ms - 1050\x1b[0m\n\x1b[0mGET /tools/phpMyAdmin/index.php \x1b[33m404 \x1b[0m13.310 ms - 1050\x1b[0m\n\x1b[0mGET /phpmyadmin-old/index.php \x1b[33m404 \x1b[0m9.315 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdminold/index.php \x1b[33m404 \x1b[0m6.909 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin.old/index.php \x1b[33m404 \x1b[0m7.340 ms - 1050\x1b[0m\n\x1b[0mGET /pma-old/index.php \x1b[33m404 \x1b[0m7.340 ms - 1050\x1b[0m\n\x1b[0mGET /claroline/phpMyAdmin/index.php \x1b[33m404 \x1b[0m7.792 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.277 ms - 2\x1b[0m\n\x1b[0mGET /phpma/index.php \x1b[33m404 \x1b[0m11.150 ms - 1050\x1b[0m\n\x1b[0mGET /phpmyadmin/phpmyadmin/index.php \x1b[33m404 \x1b[0m7.493 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin/phpMyAdmin/index.php \x1b[33m404 \x1b[0m7.314 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAbmin/index.php \x1b[33m404 \x1b[0m8.413 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin__/index.php \x1b[33m404 \x1b[0m10.967 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin+++---/index.php \x1b[33m404 \x1b[0m7.826 ms - 1050\x1b[0m\n\x1b[0mGET /phpmyadm1n/index.php \x1b[33m404 \x1b[0m7.936 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdm1n/index.php \x1b[33m404 \x1b[0m8.554 ms - 1050\x1b[0m\n\x1b[0mGET /shaAdmin/index.php \x1b[33m404 \x1b[0m10.917 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyadmi/index.php \x1b[33m404 \x1b[0m8.810 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmion/index.php \x1b[33m404 \x1b[0m8.054 ms - 1050\x1b[0m\n\x1b[0mGET /s/index.php \x1b[33m404 \x1b[0m7.724 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin1/index.php \x1b[33m404 \x1b[0m10.982 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin123/index.php \x1b[33m404 \x1b[0m9.187 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.322 ms - 2\x1b[0m\n\x1b[0mGET /pwd/index.php \x1b[33m404 \x1b[0m10.111 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmina/index.php \x1b[33m404 \x1b[0m7.055 ms - 1050\x1b[0m\n\x1b[0mGET /phpMydmin/index.php \x1b[33m404 \x1b[0m7.893 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmins/index.php \x1b[33m404 \x1b[0m6.907 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin._2/index.php \x1b[33m404 \x1b[0m7.641 ms - 1050\x1b[0m\n\x1b[0mGET /phpmyadmin2222/index.php \x1b[33m404 \x1b[0m6.904 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin333/index.php \x1b[33m404 \x1b[0m7.023 ms - 1050\x1b[0m\n\x1b[0mGET /phpmyadmin3333/index.php \x1b[33m404 \x1b[0m13.198 ms - 1050\x1b[0m\n\x1b[0mGET /phpiMyAdmin/index.php \x1b[33m404 \x1b[0m9.404 ms - 1050\x1b[0m\n\x1b[0mGET /phpNyAdmin/index.php \x1b[33m404 \x1b[0m7.484 ms - 1050\x1b[0m\n\x1b[0mGET /1/index.php \x1b[33m404 \x1b[0m11.706 ms - 1050\x1b[0m\n\x1b[0mGET /download/index.php \x1b[33m404 \x1b[0m7.527 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmin_111/index.php \x1b[33m404 \x1b[0m7.899 ms - 1050\x1b[0m\n\x1b[0mGET /phpmadmin/index.php \x1b[33m404 \x1b[0m7.520 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /123131/index.php \x1b[33m404 \x1b[0m13.451 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdminn/index.php \x1b[33m404 \x1b[0m7.695 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdminhf/index.php \x1b[33m404 \x1b[0m7.024 ms - 1050\x1b[0m\n\x1b[0mGET /sbb/index.php \x1b[33m404 \x1b[0m8.475 ms - 1050\x1b[0m\n\x1b[0mGET /WWW/phpMyAdmin/index.php \x1b[33m404 \x1b[0m6.954 ms - 1050\x1b[0m\n\x1b[0mGET /phpMyAdmln/index.php \x1b[33m404 \x1b[0m7.090 ms - 1050\x1b[0m\n\x1b[0mGET /__phpMyAdmin/index.php \x1b[33m404 \x1b[0m7.830 ms - 1050\x1b[0m\n\x1b[0mGET /program/index.php \x1b[33m404 \x1b[0m8.227 ms - 1050\x1b[0m\n\x1b[0mGET 
/shopdb/index.php \x1b[33m404 \x1b[0m8.073 ms - 1050\x1b[0m\n\x1b[0mGET /phppma/index.php \x1b[33m404 \x1b[0m6.989 ms - 1050\x1b[0m\n\x1b[0mGET /phpmy/index.php \x1b[33m404 \x1b[0m11.999 ms - 1050\x1b[0m\n\x1b[0mGET /mysql/admin/index.php \x1b[33m404 \x1b[0m11.333 ms - 1050\x1b[0m\n\x1b[0mGET /mysql/sqlmanager/index.php \x1b[33m404 \x1b[0m7.766 ms - 1050\x1b[0m\n\x1b[0mGET /mysql/mysqlmanager/index.php \x1b[33m404 \x1b[0m6.839 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /wp-content/plugins/portable-phpmyadmin/wp-pma-mod/index.php \x1b[33m404 \x1b[0m7.514 ms - 1050\x1b[0m\n\x1b[0mGET /sqladmin/index.php \x1b[33m404 \x1b[0m6.957 ms - 1050\x1b[0m\n\x1b[0mGET /sql/index.php \x1b[33m404 \x1b[0m7.228 ms - 1050\x1b[0m\n\x1b[0mGET /SQL/index.php \x1b[33m404 \x1b[0m7.373 ms - 1050\x1b[0m\n\x1b[0mGET /MySQLAdmin/index.php \x1b[33m404 \x1b[0m6.880 ms - 1050\x1b[0m\n\x1b[0mGET /manager/html \x1b[33m404 \x1b[0m7.245 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.313 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m7.048 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.569 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.315 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.379 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.319 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.432 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.401 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.340 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.435 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m3.882 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.414 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.978 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.321 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.318 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.362 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.458 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.314 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.678 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m13.989 ms - 1050\x1b[0m\n\x1b[0mGET /robots.txt \x1b[33m404 \x1b[0m8.932 ms - 1050\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m9.339 ms - 1050\x1b[0m\n\x1b[0mGET /img/gardener-large.png \x1b[32m200 \x1b[0m2.639 ms - 14378\x1b[0m\n\x1b[0mGET /robots.txt \x1b[33m404 \x1b[0m8.636 ms - 1050\x1b[0m\n\x1b[0mGET /img/gardener-large.png \x1b[32m200 \x1b[0m0.645 ms - 14378\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.434 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.315 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.321 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.351 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.732 ms - 2\x1b[0m\n\x1b[0mGET /healthy 
\x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.314 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.321 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.449 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.315 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.313 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.346 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.421 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.339 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.315 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m2.211 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.347 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.398 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.504 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 
\x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.333 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m4.413 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.511 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.362 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.772 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.321 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.328 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.758 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.316 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.321 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.354 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.316 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.314 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 
ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.401 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.314 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.405 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.327 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.817 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.315 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.334 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.353 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.424 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.373 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.315 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.426 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.406 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.315 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.427 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.436 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.986 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.410 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.566 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.334 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.404 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.420 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.626 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.407 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.411 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.461 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.433 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.314 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.554 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.318 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.365 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.316 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.547 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.316 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.400 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.439 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.665 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.320 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.415 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.453 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.466 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.435 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.432 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.394 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.474 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.706 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.459 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.363 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.313 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.343 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.313 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.328 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.405 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m9.926 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.344 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.515 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.409 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.339 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.324 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.314 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m2.144 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.313 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.356 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.888 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.413 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.440 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.461 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.316 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.459 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.448 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.580 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.310 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.763 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.325 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.410 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.492 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.052 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.314 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.395 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.413 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.303 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.313 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.807 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.569 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.723 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.436 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.464 ms - 2\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m9.259 ms - 
1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.526 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.377 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m9.249 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.314 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.307 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.287 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.398 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.169 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.306 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.448 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.309 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.475 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.488 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.317 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.312 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.429 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m2.132 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.451 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m1.446 ms - 
2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m3.737 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.470 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.288 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.308 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.291 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.287 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.398 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.299 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.895 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.430 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.304 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.290 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.289 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.437 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.300 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.297 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.301 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.316 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.292 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.296 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.298 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET / \x1b[33m404 \x1b[0m8.646 ms - 1050\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.294 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.479 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.311 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.293 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.302 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.305 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.295 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.368 ms - 2\x1b[0m\n\x1b[0mGET /healthy \x1b[32m200 \x1b[0m0.446 ms - 2\x1b[0m\n==== END logs for container nginx-ingress-nginx-ingress-k8s-backend of pod kube-system/addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d ====\n==== START logs for container blackbox-exporter of pod kube-system/blackbox-exporter-54bb5f55cc-452fk ====\nlevel=info ts=2020-01-11T15:56:40.134603391Z caller=main.go:213 msg=\"Starting blackbox_exporter\" version=\"(version=0.14.0, branch=HEAD, revision=bba7ef76193948a333a5868a1ab38b864f7d968a)\"\nlevel=info ts=2020-01-11T15:56:40.13510694Z caller=main.go:226 msg=\"Loaded config file\"\nlevel=info ts=2020-01-11T15:56:40.135306275Z caller=main.go:330 msg=\"Listening on address\" address=:9115\n==== END logs for container blackbox-exporter of pod kube-system/blackbox-exporter-54bb5f55cc-452fk ====\n==== START logs for container calico-kube-controllers of pod kube-system/calico-kube-controllers-79bcd784b6-c46r9 ====\n2020-01-11 15:56:33.258 
[INFO][1] main.go 92: Loaded configuration from environment config=&config.Config{LogLevel:\"info\", ReconcilerPeriod:\"5m\", CompactionPeriod:\"10m\", EnabledControllers:\"node\", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:\"\", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:\"kubernetes\"}\n2020-01-11 15:56:33.260 [INFO][1] k8s.go 228: Using Calico IPAM\nW0111 15:56:33.260505 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.\n2020-01-11 15:56:33.261 [INFO][1] main.go 113: Ensuring Calico datastore is initialized\n2020-01-11 15:56:33.275 [INFO][1] main.go 175: Starting status report routine\n2020-01-11 15:56:33.275 [INFO][1] main.go 357: Starting controller ControllerType=\"Node\"\n2020-01-11 15:56:33.276 [INFO][1] node_controller.go 133: Starting Node controller\n2020-01-11 15:56:33.376 [INFO][1] node_controller.go 146: Node controller is now running\n2020-01-11 15:56:33.387 [INFO][1] kdd.go 167: Node and IPAM data is in sync\n2020-01-11 17:10:07.303 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation=\"default\" error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: unexpected EOF\n2020-01-11 17:10:07.303 [ERROR][1] main.go 195: Failed to verify datastore error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: unexpected EOF\n2020-01-11 17:10:47.414 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation=\"default\" error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 17:10:47.414 [ERROR][1] main.go 195: Failed to verify datastore error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 17:10:49.417 [ERROR][1] main.go 226: Failed to reach apiserver error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 17:11:09.418 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation=\"default\" error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 17:11:09.418 [ERROR][1] main.go 195: Failed to verify datastore error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 17:11:11.421 [ERROR][1] main.go 226: Failed to reach apiserver error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\nE0111 17:11:22.529135 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=\"\"\n2020-01-11 17:11:22.529 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation=\"default\" error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=\"\"\n2020-01-11 17:11:22.529 [ERROR][1] main.go 195: Failed to verify datastore error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:32.531587 1 
reflector.go:322] github.com/projectcalico/kube-controllers/pkg/controllers/node/node_controller.go:136: Failed to watch *v1.Node: Get https://100.104.0.1:443/api/v1/nodes?resourceVersion=14344&timeoutSeconds=364&watch=true: net/http: TLS handshake timeout\n2020-01-11 17:11:32.532 [ERROR][1] main.go 226: Failed to reach apiserver error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:43.534396 1 reflector.go:205] github.com/projectcalico/kube-controllers/pkg/controllers/node/node_controller.go:136: Failed to list *v1.Node: Get https://100.104.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: net/http: TLS handshake timeout\n2020-01-11 17:11:52.532 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation=\"default\" error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 17:11:52.532 [ERROR][1] main.go 195: Failed to verify datastore error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\nE0111 17:11:54.537148 1 reflector.go:205] github.com/projectcalico/kube-controllers/pkg/controllers/node/node_controller.go:136: Failed to list *v1.Node: Get https://100.104.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: net/http: TLS handshake timeout\n2020-01-11 17:12:02.535 [ERROR][1] main.go 226: Failed to reach apiserver error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\nE0111 17:12:05.539855 1 reflector.go:205] github.com/projectcalico/kube-controllers/pkg/controllers/node/node_controller.go:136: Failed to list *v1.Node: Get https://100.104.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: net/http: TLS handshake timeout\n2020-01-11 19:01:17.450 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation=\"default\" error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 19:01:17.450 [ERROR][1] main.go 195: Failed to verify datastore error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 19:01:19.453 [ERROR][1] main.go 226: Failed to reach apiserver error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 19:01:39.453 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation=\"default\" error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 19:01:39.453 [ERROR][1] main.go 195: Failed to verify datastore error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\n2020-01-11 19:01:41.456 [ERROR][1] main.go 226: Failed to reach apiserver error=Get https://100.104.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded\nE0111 19:01:53.432949 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=2657, ErrCode=NO_ERROR, debug=\"\"\n==== END logs for container calico-kube-controllers of pod kube-system/calico-kube-controllers-79bcd784b6-c46r9 ====\n==== START logs for container calico-node of pod 
kube-system/calico-node-dl8nk ====\n2020-01-11 15:56:16.537 [INFO][8] startup.go 256: Early log level set to info\n2020-01-11 15:56:16.537 [INFO][8] startup.go 272: Using NODENAME environment for node name\n2020-01-11 15:56:16.537 [INFO][8] startup.go 284: Determined node name: ip-10-250-7-77.ec2.internal\n2020-01-11 15:56:16.538 [INFO][8] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:16.538 [INFO][8] startup.go 316: Checking datastore connection\n2020-01-11 15:56:16.553 [INFO][8] startup.go 340: Datastore connection verified\n2020-01-11 15:56:16.553 [INFO][8] startup.go 95: Datastore is ready\n2020-01-11 15:56:16.560 [INFO][8] customresource.go 100: Error getting resource Key=GlobalFelixConfig(name=CalicoVersion) Name=\"calicoversion\" Resource=\"GlobalFelixConfigs\" error=the server could not find the requested resource (get GlobalFelixConfigs.crd.projectcalico.org calicoversion)\n2020-01-11 15:56:16.566 [INFO][8] startup.go 584: Using autodetected IPv4 address on interface eth0: 10.250.7.77/19\n2020-01-11 15:56:16.566 [INFO][8] startup.go 452: Node IPv4 changed, will check for conflicts\n2020-01-11 15:56:16.572 [INFO][8] startup.go 647: No AS number configured on node resource, using global value\n2020-01-11 15:56:16.573 [INFO][8] startup.go 149: Setting NetworkUnavailable to False\n2020-01-11 15:56:16.594 [INFO][8] startup.go 536: CALICO_IPV4POOL_NAT_OUTGOING is true (defaulted) through environment variable\n2020-01-11 15:56:16.594 [INFO][8] startup.go 796: Ensure default IPv4 pool is created. IPIP mode: Always, VXLAN mode: Never\n2020-01-11 15:56:16.597 [INFO][8] k8s.go 542: Attempt to 'List' using kubernetes backend is not supported.\n2020-01-11 15:56:16.612 [INFO][8] startup.go 806: Created default IPv4 pool (100.64.0.0/11) with NAT outgoing true. IPIP mode: Always, VXLAN mode: Never\n2020-01-11 15:56:16.612 [INFO][8] startup.go 530: FELIX_IPV6SUPPORT is false through environment variable\n2020-01-11 15:56:16.626 [INFO][8] startup.go 181: Using node name: ip-10-250-7-77.ec2.internal\n2020-01-11 15:56:16.670 [INFO][16] k8s.go 219: Using host-local IPAM\nCalico node started successfully\nbird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\nbird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\n2020-01-11 15:56:17.753 [INFO][41] config.go 105: Skipping confd config file.\n2020-01-11 15:56:17.753 [INFO][41] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:56:17.753 [INFO][41] run.go 17: Starting calico-confd\n2020-01-11 15:56:17.759 [INFO][41] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:17.793 [INFO][41] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:56:17.793 [INFO][41] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:56:17.793 [INFO][41] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:56:17.793 [INFO][41] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:17.793 [INFO][41] sync_client.go 169: Starting Typha client\n2020-01-11 15:56:17.793 [INFO][41] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:17.793 [INFO][41] sync_client.go 218: Connecting to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\nbird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\nbird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\n2020-01-11 15:56:18.827 [FATAL][41] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:56:18.881 [INFO][75] config.go 105: Skipping confd config file.\n2020-01-11 15:56:18.881 [INFO][75] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:56:18.881 [INFO][75] run.go 17: Starting calico-confd\n2020-01-11 15:56:18.887 [INFO][75] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:18.942 [INFO][75] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:56:18.942 [INFO][75] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:56:18.942 [INFO][75] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:56:18.942 [INFO][75] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:18.942 [INFO][75] sync_client.go 169: Starting Typha client\n2020-01-11 15:56:18.942 [INFO][75] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:18.942 [INFO][75] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:56:18.979 [ERROR][42] daemon.go 446: Failed to connect to Typha. Retrying... error=dial tcp 100.106.19.47:5473: connect: connection refused\nbird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\nbird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\n2020-01-11 15:56:19.978 [FATAL][75] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:56:20.022 [INFO][84] config.go 105: Skipping confd config file.\n2020-01-11 15:56:20.023 [INFO][84] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:56:20.023 [INFO][84] run.go 17: Starting calico-confd\n2020-01-11 15:56:20.024 [INFO][84] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:20.048 [INFO][84] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:56:20.048 [INFO][84] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:56:20.048 [INFO][84] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:56:20.048 [INFO][84] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:20.048 [INFO][84] sync_client.go 169: Starting Typha client\n2020-01-11 15:56:20.049 [INFO][84] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:20.049 [INFO][84] sync_client.go 218: Connecting to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\nbird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\nbird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\n2020-01-11 15:56:21.066 [FATAL][84] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:56:21.104 [INFO][93] config.go 105: Skipping confd config file.\n2020-01-11 15:56:21.104 [INFO][93] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:56:21.104 [INFO][93] run.go 17: Starting calico-confd\n2020-01-11 15:56:21.106 [INFO][93] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:21.126 [INFO][93] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:56:21.126 [INFO][93] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:56:21.126 [INFO][93] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:56:21.126 [INFO][93] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:21.126 [INFO][93] sync_client.go 169: Starting Typha client\n2020-01-11 15:56:21.126 [INFO][93] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:21.126 [INFO][93] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\nbird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\nbird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\n2020-01-11 15:56:22.154 [FATAL][93] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:56:22.190 [INFO][102] config.go 105: Skipping confd config file.\n2020-01-11 15:56:22.190 [INFO][102] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:56:22.190 [INFO][102] run.go 17: Starting calico-confd\n2020-01-11 15:56:22.191 [INFO][102] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:22.215 [INFO][102] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:56:22.215 [INFO][102] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:56:22.215 [INFO][102] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:56:22.215 [INFO][102] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:22.215 [INFO][102] sync_client.go 169: Starting Typha client\n2020-01-11 15:56:22.215 [INFO][102] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:22.215 [INFO][102] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\nbird: bird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directoryUnable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\n\n2020-01-11 15:56:23.242 [INFO][102] sync_client.go 233: Connected to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:56:23.242 [INFO][102] client.go 186: CALICO_ADVERTISE_CLUSTER_IPS not specified, no cluster ips will be advertised\n2020-01-11 15:56:23.242 [INFO][102] client.go 330: RouteGenerator has indicated it is in sync\n2020-01-11 15:56:23.243 [INFO][102] sync_client.go 268: Started Typha client main loop address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:56:23.244 [INFO][102] sync_client.go 325: Server hello message received address=\"100.106.19.47:5473\" connID=0x0 serverVersion=\"v3.8.2\" type=\"bgp\"\n2020-01-11 15:56:23.245 [INFO][102] sync_client.go 296: Status update from Typha. address=\"100.106.19.47:5473\" connID=0x0 newStatus=in-sync type=\"bgp\"\n2020-01-11 15:56:23.245 [INFO][102] client.go 327: Calico Syncer has indicated it is in sync\n2020-01-11 15:56:23.246 [INFO][102] resource.go 220: Target config /etc/calico/confd/config/bird_aggr.cfg out of sync\n2020-01-11 15:56:23.246 [INFO][102] resource.go 220: Target config /etc/calico/confd/config/bird.cfg out of sync\n2020-01-11 15:56:23.247 [INFO][102] resource.go 220: Target config /etc/calico/confd/config/bird_ipam.cfg out of sync\n2020-01-11 15:56:23.247 [INFO][102] resource.go 220: Target config /etc/calico/confd/config/bird6_ipam.cfg out of sync\n2020-01-11 15:56:23.249 [INFO][102] resource.go 220: Target config /etc/calico/confd/config/bird6.cfg out of sync\n2020-01-11 15:56:23.249 [INFO][102] resource.go 220: Target config /etc/calico/confd/config/bird6_aggr.cfg out of sync\n2020-01-11 15:56:23.250 [INFO][102] resource.go 220: Target config /tmp/tunl-ip out of sync\n2020-01-11 15:56:23.254 [ERROR][102] resource.go 288: Error from checkcmd: \"Hangup\\n\"\n2020-01-11 15:56:23.254 [INFO][102] resource.go 226: Check failed, but file does not yet exist - create anyway\n2020-01-11 15:56:23.254 [INFO][102] resource.go 260: Target config /etc/calico/confd/config/bird_aggr.cfg has been updated\n2020-01-11 15:56:23.266 [INFO][102] resource.go 260: Target config /etc/calico/confd/config/bird6_ipam.cfg has been updated\n2020-01-11 15:56:23.266 [INFO][102] resource.go 260: Target config /etc/calico/confd/config/bird_ipam.cfg has been updated\n2020-01-11 15:56:23.266 [INFO][102] resource.go 260: Target config /etc/calico/confd/config/bird.cfg has been updated\n2020-01-11 15:56:23.277 [INFO][102] resource.go 260: Target config /etc/calico/confd/config/bird6.cfg has been updated\n2020-01-11 15:56:23.279 [INFO][102] resource.go 260: Target config /etc/calico/confd/config/bird6_aggr.cfg has been updated\n2020-01-11 15:56:23.346 [INFO][102] resource.go 260: Target config /tmp/tunl-ip has been updated\nbird: device1: Initializing\nbird: direct1: Initializing\nbird: device1: Starting\nbird: device1: Initializing\nbird: direct1: Initializing\nbird: Mesh_10_250_27_25: Initializing\nbird: device1: Starting\nbird: device1: Connected to table master\nbird: device1: State changed to feed\nbird: device1: Connected to table master\nbird: device1: State changed to feed\nbird: direct1: Starting\nbird: direct1: Connected to table master\nbird: direct1: State changed to feed\nbird: Graceful restart started\nbird: Graceful restart done\nbird: Started\nbird: direct1: Starting\nbird: device1: State changed to upbird: \ndirect1: Connected to table master\nbird: bird: direct1: State changed to feeddirect1: State changed to up\n\nbird: Mesh_10_250_27_25: Starting\nbird: Mesh_10_250_27_25: State changed to start\nbird: Graceful restart started\nbird: Started\nbird: device1: State 
changed to up\nbird: direct1: State changed to up\nbird: Mesh_10_250_27_25: Connected to table master\nbird: Mesh_10_250_27_25: State changed to feed\nbird: Mesh_10_250_27_25: State changed to up\nbird: Graceful restart done\n2020-01-11 15:59:49.190 [ERROR][102] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"bgp\"\n2020-01-11 15:59:49.191 [INFO][102] sync_client.go 155: Typha client Context asked us to exit connID=0x0 type=\"bgp\"\n2020-01-11 15:59:49.190 [ERROR][42] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"\"\n2020-01-11 15:59:49.191 [FATAL][102] client.go 169: Connection to Typha failed\n2020-01-11 15:59:49.248 [INFO][490] config.go 105: Skipping confd config file.\n2020-01-11 15:59:49.248 [INFO][490] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:49.248 [INFO][490] run.go 17: Starting calico-confd\n2020-01-11 15:59:49.250 [INFO][490] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:49.274 [INFO][490] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:49.275 [INFO][490] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:49.275 [INFO][490] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:49.275 [INFO][490] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:49.275 [INFO][490] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:49.275 [INFO][490] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:49.275 [INFO][490] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:50.282 [FATAL][490] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:50.304 [INFO][497] config.go 105: Skipping confd config file.\n2020-01-11 15:59:50.304 [INFO][497] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:50.304 [INFO][497] run.go 17: Starting calico-confd\n2020-01-11 15:59:50.305 [INFO][497] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:50.323 [INFO][497] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:50.323 [INFO][497] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:50.323 [INFO][497] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:50.323 [INFO][497] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:50.323 [INFO][497] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:50.323 [INFO][497] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:50.323 [INFO][497] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:51.191 [FATAL][42] daemon.go 641: Exiting. 
reason=\"Connection to Typha failed\"\n2020-01-11 15:59:51.370 [FATAL][497] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:51.391 [INFO][525] config.go 105: Skipping confd config file.\n2020-01-11 15:59:51.391 [INFO][525] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:51.391 [INFO][525] run.go 17: Starting calico-confd\n2020-01-11 15:59:51.392 [INFO][525] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:51.411 [INFO][525] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:51.411 [INFO][525] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:51.411 [INFO][525] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:51.411 [INFO][525] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:51.411 [INFO][525] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:51.411 [INFO][525] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:51.411 [INFO][525] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:52.330 [ERROR][504] daemon.go 446: Failed to connect to Typha. Retrying... error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:52.458 [FATAL][525] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:52.479 [INFO][532] config.go 105: Skipping confd config file.\n2020-01-11 15:59:52.479 [INFO][532] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:52.479 [INFO][532] run.go 17: Starting calico-confd\n2020-01-11 15:59:52.480 [INFO][532] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:52.500 [INFO][532] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:52.500 [INFO][532] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:52.500 [INFO][532] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:52.500 [INFO][532] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:52.500 [INFO][532] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:52.500 [INFO][532] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:52.500 [INFO][532] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:53.546 [FATAL][532] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:53.568 [INFO][551] config.go 105: Skipping confd config file.\n2020-01-11 15:59:53.568 [INFO][551] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:53.568 [INFO][551] run.go 17: Starting calico-confd\n2020-01-11 15:59:53.569 [INFO][551] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:53.585 [INFO][551] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:53.585 [INFO][551] client.go 231: Found Typha service port. 
port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:53.585 [INFO][551] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:53.585 [INFO][551] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:53.585 [INFO][551] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:53.585 [INFO][551] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:53.585 [INFO][551] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:54.634 [FATAL][551] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:54.655 [INFO][558] config.go 105: Skipping confd config file.\n2020-01-11 15:59:54.655 [INFO][558] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:54.655 [INFO][558] run.go 17: Starting calico-confd\n2020-01-11 15:59:54.656 [INFO][558] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:54.676 [INFO][558] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:54.676 [INFO][558] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:54.676 [INFO][558] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:54.676 [INFO][558] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:54.676 [INFO][558] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:54.676 [INFO][558] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:54.676 [INFO][558] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:55.722 [FATAL][558] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:55.748 [INFO][565] config.go 105: Skipping confd config file.\n2020-01-11 15:59:55.748 [INFO][565] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:55.748 [INFO][565] run.go 17: Starting calico-confd\n2020-01-11 15:59:55.749 [INFO][565] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:55.766 [INFO][565] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:55.767 [INFO][565] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:55.767 [INFO][565] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:55.767 [INFO][565] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:55.767 [INFO][565] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:55.767 [INFO][565] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:55.767 [INFO][565] sync_client.go 218: Connecting to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:56.810 [FATAL][565] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:56.831 [INFO][572] config.go 105: Skipping confd config file.\n2020-01-11 15:59:56.831 [INFO][572] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:56.831 [INFO][572] run.go 17: Starting calico-confd\n2020-01-11 15:59:56.832 [INFO][572] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:56.849 [INFO][572] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:56.850 [INFO][572] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:56.850 [INFO][572] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:56.850 [INFO][572] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:56.850 [INFO][572] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:56.850 [INFO][572] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:56.850 [INFO][572] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:57.898 [FATAL][572] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:57.920 [INFO][579] config.go 105: Skipping confd config file.\n2020-01-11 15:59:57.920 [INFO][579] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:57.920 [INFO][579] run.go 17: Starting calico-confd\n2020-01-11 15:59:57.921 [INFO][579] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:57.940 [INFO][579] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:57.940 [INFO][579] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:57.940 [INFO][579] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:57.940 [INFO][579] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:57.940 [INFO][579] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:57.940 [INFO][579] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:57.940 [INFO][579] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:58.986 [INFO][579] sync_client.go 233: Connected to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:58.986 [INFO][579] client.go 186: CALICO_ADVERTISE_CLUSTER_IPS not specified, no cluster ips will be advertised\n2020-01-11 15:59:58.987 [INFO][579] client.go 330: RouteGenerator has indicated it is in sync\n2020-01-11 15:59:58.987 [INFO][579] sync_client.go 268: Started Typha client main loop address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:58.988 [INFO][579] sync_client.go 325: Server hello message received address=\"100.106.19.47:5473\" connID=0x0 serverVersion=\"v3.8.2\" type=\"bgp\"\n2020-01-11 15:59:58.991 [INFO][579] sync_client.go 296: Status update from Typha. 
address=\"100.106.19.47:5473\" connID=0x0 newStatus=in-sync type=\"bgp\"\n2020-01-11 15:59:58.991 [INFO][579] client.go 327: Calico Syncer has indicated it is in sync\n2020-01-11 16:00:57.666 [ERROR][579] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"bgp\"\n2020-01-11 16:00:57.666 [INFO][579] sync_client.go 155: Typha client Context asked us to exit connID=0x0 type=\"bgp\"\n2020-01-11 16:00:57.666 [FATAL][579] client.go 169: Connection to Typha failed\n2020-01-11 16:00:57.667 [ERROR][504] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"\"\n2020-01-11 16:00:57.692 [INFO][708] config.go 105: Skipping confd config file.\n2020-01-11 16:00:57.692 [INFO][708] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:00:57.692 [INFO][708] run.go 17: Starting calico-confd\n2020-01-11 16:00:57.693 [INFO][708] k8s.go 219: Using host-local IPAM\n2020-01-11 16:00:57.711 [INFO][708] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:00:57.711 [INFO][708] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:00:57.711 [INFO][708] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:00:57.711 [INFO][708] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:57.711 [INFO][708] sync_client.go 169: Starting Typha client\n2020-01-11 16:00:57.711 [INFO][708] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:57.711 [INFO][708] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:00:58.762 [FATAL][708] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:00:58.784 [INFO][715] config.go 105: Skipping confd config file.\n2020-01-11 16:00:58.784 [INFO][715] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:00:58.784 [INFO][715] run.go 17: Starting calico-confd\n2020-01-11 16:00:58.785 [INFO][715] k8s.go 219: Using host-local IPAM\n2020-01-11 16:00:58.802 [INFO][715] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:00:58.802 [INFO][715] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:00:58.802 [INFO][715] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:00:58.803 [INFO][715] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:58.803 [INFO][715] sync_client.go 169: Starting Typha client\n2020-01-11 16:00:58.803 [INFO][715] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:58.803 [INFO][715] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:00:59.667 [FATAL][504] daemon.go 641: Exiting. 
reason=\"Connection to Typha failed\"\n2020-01-11 16:00:59.850 [FATAL][715] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:00:59.871 [INFO][742] config.go 105: Skipping confd config file.\n2020-01-11 16:00:59.872 [INFO][742] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:00:59.872 [INFO][742] run.go 17: Starting calico-confd\n2020-01-11 16:00:59.872 [INFO][742] k8s.go 219: Using host-local IPAM\n2020-01-11 16:00:59.891 [INFO][742] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:00:59.891 [INFO][742] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:00:59.891 [INFO][742] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:00:59.891 [INFO][742] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:59.891 [INFO][742] sync_client.go 169: Starting Typha client\n2020-01-11 16:00:59.891 [INFO][742] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:59.891 [INFO][742] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:00.874 [ERROR][722] daemon.go 446: Failed to connect to Typha. Retrying... error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:00.938 [FATAL][742] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:00.960 [INFO][749] config.go 105: Skipping confd config file.\n2020-01-11 16:01:00.960 [INFO][749] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:00.960 [INFO][749] run.go 17: Starting calico-confd\n2020-01-11 16:01:00.961 [INFO][749] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:00.980 [INFO][749] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:00.980 [INFO][749] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:00.980 [INFO][749] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:00.980 [INFO][749] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:00.980 [INFO][749] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:00.980 [INFO][749] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:00.980 [INFO][749] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:02.026 [FATAL][749] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:02.048 [INFO][756] config.go 105: Skipping confd config file.\n2020-01-11 16:01:02.048 [INFO][756] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:02.048 [INFO][756] run.go 17: Starting calico-confd\n2020-01-11 16:01:02.049 [INFO][756] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:02.193 [INFO][756] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:02.193 [INFO][756] client.go 231: Found Typha service port. 
port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:02.193 [INFO][756] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:02.193 [INFO][756] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:02.193 [INFO][756] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:02.193 [INFO][756] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:02.193 [INFO][756] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:03.242 [FATAL][756] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:03.264 [INFO][763] config.go 105: Skipping confd config file.\n2020-01-11 16:01:03.264 [INFO][763] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:03.264 [INFO][763] run.go 17: Starting calico-confd\n2020-01-11 16:01:03.265 [INFO][763] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:03.283 [INFO][763] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:03.283 [INFO][763] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:03.284 [INFO][763] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:03.284 [INFO][763] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:03.284 [INFO][763] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:03.284 [INFO][763] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:03.284 [INFO][763] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:04.330 [FATAL][763] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:04.363 [INFO][782] config.go 105: Skipping confd config file.\n2020-01-11 16:01:04.363 [INFO][782] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:04.363 [INFO][782] run.go 17: Starting calico-confd\n2020-01-11 16:01:04.365 [INFO][782] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:04.384 [INFO][782] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:04.384 [INFO][782] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:04.384 [INFO][782] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:04.384 [INFO][782] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:04.384 [INFO][782] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:04.384 [INFO][782] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:04.384 [INFO][782] sync_client.go 218: Connecting to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:05.418 [FATAL][782] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:05.440 [INFO][789] config.go 105: Skipping confd config file.\n2020-01-11 16:01:05.440 [INFO][789] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:05.440 [INFO][789] run.go 17: Starting calico-confd\n2020-01-11 16:01:05.441 [INFO][789] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:05.463 [INFO][789] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:05.463 [INFO][789] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:05.463 [INFO][789] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:05.463 [INFO][789] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:05.463 [INFO][789] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:05.464 [INFO][789] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:05.464 [INFO][789] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:06.506 [FATAL][789] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:06.527 [INFO][796] config.go 105: Skipping confd config file.\n2020-01-11 16:01:06.527 [INFO][796] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:06.528 [INFO][796] run.go 17: Starting calico-confd\n2020-01-11 16:01:06.528 [INFO][796] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:06.549 [INFO][796] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:06.549 [INFO][796] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:06.549 [INFO][796] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:06.549 [INFO][796] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:06.549 [INFO][796] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:06.549 [INFO][796] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:06.549 [INFO][796] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:07.594 [FATAL][796] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:07.615 [INFO][803] config.go 105: Skipping confd config file.\n2020-01-11 16:01:07.616 [INFO][803] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:07.616 [INFO][803] run.go 17: Starting calico-confd\n2020-01-11 16:01:07.616 [INFO][803] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:07.636 [INFO][803] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:07.636 [INFO][803] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:07.637 [INFO][803] client.go 145: Connecting to Typha. 
addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:07.637 [INFO][803] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:07.637 [INFO][803] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:07.637 [INFO][803] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:07.637 [INFO][803] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:08.682 [FATAL][803] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:08.704 [INFO][810] config.go 105: Skipping confd config file.\n2020-01-11 16:01:08.704 [INFO][810] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:08.704 [INFO][810] run.go 17: Starting calico-confd\n2020-01-11 16:01:08.705 [INFO][810] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:08.726 [INFO][810] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:08.726 [INFO][810] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:08.726 [INFO][810] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:08.726 [INFO][810] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:08.726 [INFO][810] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:08.726 [INFO][810] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:08.726 [INFO][810] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:09.770 [FATAL][810] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:09.795 [INFO][817] config.go 105: Skipping confd config file.\n2020-01-11 16:01:09.795 [INFO][817] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:09.795 [INFO][817] run.go 17: Starting calico-confd\n2020-01-11 16:01:09.796 [INFO][817] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:09.816 [INFO][817] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:09.816 [INFO][817] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:09.816 [INFO][817] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:09.816 [INFO][817] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:09.816 [INFO][817] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:09.816 [INFO][817] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:09.816 [INFO][817] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:10.858 [FATAL][817] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:10.879 [INFO][823] config.go 105: Skipping confd config file.\n2020-01-11 16:01:10.879 [INFO][823] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:10.879 [INFO][823] run.go 17: Starting calico-confd\n2020-01-11 16:01:10.880 [INFO][823] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:10.898 [INFO][823] client.go 224: Found Typha ClusterIP. 
clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:10.898 [INFO][823] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:10.898 [INFO][823] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:10.898 [INFO][823] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:10.898 [INFO][823] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:10.898 [INFO][823] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:10.898 [INFO][823] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:11.946 [FATAL][823] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:11.968 [INFO][829] config.go 105: Skipping confd config file.\n2020-01-11 16:01:11.968 [INFO][829] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:11.968 [INFO][829] run.go 17: Starting calico-confd\n2020-01-11 16:01:11.969 [INFO][829] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:11.990 [INFO][829] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:11.990 [INFO][829] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:11.991 [INFO][829] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:11.991 [INFO][829] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:11.991 [INFO][829] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:11.991 [INFO][829] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:11.991 [INFO][829] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:13.034 [FATAL][829] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:13.065 [INFO][836] config.go 105: Skipping confd config file.\n2020-01-11 16:01:13.066 [INFO][836] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:13.066 [INFO][836] run.go 17: Starting calico-confd\n2020-01-11 16:01:13.067 [INFO][836] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:13.087 [INFO][836] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:13.087 [INFO][836] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:13.087 [INFO][836] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:13.087 [INFO][836] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:13.087 [INFO][836] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:13.087 [INFO][836] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:13.087 [INFO][836] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:13.088 [INFO][836] sync_client.go 233: Connected to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:13.088 [INFO][836] client.go 186: CALICO_ADVERTISE_CLUSTER_IPS not specified, no cluster ips will be advertised\n2020-01-11 16:01:13.088 [INFO][836] client.go 330: RouteGenerator has indicated it is in sync\n2020-01-11 16:01:13.088 [INFO][836] sync_client.go 268: Started Typha client main loop address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:13.089 [INFO][836] sync_client.go 325: Server hello message received address=\"100.106.19.47:5473\" connID=0x0 serverVersion=\"v3.8.2\" type=\"bgp\"\n2020-01-11 16:01:13.090 [INFO][836] sync_client.go 296: Status update from Typha. address=\"100.106.19.47:5473\" connID=0x0 newStatus=in-sync type=\"bgp\"\n2020-01-11 16:01:13.090 [INFO][836] client.go 327: Calico Syncer has indicated it is in sync\n2020-01-11 16:21:07.379 [ERROR][722] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"\"\n2020-01-11 16:21:07.379 [ERROR][836] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"bgp\"\n2020-01-11 16:21:07.379 [INFO][836] sync_client.go 155: Typha client Context asked us to exit connID=0x0 type=\"bgp\"\n2020-01-11 16:21:07.379 [FATAL][836] client.go 169: Connection to Typha failed\n2020-01-11 16:21:07.412 [INFO][2513] config.go 105: Skipping confd config file.\n2020-01-11 16:21:07.412 [INFO][2513] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:07.413 [INFO][2513] run.go 17: Starting calico-confd\n2020-01-11 16:21:07.413 [INFO][2513] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:07.432 [INFO][2513] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:07.432 [INFO][2513] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:07.432 [INFO][2513] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:07.432 [INFO][2513] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:07.432 [INFO][2513] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:07.432 [INFO][2513] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:07.432 [INFO][2513] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:08.490 [FATAL][2513] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:08.512 [INFO][2521] config.go 105: Skipping confd config file.\n2020-01-11 16:21:08.512 [INFO][2521] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:08.512 [INFO][2521] run.go 17: Starting calico-confd\n2020-01-11 16:21:08.513 [INFO][2521] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:08.532 [INFO][2521] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:08.532 [INFO][2521] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:08.532 [INFO][2521] client.go 145: Connecting to Typha. 
addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:08.532 [INFO][2521] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:08.532 [INFO][2521] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:08.533 [INFO][2521] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:08.533 [INFO][2521] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:09.379 [FATAL][722] daemon.go 641: Exiting. reason=\"Connection to Typha failed\"\n2020-01-11 16:21:09.578 [FATAL][2521] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:09.600 [INFO][2549] config.go 105: Skipping confd config file.\n2020-01-11 16:21:09.600 [INFO][2549] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:09.600 [INFO][2549] run.go 17: Starting calico-confd\n2020-01-11 16:21:09.601 [INFO][2549] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:09.617 [INFO][2549] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:09.617 [INFO][2549] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:09.617 [INFO][2549] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:09.617 [INFO][2549] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:09.617 [INFO][2549] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:09.618 [INFO][2549] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:09.618 [INFO][2549] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:10.474 [ERROR][2528] daemon.go 446: Failed to connect to Typha. Retrying... error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:10.666 [FATAL][2549] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:10.687 [INFO][2556] config.go 105: Skipping confd config file.\n2020-01-11 16:21:10.687 [INFO][2556] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:10.687 [INFO][2556] run.go 17: Starting calico-confd\n2020-01-11 16:21:10.688 [INFO][2556] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:10.708 [INFO][2556] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:10.708 [INFO][2556] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:10.708 [INFO][2556] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:10.708 [INFO][2556] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:10.708 [INFO][2556] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:10.708 [INFO][2556] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:10.708 [INFO][2556] sync_client.go 218: Connecting to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:11.754 [FATAL][2556] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:11.776 [INFO][2563] config.go 105: Skipping confd config file.\n2020-01-11 16:21:11.776 [INFO][2563] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:11.776 [INFO][2563] run.go 17: Starting calico-confd\n2020-01-11 16:21:11.777 [INFO][2563] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:11.794 [INFO][2563] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:11.794 [INFO][2563] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:11.794 [INFO][2563] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:11.794 [INFO][2563] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:11.794 [INFO][2563] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:11.794 [INFO][2563] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:11.794 [INFO][2563] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:12.842 [FATAL][2563] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:12.864 [INFO][2570] config.go 105: Skipping confd config file.\n2020-01-11 16:21:12.864 [INFO][2570] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:12.864 [INFO][2570] run.go 17: Starting calico-confd\n2020-01-11 16:21:12.865 [INFO][2570] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:12.883 [INFO][2570] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:12.883 [INFO][2570] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:12.883 [INFO][2570] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:12.883 [INFO][2570] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:12.883 [INFO][2570] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:12.883 [INFO][2570] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:12.883 [INFO][2570] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:13.930 [FATAL][2570] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:13.969 [INFO][2589] config.go 105: Skipping confd config file.\n2020-01-11 16:21:13.969 [INFO][2589] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:13.969 [INFO][2589] run.go 17: Starting calico-confd\n2020-01-11 16:21:13.970 [INFO][2589] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:13.990 [INFO][2589] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:13.990 [INFO][2589] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:13.990 [INFO][2589] client.go 145: Connecting to Typha. 
addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:13.990 [INFO][2589] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:13.990 [INFO][2589] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:13.990 [INFO][2589] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:13.991 [INFO][2589] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:15.018 [FATAL][2589] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:15.040 [INFO][2596] config.go 105: Skipping confd config file.\n2020-01-11 16:21:15.040 [INFO][2596] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:15.040 [INFO][2596] run.go 17: Starting calico-confd\n2020-01-11 16:21:15.041 [INFO][2596] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:15.058 [INFO][2596] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:15.058 [INFO][2596] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:15.059 [INFO][2596] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:15.059 [INFO][2596] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:15.059 [INFO][2596] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:15.059 [INFO][2596] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:15.059 [INFO][2596] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:16.107 [INFO][2596] sync_client.go 233: Connected to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:16.107 [INFO][2596] client.go 186: CALICO_ADVERTISE_CLUSTER_IPS not specified, no cluster ips will be advertised\n2020-01-11 16:21:16.107 [INFO][2596] client.go 330: RouteGenerator has indicated it is in sync\n2020-01-11 16:21:16.107 [INFO][2596] sync_client.go 268: Started Typha client main loop address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:16.109 [INFO][2596] sync_client.go 325: Server hello message received address=\"100.106.19.47:5473\" connID=0x0 serverVersion=\"v3.8.2\" type=\"bgp\"\n2020-01-11 16:21:16.111 [INFO][2596] sync_client.go 296: Status update from Typha. 
address=\"100.106.19.47:5473\" connID=0x0 newStatus=in-sync type=\"bgp\"\n2020-01-11 16:21:16.111 [INFO][2596] client.go 327: Calico Syncer has indicated it is in sync\nbird: KIF: Received address message for unknown interface 31\nbird: KIF: Received address message for unknown interface 32\nbird: KIF: Received address message for unknown interface 42\nbird: KIF: Received address message for unknown interface 48\nbird: KIF: Received address message for unknown interface 49\nbird: KIF: Received address message for unknown interface 51\nbird: KIF: Received address message for unknown interface 52\nbird: KIF: Received address message for unknown interface 55\nbird: KIF: Received address message for unknown interface 60\nbird: KIF: Received address message for unknown interface 67\nbird: KIF: Received address message for unknown interface 69\nbird: KIF: Received address message for unknown interface 53\nbird: KIF: Received address message for unknown interface 116\nbird: KIF: Received address message for unknown interface 112\nbird: KIF: Received address message for unknown interface 132\nbird: KIF: Received address message for unknown interface 136\nbird: KIF: Received address message for unknown interface 143\nbird: KIF: Received address message for unknown interface 84\nbird: KIF: Received address message for unknown interface 103\nbird: KIF: Received address message for unknown interface 96\nbird: KIF: Received address message for unknown interface 114\nbird: KIF: Received address message for unknown interface 170\nbird: KIF: Received address message for unknown interface 187\nbird: KIF: Received address message for unknown interface 188\nbird: KIF: Received address message for unknown interface 189\nbird: KIF: Received address message for unknown interface 193\nbird: KIF: Received address message for unknown interface 194\nbird: KIF: Received address message for unknown interface 224\nbird: KIF: Received address message for unknown interface 235\nbird: KIF: Received address message for unknown interface 236\nbird: KIF: Received address message for unknown interface 230\nbird: KIF: Received address message for unknown interface 240\nbird: KIF: Received address message for unknown interface 244\nbird: KIF: Received address message for unknown interface 248\nbird: KIF: Received address message for unknown interface 255\nbird: KIF: Received address message for unknown interface 257\nbird: KIF: Received address message for unknown interface 269\nbird: KIF: Received address message for unknown interface 270\nbird: KIF: Received address message for unknown interface 263\nbird: KIF: Received address message for unknown interface 276\nbird: KIF: Received address message for unknown interface 277\nbird: KIF: Received address message for unknown interface 285\nbird: KIF: Received address message for unknown interface 288\nbird: KIF: Received address message for unknown interface 296\nbird: KIF: Received address message for unknown interface 312\nbird: KIF: Received address message for unknown interface 326\nbird: KIF: Received address message for unknown interface 331\nbird: KIF: Received address message for unknown interface 332\nbird: KIF: Received address message for unknown interface 344\nbird: KIF: Received address message for unknown interface 351\nbird: KIF: Received address message for unknown interface 360\nbird: KIF: Received address message for unknown interface 363\nbird: KIF: Received address message for unknown interface 366\nbird: KIF: Received address message for unknown interface 
376\nbird: KIF: Received address message for unknown interface 372\nbird: KIF: Received address message for unknown interface 377\nbird: KIF: Received address message for unknown interface 381\nbird: KIF: Received address message for unknown interface 389\nbird: KIF: Received address message for unknown interface 397\nbird: KIF: Received address message for unknown interface 404\nbird: KIF: Received address message for unknown interface 407\nbird: KIF: Received address message for unknown interface 410\nbird: KIF: Received address message for unknown interface 415\nbird: KIF: Received address message for unknown interface 419\nbird: KIF: Received address message for unknown interface 430\nbird: KIF: Received address message for unknown interface 440\nbird: KIF: Received address message for unknown interface 441\nbird: KIF: Received address message for unknown interface 445\nbird: KIF: Received address message for unknown interface 451\nbird: KIF: Received address message for unknown interface 450\nbird: KIF: Received address message for unknown interface 463\nbird: KIF: Received address message for unknown interface 470\nbird: KIF: Received address message for unknown interface 473\nbird: KIF: Received address message for unknown interface 475\nbird: KIF: Received address message for unknown interface 478\nbird: KIF: Received address message for unknown interface 476\n==== END logs for container calico-node of pod kube-system/calico-node-dl8nk ====\n==== START logs for container calico-node of pod kube-system/calico-node-m8r2d ====\n2020-01-11 15:56:18.378 [INFO][9] startup.go 256: Early log level set to info\n2020-01-11 15:56:18.379 [INFO][9] startup.go 272: Using NODENAME environment for node name\n2020-01-11 15:56:18.379 [INFO][9] startup.go 284: Determined node name: ip-10-250-27-25.ec2.internal\n2020-01-11 15:56:18.380 [INFO][9] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:18.380 [INFO][9] startup.go 316: Checking datastore connection\n2020-01-11 15:56:18.394 [INFO][9] startup.go 340: Datastore connection verified\n2020-01-11 15:56:18.394 [INFO][9] startup.go 95: Datastore is ready\n2020-01-11 15:56:18.406 [INFO][9] startup.go 584: Using autodetected IPv4 address on interface eth0: 10.250.27.25/19\n2020-01-11 15:56:18.406 [INFO][9] startup.go 452: Node IPv4 changed, will check for conflicts\n2020-01-11 15:56:18.411 [INFO][9] startup.go 647: No AS number configured on node resource, using global value\n2020-01-11 15:56:18.411 [INFO][9] startup.go 149: Setting NetworkUnavailable to False\n2020-01-11 15:56:18.434 [INFO][9] startup.go 530: FELIX_IPV6SUPPORT is false through environment variable\n2020-01-11 15:56:18.443 [INFO][9] startup.go 181: Using node name: ip-10-250-27-25.ec2.internal\n2020-01-11 15:56:18.484 [INFO][17] k8s.go 219: Using host-local IPAM\nCalico node started successfully\nbird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\nbird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\n2020-01-11 15:56:19.568 [INFO][41] config.go 105: Skipping confd config file.\n2020-01-11 15:56:19.568 [INFO][41] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:56:19.569 [INFO][41] run.go 17: Starting calico-confd\n2020-01-11 15:56:19.570 [INFO][41] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:19.597 [INFO][41] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:56:19.597 [INFO][41] client.go 231: Found Typha service port. 
port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:56:19.597 [INFO][41] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:56:19.597 [INFO][41] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:19.597 [INFO][41] sync_client.go 169: Starting Typha client\n2020-01-11 15:56:19.597 [INFO][41] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:19.598 [INFO][41] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\nbird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\nbird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\n2020-01-11 15:56:20.655 [FATAL][41] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:56:20.682 [INFO][76] config.go 105: Skipping confd config file.\n2020-01-11 15:56:20.682 [INFO][76] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:56:20.682 [INFO][76] run.go 17: Starting calico-confd\n2020-01-11 15:56:20.683 [INFO][76] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:20.701 [INFO][76] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:56:20.702 [INFO][76] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:56:20.702 [INFO][76] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:56:20.702 [INFO][76] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:20.702 [INFO][76] sync_client.go 169: Starting Typha client\n2020-01-11 15:56:20.702 [INFO][76] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:20.702 [INFO][76] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:56:20.783 [ERROR][43] daemon.go 446: Failed to connect to Typha. Retrying... error=dial tcp 100.106.19.47:5473: connect: connection refused\nbird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\nbird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\n2020-01-11 15:56:21.743 [FATAL][76] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:56:21.778 [INFO][85] config.go 105: Skipping confd config file.\n2020-01-11 15:56:21.778 [INFO][85] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:56:21.778 [INFO][85] run.go 17: Starting calico-confd\n2020-01-11 15:56:21.779 [INFO][85] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:21.805 [INFO][85] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:56:21.805 [INFO][85] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:56:21.805 [INFO][85] client.go 145: Connecting to Typha. 
addr=\"100.106.19.47:5473\"\n2020-01-11 15:56:21.805 [INFO][85] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:21.805 [INFO][85] sync_client.go 169: Starting Typha client\n2020-01-11 15:56:21.805 [INFO][85] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:21.805 [INFO][85] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\nbird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\nbird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\n2020-01-11 15:56:22.831 [FATAL][85] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:56:22.862 [INFO][106] config.go 105: Skipping confd config file.\n2020-01-11 15:56:22.862 [INFO][106] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:56:22.862 [INFO][106] run.go 17: Starting calico-confd\n2020-01-11 15:56:22.863 [INFO][106] k8s.go 219: Using host-local IPAM\n2020-01-11 15:56:22.880 [INFO][106] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:56:22.880 [INFO][106] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:56:22.881 [INFO][106] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:56:22.881 [INFO][106] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:22.881 [INFO][106] sync_client.go 169: Starting Typha client\n2020-01-11 15:56:22.881 [INFO][106] sync_client.go 70: requiringTLS=false\n2020-01-11 15:56:22.881 [INFO][106] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\nbird: bird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory\nUnable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory\n2020-01-11 15:56:23.919 [INFO][106] sync_client.go 233: Connected to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:56:23.919 [INFO][106] client.go 186: CALICO_ADVERTISE_CLUSTER_IPS not specified, no cluster ips will be advertised\n2020-01-11 15:56:23.919 [INFO][106] client.go 330: RouteGenerator has indicated it is in sync\n2020-01-11 15:56:23.919 [INFO][106] sync_client.go 268: Started Typha client main loop address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:56:23.923 [INFO][106] sync_client.go 325: Server hello message received address=\"100.106.19.47:5473\" connID=0x0 serverVersion=\"v3.8.2\" type=\"bgp\"\n2020-01-11 15:56:23.924 [INFO][106] sync_client.go 296: Status update from Typha. 
address=\"100.106.19.47:5473\" connID=0x0 newStatus=in-sync type=\"bgp\"\n2020-01-11 15:56:23.924 [INFO][106] client.go 327: Calico Syncer has indicated it is in sync\n2020-01-11 15:56:23.927 [INFO][106] resource.go 220: Target config /tmp/tunl-ip out of sync\n2020-01-11 15:56:23.929 [INFO][106] resource.go 220: Target config /etc/calico/confd/config/bird.cfg out of sync\n2020-01-11 15:56:23.930 [INFO][106] resource.go 220: Target config /etc/calico/confd/config/bird6_aggr.cfg out of sync\n2020-01-11 15:56:23.930 [INFO][106] resource.go 220: Target config /etc/calico/confd/config/bird6.cfg out of sync\n2020-01-11 15:56:23.933 [INFO][106] resource.go 220: Target config /etc/calico/confd/config/bird6_ipam.cfg out of sync\n2020-01-11 15:56:23.936 [INFO][106] resource.go 220: Target config /etc/calico/confd/config/bird_aggr.cfg out of sync\n2020-01-11 15:56:23.944 [ERROR][106] resource.go 288: Error from checkcmd: \"bird: /etc/calico/confd/config/.bird.cfg074792863, line 2: Unable to open included file /etc/calico/confd/config/bird_aggr.cfg: No such file or directory\\n\"\n2020-01-11 15:56:23.944 [INFO][106] resource.go 226: Check failed, but file does not yet exist - create anyway\n2020-01-11 15:56:23.944 [INFO][106] resource.go 220: Target config /etc/calico/confd/config/bird_ipam.cfg out of sync\n2020-01-11 15:56:23.954 [INFO][106] resource.go 260: Target config /etc/calico/confd/config/bird_aggr.cfg has been updated\n2020-01-11 15:56:23.954 [INFO][106] resource.go 260: Target config /etc/calico/confd/config/bird6_ipam.cfg has been updated\n2020-01-11 15:56:23.954 [INFO][106] resource.go 260: Target config /etc/calico/confd/config/bird6_aggr.cfg has been updated\n2020-01-11 15:56:23.967 [INFO][106] resource.go 260: Target config /etc/calico/confd/config/bird.cfg has been updated\n2020-01-11 15:56:23.978 [INFO][106] resource.go 260: Target config /etc/calico/confd/config/bird6.cfg has been updated\n2020-01-11 15:56:23.980 [INFO][106] resource.go 260: Target config /etc/calico/confd/config/bird_ipam.cfg has been updated\n2020-01-11 15:56:24.063 [INFO][106] resource.go 260: Target config /tmp/tunl-ip has been updated\nbird: device1: Initializing\nbird: direct1: Initializing\nbird: device1: Starting\nbird: device1: Connected to table master\nbird: device1: State changed to feed\nbird: direct1: Starting\nbird: direct1: Connected to table master\nbird: direct1: State changed to feed\nbird: Graceful restart started\nbird: Graceful restart done\nbird: Started\nbird: device1: State changed to up\nbird: direct1: State changed to up\nbird: device1: Initializing\nbird: direct1: Initializing\nbird: Mesh_10_250_7_77: Initializing\nbird: device1: Starting\nbird: device1: Connected to table master\nbird: device1: State changed to feed\nbird: direct1: Starting\nbird: direct1: Connected to table master\nbird: direct1: State changed to feed\nbird: Mesh_10_250_7_77: Starting\nbird: Mesh_10_250_7_77: State changed to start\nbird: Graceful restart started\nbird: Started\nbird: device1: State changed to up\nbird: direct1: State changed to up\nbird: Mesh_10_250_7_77: Connected to table master\nbird: Mesh_10_250_7_77: State changed to feed\nbird: Mesh_10_250_7_77: State changed to up\nbird: Graceful restart done\n2020-01-11 15:59:49.190 [ERROR][106] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"bgp\"\n2020-01-11 15:59:49.190 [INFO][106] sync_client.go 155: Typha client Context asked us to exit connID=0x0 type=\"bgp\"\n2020-01-11 15:59:49.190 [FATAL][106] 
client.go 169: Connection to Typha failed\n2020-01-11 15:59:49.191 [ERROR][43] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"\"\n2020-01-11 15:59:49.235 [INFO][485] config.go 105: Skipping confd config file.\n2020-01-11 15:59:49.235 [INFO][485] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:49.235 [INFO][485] run.go 17: Starting calico-confd\n2020-01-11 15:59:49.236 [INFO][485] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:49.257 [INFO][485] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:49.257 [INFO][485] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:49.257 [INFO][485] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:49.257 [INFO][485] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:49.257 [INFO][485] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:49.257 [INFO][485] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:49.257 [INFO][485] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:50.319 [FATAL][485] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:50.340 [INFO][492] config.go 105: Skipping confd config file.\n2020-01-11 15:59:50.340 [INFO][492] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:50.340 [INFO][492] run.go 17: Starting calico-confd\n2020-01-11 15:59:50.341 [INFO][492] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:50.359 [INFO][492] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:50.359 [INFO][492] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:50.359 [INFO][492] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:50.359 [INFO][492] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:50.359 [INFO][492] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:50.359 [INFO][492] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:50.359 [INFO][492] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:51.191 [FATAL][43] daemon.go 641: Exiting. reason=\"Connection to Typha failed\"\n2020-01-11 15:59:51.407 [FATAL][492] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:51.428 [INFO][519] config.go 105: Skipping confd config file.\n2020-01-11 15:59:51.428 [INFO][519] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:51.428 [INFO][519] run.go 17: Starting calico-confd\n2020-01-11 15:59:51.429 [INFO][519] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:51.447 [INFO][519] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:51.447 [INFO][519] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:51.447 [INFO][519] client.go 145: Connecting to Typha. 
addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:51.447 [INFO][519] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:51.447 [INFO][519] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:51.447 [INFO][519] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:51.447 [INFO][519] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:52.367 [ERROR][499] daemon.go 446: Failed to connect to Typha. Retrying... error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:52.495 [FATAL][519] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:52.526 [INFO][537] config.go 105: Skipping confd config file.\n2020-01-11 15:59:52.526 [INFO][537] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:52.526 [INFO][537] run.go 17: Starting calico-confd\n2020-01-11 15:59:52.527 [INFO][537] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:52.548 [INFO][537] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:52.548 [INFO][537] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:52.548 [INFO][537] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:52.548 [INFO][537] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:52.548 [INFO][537] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:52.548 [INFO][537] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:52.548 [INFO][537] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:53.583 [FATAL][537] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:53.619 [INFO][544] config.go 105: Skipping confd config file.\n2020-01-11 15:59:53.619 [INFO][544] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:53.619 [INFO][544] run.go 17: Starting calico-confd\n2020-01-11 15:59:53.621 [INFO][544] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:53.641 [INFO][544] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:53.641 [INFO][544] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:53.641 [INFO][544] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:53.641 [INFO][544] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:53.641 [INFO][544] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:53.641 [INFO][544] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:53.641 [INFO][544] sync_client.go 218: Connecting to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:54.671 [FATAL][544] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:54.691 [INFO][551] config.go 105: Skipping confd config file.\n2020-01-11 15:59:54.691 [INFO][551] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:54.691 [INFO][551] run.go 17: Starting calico-confd\n2020-01-11 15:59:54.692 [INFO][551] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:54.712 [INFO][551] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:54.712 [INFO][551] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:54.712 [INFO][551] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:54.712 [INFO][551] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:54.712 [INFO][551] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:54.712 [INFO][551] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:54.713 [INFO][551] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:55.759 [FATAL][551] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:55.780 [INFO][558] config.go 105: Skipping confd config file.\n2020-01-11 15:59:55.780 [INFO][558] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:55.780 [INFO][558] run.go 17: Starting calico-confd\n2020-01-11 15:59:55.781 [INFO][558] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:55.799 [INFO][558] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:55.799 [INFO][558] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:55.799 [INFO][558] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:55.799 [INFO][558] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:55.799 [INFO][558] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:55.799 [INFO][558] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:55.799 [INFO][558] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:56.847 [FATAL][558] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:56.868 [INFO][565] config.go 105: Skipping confd config file.\n2020-01-11 15:59:56.868 [INFO][565] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:56.868 [INFO][565] run.go 17: Starting calico-confd\n2020-01-11 15:59:56.869 [INFO][565] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:56.887 [INFO][565] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:56.887 [INFO][565] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:56.887 [INFO][565] client.go 145: Connecting to Typha. 
addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:56.887 [INFO][565] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:56.887 [INFO][565] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:56.887 [INFO][565] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:56.887 [INFO][565] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:57.935 [FATAL][565] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 15:59:57.956 [INFO][572] config.go 105: Skipping confd config file.\n2020-01-11 15:59:57.956 [INFO][572] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 15:59:57.956 [INFO][572] run.go 17: Starting calico-confd\n2020-01-11 15:59:57.957 [INFO][572] k8s.go 219: Using host-local IPAM\n2020-01-11 15:59:57.976 [INFO][572] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 15:59:57.976 [INFO][572] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 15:59:57.976 [INFO][572] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 15:59:57.976 [INFO][572] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:57.976 [INFO][572] sync_client.go 169: Starting Typha client\n2020-01-11 15:59:57.976 [INFO][572] sync_client.go 70: requiringTLS=false\n2020-01-11 15:59:57.976 [INFO][572] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:59.023 [INFO][572] sync_client.go 233: Connected to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:59.023 [INFO][572] client.go 186: CALICO_ADVERTISE_CLUSTER_IPS not specified, no cluster ips will be advertised\n2020-01-11 15:59:59.023 [INFO][572] client.go 330: RouteGenerator has indicated it is in sync\n2020-01-11 15:59:59.023 [INFO][572] sync_client.go 268: Started Typha client main loop address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 15:59:59.026 [INFO][572] sync_client.go 325: Server hello message received address=\"100.106.19.47:5473\" connID=0x0 serverVersion=\"v3.8.2\" type=\"bgp\"\n2020-01-11 15:59:59.028 [INFO][572] sync_client.go 296: Status update from Typha. address=\"100.106.19.47:5473\" connID=0x0 newStatus=in-sync type=\"bgp\"\n2020-01-11 15:59:59.028 [INFO][572] client.go 327: Calico Syncer has indicated it is in sync\n2020-01-11 16:00:57.665 [ERROR][572] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"bgp\"\n2020-01-11 16:00:57.665 [INFO][572] sync_client.go 155: Typha client Context asked us to exit connID=0x0 type=\"bgp\"\n2020-01-11 16:00:57.665 [FATAL][572] client.go 169: Connection to Typha failed\n2020-01-11 16:00:57.665 [ERROR][499] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"\"\n2020-01-11 16:00:57.709 [INFO][707] config.go 105: Skipping confd config file.\n2020-01-11 16:00:57.709 [INFO][707] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:00:57.709 [INFO][707] run.go 17: Starting calico-confd\n2020-01-11 16:00:57.710 [INFO][707] k8s.go 219: Using host-local IPAM\n2020-01-11 16:00:57.755 [INFO][707] client.go 224: Found Typha ClusterIP. 
clusterIP=\"100.106.19.47\"\n2020-01-11 16:00:57.755 [INFO][707] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:00:57.755 [INFO][707] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:00:57.755 [INFO][707] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:57.755 [INFO][707] sync_client.go 169: Starting Typha client\n2020-01-11 16:00:57.755 [INFO][707] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:57.755 [INFO][707] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:00:58.799 [FATAL][707] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:00:58.820 [INFO][714] config.go 105: Skipping confd config file.\n2020-01-11 16:00:58.820 [INFO][714] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:00:58.820 [INFO][714] run.go 17: Starting calico-confd\n2020-01-11 16:00:58.821 [INFO][714] k8s.go 219: Using host-local IPAM\n2020-01-11 16:00:58.838 [INFO][714] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:00:58.838 [INFO][714] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:00:58.838 [INFO][714] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:00:58.838 [INFO][714] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:58.838 [INFO][714] sync_client.go 169: Starting Typha client\n2020-01-11 16:00:58.838 [INFO][714] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:58.838 [INFO][714] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:00:59.666 [FATAL][499] daemon.go 641: Exiting. reason=\"Connection to Typha failed\"\n2020-01-11 16:00:59.887 [FATAL][714] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:00:59.908 [INFO][741] config.go 105: Skipping confd config file.\n2020-01-11 16:00:59.908 [INFO][741] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:00:59.908 [INFO][741] run.go 17: Starting calico-confd\n2020-01-11 16:00:59.909 [INFO][741] k8s.go 219: Using host-local IPAM\n2020-01-11 16:00:59.954 [INFO][741] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:00:59.954 [INFO][741] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:00:59.954 [INFO][741] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:00:59.955 [INFO][741] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:59.955 [INFO][741] sync_client.go 169: Starting Typha client\n2020-01-11 16:00:59.955 [INFO][741] sync_client.go 70: requiringTLS=false\n2020-01-11 16:00:59.955 [INFO][741] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:00.847 [ERROR][721] daemon.go 446: Failed to connect to Typha. Retrying... 
error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:00.976 [FATAL][741] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:01.000 [INFO][748] config.go 105: Skipping confd config file.\n2020-01-11 16:01:01.000 [INFO][748] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:01.001 [INFO][748] run.go 17: Starting calico-confd\n2020-01-11 16:01:01.002 [INFO][748] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:01.027 [INFO][748] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:01.027 [INFO][748] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:01.027 [INFO][748] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:01.027 [INFO][748] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:01.027 [INFO][748] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:01.027 [INFO][748] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:01.027 [INFO][748] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:02.063 [FATAL][748] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:02.094 [INFO][767] config.go 105: Skipping confd config file.\n2020-01-11 16:01:02.094 [INFO][767] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:02.094 [INFO][767] run.go 17: Starting calico-confd\n2020-01-11 16:01:02.096 [INFO][767] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:02.237 [INFO][767] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:02.237 [INFO][767] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:02.237 [INFO][767] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:02.237 [INFO][767] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:02.237 [INFO][767] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:02.237 [INFO][767] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:02.237 [INFO][767] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:03.279 [FATAL][767] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:03.301 [INFO][774] config.go 105: Skipping confd config file.\n2020-01-11 16:01:03.302 [INFO][774] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:03.302 [INFO][774] run.go 17: Starting calico-confd\n2020-01-11 16:01:03.303 [INFO][774] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:03.323 [INFO][774] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:03.323 [INFO][774] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:03.323 [INFO][774] client.go 145: Connecting to Typha. 
addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:03.323 [INFO][774] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:03.323 [INFO][774] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:03.323 [INFO][774] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:03.323 [INFO][774] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:04.369 [FATAL][774] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:04.402 [INFO][782] config.go 105: Skipping confd config file.\n2020-01-11 16:01:04.402 [INFO][782] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:04.403 [INFO][782] run.go 17: Starting calico-confd\n2020-01-11 16:01:04.404 [INFO][782] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:04.425 [INFO][782] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:04.425 [INFO][782] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:04.425 [INFO][782] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:04.425 [INFO][782] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:04.425 [INFO][782] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:04.425 [INFO][782] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:04.425 [INFO][782] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:05.455 [FATAL][782] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:05.476 [INFO][789] config.go 105: Skipping confd config file.\n2020-01-11 16:01:05.476 [INFO][789] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:05.476 [INFO][789] run.go 17: Starting calico-confd\n2020-01-11 16:01:05.477 [INFO][789] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:05.496 [INFO][789] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:05.496 [INFO][789] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:05.496 [INFO][789] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:05.496 [INFO][789] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:05.496 [INFO][789] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:05.496 [INFO][789] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:05.496 [INFO][789] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:06.543 [FATAL][789] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:06.564 [INFO][796] config.go 105: Skipping confd config file.\n2020-01-11 16:01:06.564 [INFO][796] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:06.564 [INFO][796] run.go 17: Starting calico-confd\n2020-01-11 16:01:06.565 [INFO][796] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:06.582 [INFO][796] client.go 224: Found Typha ClusterIP. 
clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:06.582 [INFO][796] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:06.582 [INFO][796] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:06.582 [INFO][796] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:06.582 [INFO][796] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:06.582 [INFO][796] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:06.582 [INFO][796] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:07.631 [FATAL][796] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:07.652 [INFO][803] config.go 105: Skipping confd config file.\n2020-01-11 16:01:07.652 [INFO][803] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:07.652 [INFO][803] run.go 17: Starting calico-confd\n2020-01-11 16:01:07.653 [INFO][803] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:07.676 [INFO][803] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:07.676 [INFO][803] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:07.676 [INFO][803] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:07.676 [INFO][803] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:07.676 [INFO][803] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:07.676 [INFO][803] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:07.676 [INFO][803] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:08.719 [FATAL][803] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:08.740 [INFO][810] config.go 105: Skipping confd config file.\n2020-01-11 16:01:08.740 [INFO][810] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:08.740 [INFO][810] run.go 17: Starting calico-confd\n2020-01-11 16:01:08.741 [INFO][810] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:08.759 [INFO][810] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:08.759 [INFO][810] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:08.759 [INFO][810] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:08.759 [INFO][810] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:08.759 [INFO][810] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:08.759 [INFO][810] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:08.759 [INFO][810] sync_client.go 218: Connecting to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:09.807 [FATAL][810] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:09.844 [INFO][817] config.go 105: Skipping confd config file.\n2020-01-11 16:01:09.844 [INFO][817] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:09.845 [INFO][817] run.go 17: Starting calico-confd\n2020-01-11 16:01:09.846 [INFO][817] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:09.868 [INFO][817] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:09.868 [INFO][817] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:09.868 [INFO][817] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:09.868 [INFO][817] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:09.869 [INFO][817] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:09.869 [INFO][817] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:09.869 [INFO][817] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:10.895 [FATAL][817] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:10.916 [INFO][824] config.go 105: Skipping confd config file.\n2020-01-11 16:01:10.916 [INFO][824] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:10.916 [INFO][824] run.go 17: Starting calico-confd\n2020-01-11 16:01:10.917 [INFO][824] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:10.936 [INFO][824] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:10.937 [INFO][824] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:10.937 [INFO][824] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:10.937 [INFO][824] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:10.937 [INFO][824] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:10.937 [INFO][824] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:10.937 [INFO][824] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:11.983 [FATAL][824] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:01:12.024 [INFO][831] config.go 105: Skipping confd config file.\n2020-01-11 16:01:12.024 [INFO][831] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:01:12.024 [INFO][831] run.go 17: Starting calico-confd\n2020-01-11 16:01:12.025 [INFO][831] k8s.go 219: Using host-local IPAM\n2020-01-11 16:01:12.077 [INFO][831] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:01:12.077 [INFO][831] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:01:12.077 [INFO][831] client.go 145: Connecting to Typha. 
addr=\"100.106.19.47:5473\"\n2020-01-11 16:01:12.077 [INFO][831] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:12.078 [INFO][831] sync_client.go 169: Starting Typha client\n2020-01-11 16:01:12.078 [INFO][831] sync_client.go 70: requiringTLS=false\n2020-01-11 16:01:12.078 [INFO][831] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:13.135 [INFO][831] sync_client.go 233: Connected to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:13.135 [INFO][831] client.go 186: CALICO_ADVERTISE_CLUSTER_IPS not specified, no cluster ips will be advertised\n2020-01-11 16:01:13.135 [INFO][831] client.go 330: RouteGenerator has indicated it is in sync\n2020-01-11 16:01:13.135 [INFO][831] sync_client.go 268: Started Typha client main loop address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:01:13.136 [INFO][831] sync_client.go 325: Server hello message received address=\"100.106.19.47:5473\" connID=0x0 serverVersion=\"v3.8.2\" type=\"bgp\"\n2020-01-11 16:01:13.137 [INFO][831] sync_client.go 296: Status update from Typha. address=\"100.106.19.47:5473\" connID=0x0 newStatus=in-sync type=\"bgp\"\n2020-01-11 16:01:13.137 [INFO][831] client.go 327: Calico Syncer has indicated it is in sync\nbird: KIF: Received address message for unknown interface 18\nbird: KIF: Received address message for unknown interface 24\nbird: KIF: Received address message for unknown interface 27\nbird: KIF: Received address message for unknown interface 28\nbird: KIF: Received address message for unknown interface 39\nbird: KIF: Received address message for unknown interface 45\n2020-01-11 16:21:07.380 [ERROR][831] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"bgp\"\n2020-01-11 16:21:07.381 [ERROR][721] sync_client.go 260: Failed to read from server address=\"100.106.19.47:5473\" connID=0x0 error=EOF type=\"\"\n2020-01-11 16:21:07.381 [INFO][831] sync_client.go 155: Typha client Context asked us to exit connID=0x0 type=\"bgp\"\n2020-01-11 16:21:07.381 [FATAL][831] client.go 169: Connection to Typha failed\n2020-01-11 16:21:07.421 [INFO][3162] config.go 105: Skipping confd config file.\n2020-01-11 16:21:07.421 [INFO][3162] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:07.421 [INFO][3162] run.go 17: Starting calico-confd\n2020-01-11 16:21:07.422 [INFO][3162] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:07.450 [INFO][3162] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:07.450 [INFO][3162] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:07.450 [INFO][3162] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:07.450 [INFO][3162] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:07.450 [INFO][3162] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:07.450 [INFO][3162] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:07.450 [INFO][3162] sync_client.go 218: Connecting to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:08.463 [FATAL][3162] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:08.484 [INFO][3171] config.go 105: Skipping confd config file.\n2020-01-11 16:21:08.484 [INFO][3171] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:08.484 [INFO][3171] run.go 17: Starting calico-confd\n2020-01-11 16:21:08.485 [INFO][3171] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:08.504 [INFO][3171] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:08.504 [INFO][3171] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:08.504 [INFO][3171] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:08.504 [INFO][3171] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:08.504 [INFO][3171] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:08.504 [INFO][3171] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:08.504 [INFO][3171] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:09.381 [FATAL][721] daemon.go 641: Exiting. reason=\"Connection to Typha failed\"\n2020-01-11 16:21:09.551 [FATAL][3171] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:09.573 [INFO][3197] config.go 105: Skipping confd config file.\n2020-01-11 16:21:09.574 [INFO][3197] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:09.574 [INFO][3197] run.go 17: Starting calico-confd\n2020-01-11 16:21:09.575 [INFO][3197] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:09.593 [INFO][3197] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:09.594 [INFO][3197] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:09.594 [INFO][3197] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:09.594 [INFO][3197] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:09.594 [INFO][3197] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:09.594 [INFO][3197] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:09.594 [INFO][3197] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:10.511 [ERROR][3177] daemon.go 446: Failed to connect to Typha. Retrying... error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:10.639 [FATAL][3197] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:10.660 [INFO][3204] config.go 105: Skipping confd config file.\n2020-01-11 16:21:10.660 [INFO][3204] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:10.660 [INFO][3204] run.go 17: Starting calico-confd\n2020-01-11 16:21:10.661 [INFO][3204] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:10.680 [INFO][3204] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:10.680 [INFO][3204] client.go 231: Found Typha service port. 
port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:10.680 [INFO][3204] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:10.680 [INFO][3204] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:10.680 [INFO][3204] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:10.680 [INFO][3204] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:10.680 [INFO][3204] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:11.727 [FATAL][3204] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:11.748 [INFO][3211] config.go 105: Skipping confd config file.\n2020-01-11 16:21:11.748 [INFO][3211] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:11.748 [INFO][3211] run.go 17: Starting calico-confd\n2020-01-11 16:21:11.749 [INFO][3211] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:11.772 [INFO][3211] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:11.772 [INFO][3211] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:11.772 [INFO][3211] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:11.772 [INFO][3211] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:11.772 [INFO][3211] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:11.772 [INFO][3211] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:11.772 [INFO][3211] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:12.815 [FATAL][3211] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:12.836 [INFO][3229] config.go 105: Skipping confd config file.\n2020-01-11 16:21:12.836 [INFO][3229] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:12.836 [INFO][3229] run.go 17: Starting calico-confd\n2020-01-11 16:21:12.837 [INFO][3229] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:12.854 [INFO][3229] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:12.854 [INFO][3229] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:12.854 [INFO][3229] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:12.854 [INFO][3229] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:12.854 [INFO][3229] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:12.854 [INFO][3229] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:12.854 [INFO][3229] sync_client.go 218: Connecting to Typha. 
address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:13.903 [FATAL][3229] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:13.923 [INFO][3236] config.go 105: Skipping confd config file.\n2020-01-11 16:21:13.923 [INFO][3236] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:13.924 [INFO][3236] run.go 17: Starting calico-confd\n2020-01-11 16:21:13.924 [INFO][3236] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:13.943 [INFO][3236] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:13.943 [INFO][3236] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:13.943 [INFO][3236] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:13.944 [INFO][3236] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:13.944 [INFO][3236] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:13.944 [INFO][3236] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:13.944 [INFO][3236] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:14.991 [FATAL][3236] client.go 165: Failed to connect to Typha error=dial tcp 100.106.19.47:5473: connect: connection refused\n2020-01-11 16:21:15.012 [INFO][3243] config.go 105: Skipping confd config file.\n2020-01-11 16:21:15.012 [INFO][3243] config.go 199: Found FELIX_TYPHAK8SSERVICENAME=calico-typha\n2020-01-11 16:21:15.012 [INFO][3243] run.go 17: Starting calico-confd\n2020-01-11 16:21:15.013 [INFO][3243] k8s.go 219: Using host-local IPAM\n2020-01-11 16:21:15.032 [INFO][3243] client.go 224: Found Typha ClusterIP. clusterIP=\"100.106.19.47\"\n2020-01-11 16:21:15.032 [INFO][3243] client.go 231: Found Typha service port. port=v1.ServicePort{Name:\"calico-typha\", Protocol:\"TCP\", Port:5473, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:\"calico-typha\"}, NodePort:0}\n2020-01-11 16:21:15.032 [INFO][3243] client.go 145: Connecting to Typha. addr=\"100.106.19.47:5473\"\n2020-01-11 16:21:15.032 [INFO][3243] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:15.032 [INFO][3243] sync_client.go 169: Starting Typha client\n2020-01-11 16:21:15.032 [INFO][3243] sync_client.go 70: requiringTLS=false\n2020-01-11 16:21:15.032 [INFO][3243] sync_client.go 218: Connecting to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:16.079 [INFO][3243] sync_client.go 233: Connected to Typha. address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:16.079 [INFO][3243] client.go 186: CALICO_ADVERTISE_CLUSTER_IPS not specified, no cluster ips will be advertised\n2020-01-11 16:21:16.079 [INFO][3243] client.go 330: RouteGenerator has indicated it is in sync\n2020-01-11 16:21:16.079 [INFO][3243] sync_client.go 268: Started Typha client main loop address=\"100.106.19.47:5473\" connID=0x0 type=\"bgp\"\n2020-01-11 16:21:16.080 [INFO][3243] sync_client.go 325: Server hello message received address=\"100.106.19.47:5473\" connID=0x0 serverVersion=\"v3.8.2\" type=\"bgp\"\n2020-01-11 16:21:16.082 [INFO][3243] sync_client.go 296: Status update from Typha. 
address=\"100.106.19.47:5473\" connID=0x0 newStatus=in-sync type=\"bgp\"\n2020-01-11 16:21:16.082 [INFO][3243] client.go 327: Calico Syncer has indicated it is in sync\nbird: KIF: Received address message for unknown interface 46\nbird: KIF: Received address message for unknown interface 49\nbird: KIF: Received address message for unknown interface 59\nbird: KIF: Received address message for unknown interface 62\nbird: KIF: Received address message for unknown interface 63\nbird: KIF: Received address message for unknown interface 66\nbird: KIF: Received address message for unknown interface 70\nbird: KIF: Received address message for unknown interface 74\nbird: KIF: Received address message for unknown interface 75\nbird: KIF: Received address message for unknown interface 78\nbird: KIF: Received address message for unknown interface 99\nbird: KIF: Received address message for unknown interface 109\nbird: KIF: Received address message for unknown interface 111\nbird: KIF: Received address message for unknown interface 118\nbird: KIF: Received address message for unknown interface 132\nbird: KIF: Received address message for unknown interface 141\nbird: KIF: Received address message for unknown interface 143\nbird: KIF: Received address message for unknown interface 146\nbird: KIF: Received address message for unknown interface 147\nbird: KIF: Received address message for unknown interface 150\nbird: KIF: Received address message for unknown interface 152\nbird: KIF: Received address message for unknown interface 161\nbird: KIF: Received address message for unknown interface 166\nbird: KIF: Received address message for unknown interface 174\nbird: KIF: Received address message for unknown interface 175\nbird: KIF: Received address message for unknown interface 168\nbird: KIF: Received address message for unknown interface 177\nbird: KIF: Received address message for unknown interface 183\nbird: KIF: Received address message for unknown interface 188\nbird: KIF: Received address message for unknown interface 192\nbird: KIF: Received address message for unknown interface 200\nbird: KIF: Received address message for unknown interface 207\nbird: KIF: Received address message for unknown interface 212\nbird: KIF: Received address message for unknown interface 213\nbird: KIF: Received address message for unknown interface 219\nbird: KIF: Received address message for unknown interface 220\nbird: KIF: Received address message for unknown interface 228\nbird: KIF: Received address message for unknown interface 230\nbird: KIF: Received address message for unknown interface 231\nbird: KIF: Received address message for unknown interface 236\nbird: KIF: Received address message for unknown interface 237\nbird: KIF: Received address message for unknown interface 239\nbird: KIF: Received address message for unknown interface 249\nbird: KIF: Received address message for unknown interface 251\nbird: KIF: Received address message for unknown interface 254\nbird: KIF: Received address message for unknown interface 256\nbird: KIF: Received address message for unknown interface 259\nbird: KIF: Received address message for unknown interface 262\nbird: KIF: Received address message for unknown interface 264\nbird: KIF: Received address message for unknown interface 269\nbird: KIF: Received address message for unknown interface 272\nbird: KIF: Received address message for unknown interface 275\nbird: KIF: Received address message for unknown interface 278\nbird: KIF: Received address message for unknown interface 
280\nbird: KIF: Received address message for unknown interface 282\nbird: KIF: Received address message for unknown interface 283\nbird: KIF: Received address message for unknown interface 285\nbird: KIF: Received address message for unknown interface 287\nbird: KIF: Received address message for unknown interface 296\nbird: KIF: Received address message for unknown interface 302\nbird: KIF: Received address message for unknown interface 305\nbird: KIF: Received address message for unknown interface 310\nbird: KIF: Received address message for unknown interface 313\nbird: KIF: Received address message for unknown interface 315\nbird: KIF: Received address message for unknown interface 322\nbird: KIF: Received address message for unknown interface 324\nbird: KIF: Received address message for unknown interface 337\nbird: KIF: Received address message for unknown interface 345\nbird: KIF: Received address message for unknown interface 368\nbird: KIF: Received address message for unknown interface 371\nbird: KIF: Received address message for unknown interface 387\nbird: KIF: Received address message for unknown interface 389\nbird: KIF: Received address message for unknown interface 400\nbird: KIF: Received address message for unknown interface 402\nbird: KIF: Received address message for unknown interface 403\nbird: KIF: Received address message for unknown interface 407\nbird: KIF: Received address message for unknown interface 410\nbird: KIF: Received address message for unknown interface 474\nbird: KIF: Received address message for unknown interface 490\nbird: KIF: Received address message for unknown interface 477\nbird: KIF: Received address message for unknown interface 486\nbird: KIF: Received address message for unknown interface 500\nbird: KIF: Received address message for unknown interface 428\nbird: KIF: Received address message for unknown interface 431\nbird: KIF: Received address message for unknown interface 429\nbird: KIF: Received address message for unknown interface 459\nbird: KIF: Received address message for unknown interface 517\nbird: KIF: Received address message for unknown interface 519\nbird: KIF: Received address message for unknown interface 522\nbird: KIF: Received address message for unknown interface 544\nbird: KIF: Received address message for unknown interface 545\nbird: KIF: Received address message for unknown interface 548\nbird: KIF: Received address message for unknown interface 550\nbird: KIF: Received address message for unknown interface 551\nbird: KIF: Received address message for unknown interface 558\nbird: KIF: Received address message for unknown interface 569\nbird: KIF: Received address message for unknown interface 585\nbird: KIF: Received address message for unknown interface 589\nbird: KIF: Received address message for unknown interface 591\nbird: KIF: Received address message for unknown interface 593\nbird: KIF: Received address message for unknown interface 595\nbird: KIF: Received address message for unknown interface 597\nbird: KIF: Received address message for unknown interface 599\nbird: KIF: Received address message for unknown interface 604\nbird: KIF: Received address message for unknown interface 618\nbird: KIF: Received address message for unknown interface 616\nbird: KIF: Received address message for unknown interface 645\nbird: KIF: Received address message for unknown interface 622\nbird: KIF: Received address message for unknown interface 663\nbird: KIF: Received address message for unknown interface 667\nbird: KIF: Received 
address message for unknown interface 701\nbird: KIF: Received address message for unknown interface 712\nbird: KIF: Received address message for unknown interface 714\nbird: KIF: Received address message for unknown interface 717\nbird: KIF: Received address message for unknown interface 724\nbird: KIF: Received address message for unknown interface 727\nbird: KIF: Received address message for unknown interface 728\nbird: KIF: Received address message for unknown interface 731\nbird: KIF: Received address message for unknown interface 734\nbird: KIF: Received address message for unknown interface 744\nbird: KIF: Received address message for unknown interface 758\nbird: KIF: Received address message for unknown interface 759\nbird: KIF: Received address message for unknown interface 736\nbird: KIF: Received address message for unknown interface 771\nbird: KIF: Received address message for unknown interface 779\nbird: KIF: Received address message for unknown interface 781\nbird: KIF: Received address message for unknown interface 785\nbird: KIF: Received address message for unknown interface 795\nbird: KIF: Received address message for unknown interface 805\nbird: KIF: Received address message for unknown interface 815\nbird: KIF: Received address message for unknown interface 813\nbird: KIF: Received address message for unknown interface 819\nbird: KIF: Received address message for unknown interface 826\nbird: KIF: Received address message for unknown interface 828\nbird: KIF: Received address message for unknown interface 832\nbird: KIF: Received address message for unknown interface 827\nbird: KIF: Received address message for unknown interface 839\nbird: KIF: Received address message for unknown interface 838\nbird: KIF: Received address message for unknown interface 849\nbird: KIF: Received address message for unknown interface 857\nbird: KIF: Received address message for unknown interface 866\nbird: KIF: Received address message for unknown interface 864\nbird: KIF: Received address message for unknown interface 869\nbird: KIF: Received address message for unknown interface 887\nbird: KIF: Received address message for unknown interface 881\nbird: KIF: Received address message for unknown interface 888\nbird: KIF: Received address message for unknown interface 853\nbird: KIF: Received address message for unknown interface 899\nbird: KIF: Received address message for unknown interface 919\nbird: KIF: Received address message for unknown interface 925\nbird: KIF: Received address message for unknown interface 935\nbird: KIF: Received address message for unknown interface 946\nbird: KIF: Received address message for unknown interface 957\nbird: KIF: Received address message for unknown interface 958\nbird: KIF: Received address message for unknown interface 965\nbird: KIF: Received address message for unknown interface 971\nbird: KIF: Received address message for unknown interface 962\nbird: KIF: Received address message for unknown interface 979\nbird: KIF: Received address message for unknown interface 986\nbird: KIF: Received address message for unknown interface 993\nbird: KIF: Received address message for unknown interface 995\nbird: KIF: Received address message for unknown interface 997\nbird: KIF: Received address message for unknown interface 998\nbird: KIF: Received address message for unknown interface 1003\nbird: KIF: Received address message for unknown interface 1006\nbird: KIF: Received address message for unknown interface 1011\nbird: KIF: Received address message for 
unknown interface 1014\nbird: KIF: Received address message for unknown interface 1019\nbird: KIF: Received address message for unknown interface 1021\nbird: KIF: Received address message for unknown interface 1022\nbird: KIF: Received address message for unknown interface 1017\nbird: KIF: Received address message for unknown interface 1031\nbird: KIF: Received address message for unknown interface 1047\nbird: KIF: Received address message for unknown interface 1043\nbird: KIF: Received address message for unknown interface 1046\nbird: KIF: Received address message for unknown interface 1030\nbird: KIF: Received address message for unknown interface 1053\nbird: KIF: Received address message for unknown interface 1055\nbird: KIF: Received address message for unknown interface 1057\nbird: KIF: Received address message for unknown interface 1054\nbird: KIF: Received address message for unknown interface 1066\nbird: KIF: Received address message for unknown interface 1079\nbird: KIF: Received address message for unknown interface 1088\nbird: KIF: Received address message for unknown interface 1090\nbird: KIF: Received address message for unknown interface 1091\nbird: KIF: Received address message for unknown interface 1096\nbird: KIF: Received address message for unknown interface 1110\nbird: KIF: Received address message for unknown interface 1123\nbird: KIF: Received address message for unknown interface 1141\nbird: KIF: Received address message for unknown interface 1132\nbird: KIF: Received address message for unknown interface 1134\nbird: KIF: Received address message for unknown interface 1143\nbird: KIF: Received address message for unknown interface 1148\nbird: KIF: Received address message for unknown interface 1145\nbird: KIF: Received address message for unknown interface 1150\nbird: KIF: Received address message for unknown interface 1156\nbird: KIF: Received address message for unknown interface 1138\nbird: KIF: Received address message for unknown interface 1159\nbird: KIF: Received address message for unknown interface 1180\nbird: KIF: Received address message for unknown interface 1178\nbird: KIF: Received address message for unknown interface 1175\nbird: KIF: Received address message for unknown interface 1194\nbird: KIF: Received address message for unknown interface 1192\nbird: KIF: Received address message for unknown interface 1181\nbird: KIF: Received address message for unknown interface 1202\nbird: KIF: Received address message for unknown interface 1211\nbird: KIF: Received address message for unknown interface 1205\nbird: KIF: Received address message for unknown interface 1236\nbird: KIF: Received address message for unknown interface 1229\nbird: KIF: Received address message for unknown interface 1231\nbird: KIF: Received address message for unknown interface 1243\nbird: KIF: Received address message for unknown interface 1251\nbird: KIF: Received address message for unknown interface 1261\nbird: KIF: Received address message for unknown interface 1269\nbird: KIF: Received address message for unknown interface 1278\nbird: KIF: Received address message for unknown interface 1295\nbird: KIF: Received address message for unknown interface 1303\nbird: KIF: Received address message for unknown interface 1311\nbird: KIF: Received address message for unknown interface 1313\nbird: KIF: Received address message for unknown interface 1318\nbird: KIF: Received address message for unknown interface 1320\nbird: KIF: Received address message for unknown interface 1324\nbird: KIF: 
Received address message for unknown interface 1325\nbird: KIF: Received address message for unknown interface 1341\nbird: KIF: Received address message for unknown interface 1342\nbird: KIF: Received address message for unknown interface 1290\nbird: KIF: Received address message for unknown interface 1349\nbird: KIF: Received address message for unknown interface 1357\nbird: KIF: Received address message for unknown interface 1372\nbird: KIF: Received address message for unknown interface 1365\nbird: KIF: Received address message for unknown interface 1376\nbird: KIF: Received address message for unknown interface 1374\nbird: KIF: Received address message for unknown interface 1396\n==== END logs for container calico-node of pod kube-system/calico-node-m8r2d ====\n==== START logs for container calico-typha of pod kube-system/calico-typha-deploy-9f6b455c4-vdrzx ====\nE0111 17:11:22.811944 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.812338 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.812608 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.812852 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.812894 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.813038 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.813165 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.813195 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.813316 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.813708 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.814049 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.814323 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.814551 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.814888 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent 
GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:22.815188 7 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\n2020-01-11 17:11:22.815 [ERROR][7] lookup.go 61: Failed to get Typha endpoint from Kubernetes error=Get https://100.104.0.1:443/api/v1/namespaces/kube-system/endpoints/calico-typha: http2: server sent GOAWAY and closed the connection; LastStreamID=1655, ErrCode=NO_ERROR, debug=\"\"\n2020-01-11 17:11:42.821 [ERROR][7] lookup.go 61: Failed to get Typha endpoint from Kubernetes error=Get https://100.104.0.1:443/api/v1/namespaces/kube-system/endpoints/calico-typha: net/http: TLS handshake timeout\n==== END logs for container calico-typha of pod kube-system/calico-typha-deploy-9f6b455c4-vdrzx ====\n==== START logs for container autoscaler of pod kube-system/calico-typha-horizontal-autoscaler-85c99966bb-6j6rp ====\nI0111 15:57:01.434613 1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/calico-typha-deploy\nI0111 15:57:02.038592 1 plugin.go:50] Set control mode to ladder\nI0111 15:57:02.038609 1 ladder_controller.go:72] Detected ConfigMap version change (old: new: 275) - rebuilding lookup entries\nI0111 15:57:02.038616 1 ladder_controller.go:73] Params from apiserver: \n{\n \"coresToReplicas\": [],\n \"nodesToReplicas\":\n [\n [1, 1],\n [10, 2],\n [100, 3],\n [250, 4],\n [500, 5],\n [1000, 6],\n [1500, 7],\n [2000, 8]\n ]\n}\nE0111 17:11:22.528448 1 autoscaler_server.go:95] Error syncing configMap with apiserver: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/configmaps/calico-typha-horizontal-autoscaler: http2: server sent GOAWAY and closed the connection; LastStreamID=1637, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:32.530942 1 autoscaler_server.go:95] Error syncing configMap with apiserver: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/configmaps/calico-typha-horizontal-autoscaler: net/http: TLS handshake timeout\nE0111 17:11:32.531344 1 reflector.go:283] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to watch *v1.Node: Get https://100.104.0.1:443/api/v1/nodes?watch=true: net/http: TLS handshake timeout\nE0111 17:11:42.534801 1 autoscaler_server.go:95] Error syncing configMap with apiserver: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/configmaps/calico-typha-horizontal-autoscaler: net/http: TLS handshake timeout\nE0111 17:11:43.538207 1 reflector.go:125] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to list *v1.Node: Get https://100.104.0.1:443/api/v1/nodes: net/http: TLS handshake timeout\nE0111 17:11:52.537575 1 autoscaler_server.go:95] Error syncing configMap with apiserver: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/configmaps/calico-typha-horizontal-autoscaler: net/http: TLS handshake timeout\nE0111 17:11:54.540919 1 reflector.go:125] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to list *v1.Node: Get https://100.104.0.1:443/api/v1/nodes: net/http: TLS handshake timeout\nE0111 17:12:02.540410 1 autoscaler_server.go:95] Error syncing configMap with apiserver: Get https://100.104.0.1:443/api/v1/namespaces/kube-system/configmaps/calico-typha-horizontal-autoscaler: net/http: TLS handshake timeout\nE0111 17:12:05.543538 1 reflector.go:125] 
github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to list *v1.Node: Get https://100.104.0.1:443/api/v1/nodes: net/http: TLS handshake timeout\n==== END logs for container autoscaler of pod kube-system/calico-typha-horizontal-autoscaler-85c99966bb-6j6rp ====\n==== START logs for container autoscaler of pod kube-system/calico-typha-vertical-autoscaler-5769b74b58-r8t6r ====\nI0111 15:59:49.130005 1 autoscaler.go:46] Scaling namespace: kube-system, target: deployment/calico-typha-deploy\nI0111 15:59:49.162244 1 autoscaler_server.go:120] setting config = { [calico-typha]: { requests: { [cpu]: { base=120m max=1 incr=80m nodes_incr=10 }, }, limits: { } }, }\nI0111 15:59:49.162282 1 autoscaler_server.go:148] Updating resource for nodes: 2, cores: 4\nI0111 15:59:49.162289 1 autoscaler_server.go:162] Setting calico-typha requests[\"cpu\"] = 200m\nE0111 17:11:23.408212 1 autoscaler_server.go:100] Error getting cluster size: Get https://100.104.0.1:443/api/v1/nodes: unexpected EOF\nE0111 17:11:33.414794 1 autoscaler_server.go:100] Error getting cluster size: Get https://100.104.0.1:443/api/v1/nodes: net/http: TLS handshake timeout\nE0111 17:11:59.160717 1 autoscaler_server.go:100] Error getting cluster size: Get https://100.104.0.1:443/api/v1/nodes: net/http: TLS handshake timeout\n==== END logs for container autoscaler of pod kube-system/calico-typha-vertical-autoscaler-5769b74b58-r8t6r ====\n==== START logs for container coredns of pod kube-system/coredns-59c969ffb8-57m7v ====\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n.:8053\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] plugin/reload: Running configuration MD5 = 79589d3279972e6a52f3adf92ec29d79\n ______ ____ _ _______\n / ____/___ ________ / __ \\/ | / / ___/\t~ CoreDNS-1.6.3\n / / / __ \\/ ___/ _ \\/ / / / |/ /\\__ \\ \t~ linux/amd64, go1.12.9, 37b9550\n/ /___/ /_/ / / / __/ /_/ / /| /___/ / \n\\____/\\____/_/ \\___/_____/_/ |_//____/ \n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.6:60925 - 13014 \"AAAA IN ip-10-250-27-25.ec2.internal.ec2.internal. 
udp 59 false 512\" NXDOMAIN qr,rd,ra 59 0.001221899s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\nW0111 16:02:27.615410 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Namespace ended with: too old resource version: 188 (1668)\nW0111 16:02:27.615410 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Namespace ended with: too old resource version: 188 (1668)\nW0111 16:02:27.618675 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Service ended with: too old resource version: 478 (1668)\nW0111 16:02:27.618675 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Service ended with: too old resource version: 478 (1668)\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: 
custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.5:47270 - 15976 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.001214253s\n[INFO] 100.64.1.5:43901 - 58189 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.001125973s\n[INFO] 100.64.1.5:38319 - 51761 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.001127974s\n[INFO] 100.64.1.5:50285 - 33590 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.002437395s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.5:43985 - 42619 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.001158428s\n[INFO] 100.64.1.5:46317 - 27163 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.000978885s\n[INFO] 100.64.1.5:52013 - 24447 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.001220338s\n[INFO] 100.64.1.5:48649 - 36475 \"AAAA IN dns-querier-1.dns-test-service.dns-5825.svc.cluster.local.ec2.internal. udp 88 false 512\" NXDOMAIN qr,rd,ra 88 0.001174225s\n[INFO] 100.64.1.5:57433 - 34306 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 68 0.00115823s\n[INFO] 100.64.1.5:57797 - 60232 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.003181975s\n[INFO] 100.64.1.5:42760 - 61125 \"AAAA IN dns-querier-1.dns-test-service.dns-5825.svc.cluster.local.ec2.internal. udp 88 false 512\" NXDOMAIN qr,rd,ra 88 0.001037642s\n[INFO] 100.64.1.5:60523 - 30286 \"AAAA IN dns-querier-1.dns-test-service.dns-5825.svc.cluster.local.ec2.internal. udp 88 false 512\" NXDOMAIN qr,rd,ra 88 0.00118376s\n[INFO] 100.64.1.5:59687 - 53589 \"AAAA IN dns-querier-1.ec2.internal. udp 44 false 512\" NXDOMAIN qr,rd,ra 44 0.000229213s\n[INFO] 100.64.1.5:48097 - 63687 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. 
tcp 79 false 65535\" NXDOMAIN qr,rd,ra 68 0.001061323s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\nE0111 
17:11:33.411781 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://100.104.0.1:443/api/v1/services?resourceVersion=14276&timeout=6m34s&timeoutSeconds=394&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.413929 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://100.104.0.1:443/api/v1/endpoints?resourceVersion=14365&timeout=9m47s&timeoutSeconds=587&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.414000 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://100.104.0.1:443/api/v1/namespaces?resourceVersion=14290&timeout=7m16s&timeoutSeconds=436&watch=true: net/http: TLS handshake timeout\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\nI0111 17:11:44.414353 1 trace.go:82] Trace[1293567523]: \"Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch\" (started: 2020-01-11 17:11:34.411910501 +0000 UTC m=+4496.647623202) (total time: 10.002408197s):\nTrace[1293567523]: [10.002408197s] [10.002408197s] END\nE0111 17:11:44.414379 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://100.104.0.1:443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nE0111 17:11:44.416288 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://100.104.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:44.416274 1 trace.go:82] Trace[1331176132]: \"Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch\" (started: 2020-01-11 17:11:34.414049716 +0000 UTC m=+4496.649762410) (total time: 10.002193512s):\nTrace[1331176132]: [10.002193512s] [10.002193512s] END\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.158:50693 - 40020 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.018425777s\n[INFO] 100.64.1.158:53815 - 9350 \"AAAA IN dns-querier-2.dns-test-service-2.dns-577.svc.cluster.local.ec2.internal. udp 89 false 512\" NXDOMAIN qr,rd,ra 89 0.001160506s\n[INFO] 100.64.1.158:43204 - 43445 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001089812s\n[INFO] 100.64.1.158:33551 - 32153 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001059707s\n[INFO] 100.64.1.158:33855 - 54904 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00107727s\n[INFO] 100.64.1.158:53850 - 25725 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001124271s\n[INFO] 100.64.1.158:33553 - 34933 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.000987772s\n[INFO] 100.64.1.158:41747 - 6768 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000905216s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.179:47245 - 34571 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. 
udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001332434s\n[INFO] 100.64.1.179:44151 - 19115 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002319931s\n[INFO] 100.64.1.179:50614 - 41363 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.000990835s\n[INFO] 100.64.1.179:42685 - 35334 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001195577s\n[INFO] 100.64.1.179:39102 - 6968 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001229332s\n[INFO] 100.64.1.179:41640 - 23867 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.003066117s\n[INFO] 100.64.1.179:36639 - 3296 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001095846s\n[INFO] 100.64.1.179:34498 - 4903 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001228001s\n[INFO] 100.64.1.179:54187 - 46151 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001094501s\n[INFO] 100.64.1.179:36626 - 22285 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.000974156s\n[INFO] 100.64.1.179:39201 - 27437 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00108301s\n[INFO] 100.64.1.179:45733 - 51741 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001230901s\n[INFO] 100.64.1.179:60247 - 17511 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000961002s\n[INFO] 100.64.1.179:50947 - 14906 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000991383s\n[INFO] 100.64.1.179:33501 - 52489 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.000929984s\n[INFO] 100.64.1.179:36488 - 9569 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001170484s\n[INFO] 100.64.1.179:44535 - 54635 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001150757s\n[INFO] 100.64.1.179:51581 - 7422 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002378847s\n[INFO] 100.64.1.179:40255 - 40036 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.005156316s\n[INFO] 100.64.1.179:36605 - 32080 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001192909s\n[INFO] 100.64.1.179:34519 - 38563 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000908363s\n[INFO] 100.64.1.179:40421 - 55091 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001340884s\n[INFO] 100.64.1.179:56875 - 60243 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000974279s\n[INFO] 100.64.1.179:54195 - 44829 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. 
tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000909939s\n[INFO] 100.64.1.179:44028 - 42561 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001026775s\n[INFO] 100.64.1.179:52697 - 1661 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000828276s\n[INFO] 100.64.1.179:54199 - 10023 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001226523s\n[INFO] 100.64.1.179:53019 - 27487 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.00095406s\n[INFO] 100.64.1.179:57109 - 32639 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000957217s\n[INFO] 100.64.1.179:53707 - 39597 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001063691s\n[INFO] 100.64.1.179:50505 - 42390 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000936495s\n[INFO] 100.64.1.179:38246 - 40796 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001048714s\n[INFO] 100.64.1.179:38212 - 63412 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001126176s\n[INFO] 100.64.1.179:35147 - 42128 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.000971676s\n[INFO] 100.64.1.179:33429 - 15436 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.002587903s\n[INFO] 100.64.1.179:58039 - 4119 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001166367s\n[INFO] 100.64.1.179:44591 - 48487 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.002444322s\n[INFO] 100.64.1.179:45082 - 1602 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001280817s\n[INFO] 100.64.1.179:54601 - 51334 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001178245s\n[INFO] 100.64.1.179:55887 - 50557 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001018041s\n[INFO] 100.64.1.179:48416 - 58504 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.00112022s\n[INFO] 100.64.1.179:42567 - 64655 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001985695s\n[INFO] 100.64.1.179:44810 - 37346 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001191172s\n[INFO] 100.64.1.179:50379 - 58195 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.003108422s\n[INFO] 100.64.1.179:35188 - 13786 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001048712s\n[INFO] 100.64.1.179:52875 - 15794 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001042908s\n[INFO] 100.64.1.179:35400 - 6766 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. 
udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001344237s\n[INFO] 100.64.1.179:34233 - 45859 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001035716s\n[INFO] 100.64.1.179:40255 - 60280 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.000968979s\n[INFO] 100.64.1.179:47115 - 45164 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001076195s\n[INFO] 100.64.1.179:51823 - 42820 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001083291s\n[INFO] 100.64.1.179:53608 - 8052 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001382086s\n[INFO] 100.64.1.179:35465 - 15212 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001075337s\n[INFO] 100.64.1.179:40643 - 63631 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.007590942s\n[INFO] 100.64.1.179:37843 - 60876 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001050453s\n[INFO] 100.64.1.179:49349 - 40265 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.00124149s\n[INFO] 100.64.1.179:34011 - 45985 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001036419s\n[INFO] 100.64.1.179:54835 - 47425 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00096518s\n[INFO] 100.64.1.179:44001 - 33653 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000983256s\n[INFO] 100.64.1.179:49167 - 60733 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000889044s\n[INFO] 100.64.1.179:39763 - 12662 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001989633s\n[INFO] 100.64.1.179:60189 - 46502 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001167865s\n[INFO] 100.64.1.179:56273 - 28557 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000960738s\n[INFO] 100.64.1.179:42165 - 40053 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.00104906s\n[INFO] 100.64.1.179:39330 - 25970 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001095076s\n[INFO] 100.64.1.179:57696 - 48586 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.000931098s\n[INFO] 100.64.1.179:34039 - 23438 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.003217337s\n[INFO] 100.64.1.179:56287 - 21179 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001213407s\n[INFO] 100.64.1.179:48341 - 1723 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000932128s\n[INFO] 100.64.1.179:44031 - 63335 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. 
tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000919084s\n[INFO] 100.64.1.179:35481 - 15263 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001047531s\n[INFO] 100.64.1.179:48391 - 20415 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000865818s\n[INFO] 100.64.1.179:33837 - 53956 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.006712302s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.179:49503 - 33723 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001090315s\n[INFO] 100.64.1.179:54298 - 41287 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001088467s\n[INFO] 100.64.1.179:55618 - 64496 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001287815s\n[INFO] 100.64.1.179:33583 - 400 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001068176s\n[INFO] 100.64.1.179:35747 - 1490 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001268325s\n[INFO] 100.64.1.179:34074 - 29229 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.000942389s\n[INFO] 100.64.1.179:33223 - 29266 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001117146s\n[INFO] 100.64.1.179:56032 - 31173 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001067696s\n[INFO] 100.64.1.179:55865 - 53789 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001098175s\n[INFO] 100.64.1.179:52381 - 38333 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000952609s\n[INFO] 100.64.1.179:52192 - 10375 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.002947207s\n[INFO] 100.64.1.179:47185 - 22699 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000973209s\n[INFO] 100.64.1.179:44487 - 64411 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001125018s\n[INFO] 100.64.1.179:51749 - 37855 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.003093803s\n[INFO] 100.64.1.179:37446 - 1562 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001126257s\n[INFO] 100.64.1.179:54459 - 51641 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001012244s\n[INFO] 100.64.1.179:56218 - 3570 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001304191s\n[INFO] 100.64.1.179:39949 - 15000 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.002177834s\n[INFO] 100.64.1.179:44617 - 38926 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. 
tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.00099646s\n[INFO] 100.64.1.179:55547 - 29232 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.00112546s\n[INFO] 100.64.1.179:43557 - 10433 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001205475s\n[INFO] 100.64.1.179:36984 - 50813 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001149228s\n[INFO] 100.64.1.179:59291 - 357 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001007463s\n[INFO] 100.64.1.179:52071 - 54243 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001021599s\n[INFO] 100.64.1.179:33396 - 6171 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001315588s\n[INFO] 100.64.1.179:51057 - 11323 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000910652s\n[INFO] 100.64.1.179:46978 - 32785 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001103363s\n[INFO] 100.64.1.179:41281 - 33900 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.006793913s\n[INFO] 100.64.1.179:49923 - 63206 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.002305763s\n[INFO] 100.64.1.179:57675 - 48649 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000905867s\n[INFO] 100.64.1.179:42008 - 577 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001257312s\n[INFO] 100.64.1.179:35099 - 5729 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000936464s\n[INFO] 100.64.1.179:45865 - 23066 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.002338856s\n[INFO] 100.64.1.179:33336 - 54663 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001091683s\n[INFO] 100.64.1.179:59572 - 13885 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001191575s\n[INFO] 100.64.1.179:53127 - 19037 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001066165s\n[INFO] 100.64.1.179:47561 - 21045 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001188861s\n[INFO] 100.64.1.179:51281 - 17882 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001244036s\n[INFO] 100.64.1.179:59987 - 6431 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000988154s\n[INFO] 100.64.1.179:45057 - 36558 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001043942s\n[INFO] 100.64.1.179:44565 - 3388 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001013129s\n[INFO] 100.64.1.179:50203 - 49810 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. 
udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.000996605s\n[INFO] 100.64.1.179:43399 - 56970 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001057494s\n[INFO] 100.64.1.179:54955 - 23751 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000968668s\n[INFO] 100.64.1.179:50744 - 16487 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.000872861s\n[INFO] 100.64.1.179:50654 - 38963 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001112436s\n[INFO] 100.64.1.179:55313 - 10124 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001387299s\n[INFO] 100.64.1.179:47128 - 14800 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.000940372s\n[INFO] 100.64.1.179:53842 - 54280 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001291969s\n[INFO] 100.64.1.179:36683 - 36635 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.00123067s\n[INFO] 100.64.1.179:37721 - 63548 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.003059651s\n[INFO] 100.64.1.179:58923 - 22236 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000973299s\n[INFO] 100.64.1.179:38075 - 54873 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001247158s\n[INFO] 100.64.1.179:46099 - 62033 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00107171s\n[INFO] 100.64.1.179:39074 - 45604 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.00538235s\n[INFO] 100.64.1.179:41645 - 9806 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001540619s\n[INFO] 100.64.1.179:50228 - 23633 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001135227s\n[INFO] 100.64.1.179:55812 - 53316 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001086428s\n[INFO] 100.64.1.179:43589 - 32986 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001173887s\n[INFO] 100.64.1.179:37676 - 57475 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001112585s\n[INFO] 100.64.1.179:46808 - 59483 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001190874s\n[INFO] 100.64.1.179:39865 - 33954 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.009330959s\n[INFO] 100.64.1.179:33514 - 12339 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001069758s\n[INFO] 100.64.1.179:36397 - 37112 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000980826s\n[INFO] 100.64.1.179:50747 - 12407 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. 
tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.004360783s\n[INFO] 100.64.1.179:59811 - 14415 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001030763s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: 
custom/*.server\n[INFO] 100.64.1.135:58913 - 7902 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001530638s\n[INFO] 100.64.1.135:42690 - 63738 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 78 false 4096\" NXDOMAIN qr,rd,ra 67 0.001221118s\n[INFO] 100.64.1.135:55329 - 2884 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.009390762s\n[INFO] 100.64.1.135:36027 - 59283 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001053243s\n[INFO] 100.64.1.135:57898 - 21301 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 78 false 4096\" NXDOMAIN qr,rd,ra 67 0.001202817s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.137:54084 - 27365 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001308381s\n[INFO] 100.64.1.137:57769 - 4896 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.002138278s\n[INFO] 100.64.1.137:51404 - 59908 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.002608062s\n[INFO] 100.64.1.137:53495 - 15548 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001117241s\n[INFO] 100.64.1.137:34177 - 92 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001039536s\n[INFO] 100.64.1.137:41953 - 57727 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001081688s\n[INFO] 100.64.1.137:44419 - 10248 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000894593s\n[INFO] 100.64.1.137:48129 - 30297 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001387062s\n[INFO] 100.64.1.137:56529 - 35449 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. 
tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000946101s\n[INFO] 100.64.1.137:40350 - 51473 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001235447s\n[INFO] 100.64.1.137:42730 - 43325 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001042724s\n[INFO] 100.64.1.137:50959 - 35489 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000855418s\n[INFO] 100.64.1.137:33986 - 21294 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001139025s\n[INFO] 100.64.1.137:48694 - 47877 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.00156445s\n[INFO] 100.64.1.137:48585 - 39811 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001079299s\n[INFO] 100.64.1.137:56938 - 45030 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001116042s\n[INFO] 100.64.1.137:49160 - 57218 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001137144s\n[INFO] 100.64.1.137:36349 - 62370 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.00213946s\n[INFO] 100.64.1.137:45100 - 42247 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.00133078s\n[INFO] 100.64.1.137:41251 - 1560 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001034933s\n[INFO] 100.64.1.137:56922 - 26691 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001136945s\n[INFO] 100.64.1.137:58611 - 59576 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000948377s\n[INFO] 100.64.1.137:43789 - 32759 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001016692s\n[INFO] 100.64.1.137:50582 - 3855 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001105871s\n[INFO] 100.64.1.137:43965 - 53935 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000960628s\n[INFO] 100.64.1.137:44999 - 16128 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.004068315s\n[INFO] 100.64.1.137:49792 - 39779 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001043562s\n[INFO] 100.64.1.137:38911 - 44931 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001009994s\n[INFO] 100.64.1.137:34604 - 13880 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001057915s\n[INFO] 100.64.1.137:41150 - 43734 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001227136s\n[INFO] 100.64.1.137:56457 - 21518 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001544874s\n[INFO] 100.64.1.137:58635 - 28629 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. 
udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001331907s\n[INFO] 100.64.1.137:60667 - 54957 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.002065696s\n[INFO] 100.64.1.137:47857 - 48915 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001078206s\n[INFO] 100.64.1.137:37396 - 55293 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001190926s\n[INFO] 100.64.1.137:45903 - 53116 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000927049s\n[INFO] 100.64.1.137:58442 - 19625 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001199541s\n[INFO] 100.64.1.137:42360 - 40801 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001180856s\n[INFO] 100.64.1.137:58419 - 25345 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000905954s\n[INFO] 100.64.1.137:56729 - 44026 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001216604s\n[INFO] 100.64.1.137:38934 - 55550 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.00095211s\n[INFO] 100.64.1.137:42449 - 60702 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001005524s\n[INFO] 100.64.1.137:58271 - 1925 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000941092s\n[INFO] 100.64.1.137:35265 - 30515 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000828863s\n[INFO] 100.64.1.137:50945 - 31091 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001086867s\n[INFO] 100.64.1.137:58909 - 2187 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001204567s\n[INFO] 100.64.1.137:57013 - 52266 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000892916s\n[INFO] 100.64.1.137:42416 - 16935 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001148097s\n[INFO] 100.64.1.137:57481 - 22087 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002247711s\n[INFO] 100.64.1.137:46207 - 38111 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.003176277s\n[INFO] 100.64.1.137:51317 - 53577 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001124854s\n[INFO] 100.64.1.137:45579 - 3551 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000889973s\n[INFO] 100.64.1.137:33632 - 54804 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001151373s\n[INFO] 100.64.1.137:56513 - 58012 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000907129s\n[INFO] 100.64.1.137:57057 - 29108 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. 
udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.000988476s\n[INFO] 100.64.1.137:42513 - 13652 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000904536s\n[INFO] 100.64.1.137:54195 - 22294 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.006713222s\n[INFO] 100.64.1.137:42773 - 59309 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.004289055s\n[INFO] 100.64.1.137:41182 - 26960 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001077205s\n[INFO] 100.64.1.137:40385 - 65032 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001297086s\n[INFO] 100.64.1.137:38295 - 4764 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001305156s\n[INFO] 100.64.1.137:54668 - 17957 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001077072s\n[INFO] 100.64.1.137:49944 - 39133 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001073639s\n[INFO] 100.64.1.137:36745 - 451 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.00120676s\n[INFO] 100.64.1.137:50313 - 26103 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000920987s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.137:53916 - 9522 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001226746s\n[INFO] 100.64.1.137:58135 - 14534 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00202161s\n[INFO] 100.64.1.137:51781 - 39034 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000984442s\n[INFO] 100.64.1.137:33684 - 44391 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001308154s\n[INFO] 100.64.1.137:45226 - 44739 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.00124608s\n[INFO] 100.64.1.137:56618 - 379 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001189701s\n[INFO] 100.64.1.137:56505 - 50459 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.004901746s\n[INFO] 100.64.1.137:54226 - 41588 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001202787s\n[INFO] 100.64.1.137:35607 - 12545 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000904407s\n[INFO] 100.64.1.137:46821 - 33260 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000893784s\n[INFO] 100.64.1.137:55673 - 32020 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.003484233s\n[INFO] 100.64.1.137:38177 - 33392 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. 
tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001007166s\n[INFO] 100.64.1.137:58283 - 3648 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00114541s\n[INFO] 100.64.1.137:44293 - 30462 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001380922s\n[INFO] 100.64.1.137:44743 - 23688 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.002429191s\n[INFO] 100.64.1.137:48102 - 56362 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001092284s\n[INFO] 100.64.1.137:45587 - 6989 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000906397s\n[INFO] 100.64.1.137:57166 - 33853 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001482655s\n[INFO] 100.64.1.137:51633 - 55029 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001110772s\n[INFO] 100.64.1.137:55285 - 60181 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.002176649s\n[INFO] 100.64.1.137:49039 - 18104 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001119936s\n[INFO] 100.64.1.137:47178 - 7953 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001278767s\n[INFO] 100.64.1.137:57613 - 9394 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001099513s\n[INFO] 100.64.1.137:47203 - 46026 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001234474s\n[INFO] 100.64.1.137:42603 - 9350 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001203921s\n[INFO] 100.64.1.137:45151 - 21704 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001042663s\n[INFO] 100.64.1.137:46187 - 1627 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000957444s\n[INFO] 100.64.1.137:52180 - 43878 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001150127s\n[INFO] 100.64.1.137:51787 - 49030 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.007495834s\n[INFO] 100.64.1.137:40842 - 47116 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001188627s\n[INFO] 100.64.1.137:57189 - 40027 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000980593s\n[INFO] 100.64.1.137:38470 - 25123 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001001482s\n[INFO] 100.64.1.137:44623 - 22260 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000894808s\n[INFO] 100.64.1.137:47887 - 34875 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.00103872s\n[INFO] 100.64.1.137:60677 - 56051 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. 
udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001115803s\n[INFO] 100.64.1.137:52367 - 57491 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000949474s\n[INFO] 100.64.1.137:56481 - 31875 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000921731s\n[INFO] 100.64.1.137:36831 - 5263 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001070376s\n[INFO] 100.64.1.137:54759 - 31591 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.002068428s\n[INFO] 100.64.1.137:39735 - 53181 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00099546s\n[INFO] 100.64.1.137:52371 - 46340 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000875792s\n[INFO] 100.64.1.137:53164 - 17436 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001081509s\n[INFO] 100.64.1.137:36269 - 53361 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001521578s\n[INFO] 100.64.1.137:46513 - 26797 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001020715s\n[INFO] 100.64.1.137:45829 - 23645 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000931488s\n[INFO] 100.64.1.137:59407 - 23181 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001139017s\n[INFO] 100.64.1.137:33316 - 44357 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001127625s\n[INFO] 100.64.1.137:48223 - 28901 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001068207s\n[INFO] 100.64.1.137:41555 - 60828 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000879416s\n[INFO] 100.64.1.137:42891 - 59106 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001042249s\n[INFO] 100.64.1.137:37275 - 64258 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.013099702s\n[INFO] 100.64.1.137:35717 - 19898 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00353826s\n[INFO] 100.64.1.137:41833 - 55384 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000929615s\n[INFO] 100.64.1.137:51313 - 61831 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001112954s\n[INFO] 100.64.1.137:43959 - 33459 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000926042s\n[INFO] 100.64.1.137:48829 - 5743 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001159937s\n[INFO] 100.64.1.137:33726 - 3418 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001401336s\n[INFO] 100.64.1.137:32875 - 52445 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. 
tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000922266s\n[INFO] 100.64.1.137:51569 - 46819 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000806602s\n[INFO] 100.64.1.137:47113 - 28760 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001094368s\n[INFO] 100.64.1.137:50523 - 51168 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001007093s\n[INFO] 100.64.1.137:42511 - 60128 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001080867s\n[INFO] 100.64.1.137:49899 - 17208 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000815218s\n[INFO] 100.64.1.137:44153 - 19066 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001041568s\n[INFO] 100.64.1.137:40874 - 28995 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001020599s\n[INFO] 100.64.1.137:37289 - 4070 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000929445s\n[INFO] 100.64.1.137:56845 - 35668 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002378504s\n[INFO] 100.64.1.137:44723 - 51692 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001155589s\n[INFO] 100.64.1.137:33105 - 22327 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000982081s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: 
custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\nW0111 19:01:55.742825 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Service ended with: too old resource version: 36201 (36635)\nW0111 19:01:55.742825 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Endpoints ended with: too old resource version: 36628 (36635)\nW0111 19:02:01.491972 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Namespace ended with: too old resource version: 36590 (36635)\n[WARNING] No files 
matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.236:49094 - 8279 \"A IN invalid.ec2.internal. udp 38 false 512\" NXDOMAIN qr,rd,ra 38 0.000200713s\n[INFO] 100.64.1.236:44052 - 56004 \"AAAA IN invalid. udp 25 false 512\" NXDOMAIN qr,rd,ra 25 0.000153056s\n[INFO] 100.64.1.236:60261 - 35227 \"A IN invalid.ec2.internal. udp 38 false 512\" NXDOMAIN qr,rd,ra 38 0.00028146s\n[INFO] 100.64.1.236:47796 - 64315 \"AAAA IN invalid. udp 25 false 512\" NXDOMAIN qr,rd,ra 25 0.000201722s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.245:40489 - 21571 \"AAAA IN dns-querier-1.dns-test-service.dns-58.svc.cluster.local.ec2.internal. udp 86 false 512\" NXDOMAIN qr,rd,ra 86 0.001522355s\n[INFO] 100.64.1.245:36184 - 41523 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 68 0.00110795s\n[INFO] 100.64.1.245:40537 - 15583 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.00125651s\n[INFO] 100.64.1.245:57033 - 20735 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.002310987s\n[INFO] 100.64.1.245:33980 - 27735 \"AAAA IN dns-querier-1.dns-test-service.dns-58.svc.cluster.local.ec2.internal. udp 86 false 512\" NXDOMAIN qr,rd,ra 86 0.001149409s\n[INFO] 100.64.1.245:43324 - 14308 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.000902608s\n[INFO] 100.64.1.245:44044 - 9291 \"AAAA IN dns-querier-1.dns-test-service.dns-58.svc.cluster.local.ec2.internal. 
udp 86 false 512\" NXDOMAIN qr,rd,ra 86 0.001010447s\n[INFO] 100.64.1.245:55569 - 63449 \"AAAA IN dns-querier-1. udp 31 false 512\" NXDOMAIN qr,rd,ra 31 0.000215399s\n[INFO] 100.64.1.245:42483 - 15268 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 68 0.001018011s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files 
matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.88:33194 - 11835 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00129583s\n[INFO] 100.64.0.88:52167 - 13275 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.002416229s\n[INFO] 100.64.0.88:38505 - 11720 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00122637s\n[INFO] 100.64.0.88:34737 - 61800 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001115824s\n[INFO] 100.64.0.88:58037 - 39778 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.002240649s\n[INFO] 100.64.0.88:56750 - 23046 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001578673s\n[INFO] 100.64.0.88:50374 - 1163 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001155944s\n[INFO] 100.64.0.88:58565 - 39664 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001138952s\n[INFO] 100.64.0.88:57632 - 38386 \"AAAA IN dns-querier-1.dns-test-service.dns-1736.svc.cluster.local.ec2.internal. udp 88 false 512\" NXDOMAIN qr,rd,ra 88 0.00100667s\n[INFO] 100.64.0.88:33480 - 25787 \"AAAA IN dns-querier-1.ec2.internal. udp 44 false 512\" NXDOMAIN qr,rd,ra 44 0.000175401s\n[INFO] 100.64.0.88:48405 - 22753 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.002320362s\n[INFO] 100.64.0.88:52903 - 17780 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000964701s\n[INFO] 100.64.0.88:45677 - 43388 \"AAAA IN dns-querier-1.dns-test-service.dns-1736.svc.cluster.local.ec2.internal. 
udp 88 false 512\" NXDOMAIN qr,rd,ra 88 0.001216825s\n[INFO] 100.64.0.88:46969 - 34416 \"AAAA IN dns-querier-1.ec2.internal. udp 44 false 512\" NXDOMAIN qr,rd,ra 44 0.000208368s\n[INFO] 100.64.0.88:33235 - 6252 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001105014s\n[INFO] 100.64.0.88:34054 - 1049 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001346762s\n[INFO] 100.64.0.88:58854 - 12175 \"AAAA IN dns-querier-1.dns-test-service.dns-1736.svc.cluster.local.ec2.internal. udp 88 false 512\" NXDOMAIN qr,rd,ra 88 0.001214043s\n[INFO] 100.64.0.88:60304 - 25293 \"AAAA IN dns-querier-1.ec2.internal. udp 44 false 512\" NXDOMAIN qr,rd,ra 44 0.000172239s\n[INFO] 100.64.0.88:42472 - 54019 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.000969178s\n[INFO] 100.64.0.88:60621 - 53508 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000925792s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.27:38111 - 48376 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000620154s\n[INFO] 100.64.1.27:37171 - 1914 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.002423401s\n[INFO] 100.64.1.27:54284 - 51994 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001247007s\n[INFO] 100.64.1.27:35936 - 37661 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000208306s\n[INFO] 100.64.1.27:34327 - 57146 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.001272474s\n[INFO] 100.64.1.27:51115 - 48190 \"A IN dns-test-service. tcp 45 false 65535\" NXDOMAIN qr,rd,ra 45 0.00021276s\n[INFO] 100.64.1.27:47772 - 35345 \"A IN dns-test-service.dns-8433. udp 54 false 4096\" NXDOMAIN qr,rd,ra 54 0.000189518s\n[INFO] 100.64.1.27:55131 - 18114 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.001021636s\n[INFO] 100.64.1.27:49409 - 13441 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001172184s\n[INFO] 100.64.1.27:43017 - 10766 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.007518498s\n[INFO] 100.64.1.27:46445 - 63738 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001164335s\n[INFO] 100.64.1.27:45996 - 3783 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001181211s\n[INFO] 100.64.1.27:48627 - 8935 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. 
tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001404038s\n[INFO] 100.64.1.27:46662 - 6537 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001147321s\n[INFO] 100.64.1.27:53199 - 34433 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000340618s\n[INFO] 100.64.1.27:46149 - 7612 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000542745s\n[INFO] 100.64.1.27:48069 - 28835 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001116223s\n[INFO] 100.64.1.27:52196 - 36585 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001043756s\n[INFO] 100.64.1.27:44089 - 18531 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001146004s\n[INFO] 100.64.1.27:43565 - 34205 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000948192s\n[INFO] 100.64.1.27:50947 - 15165 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001494179s\n[INFO] 100.64.1.27:54251 - 61905 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.00168533s\n[INFO] 100.64.1.27:37337 - 8227 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001685931s\n[INFO] 100.64.1.27:56577 - 21046 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000928615s\n[INFO] 100.64.1.27:56205 - 29389 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000972297s\n[INFO] 100.64.1.27:38607 - 63255 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000308457s\n[INFO] 100.64.1.27:59583 - 63759 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000293162s\n[INFO] 100.64.1.27:58208 - 40440 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.00194504s\n[INFO] 100.64.1.27:39894 - 61384 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000270543s\n[INFO] 100.64.1.27:34715 - 45592 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001127093s\n[INFO] 100.64.1.27:57509 - 22165 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001215018s\n[INFO] 100.64.1.27:51525 - 24180 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000862563s\n[INFO] 100.64.1.27:46605 - 24984 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.00084324s\n[INFO] 100.64.1.27:50911 - 9789 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.000848717s\n[INFO] 100.64.1.27:41341 - 47600 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00126347s\n[INFO] 100.64.1.27:51173 - 52752 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.003273738s\n[INFO] 100.64.1.27:36425 - 57197 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.000903277s\n[INFO] 100.64.1.27:39542 - 62349 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.00111705s\n[INFO] 100.64.1.27:39097 - 46893 \"A IN dns-test-service.dns-8433.svc.ec2.internal. 
tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000835638s\n[INFO] 100.64.1.27:53017 - 52045 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001102055s\n[INFO] 100.64.1.27:41993 - 36589 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.00093685s\n[INFO] 100.64.1.27:39982 - 39491 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.000923514s\n[INFO] 100.64.1.27:43063 - 64357 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000956866s\n[INFO] 100.64.1.27:40946 - 2216 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00101886s\n[INFO] 100.64.1.27:39774 - 49312 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000179687s\n[INFO] 100.64.1.27:42620 - 18721 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001121458s\n[INFO] 100.64.1.27:38150 - 27823 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000186401s\n[INFO] 100.64.1.27:33618 - 25882 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001534699s\n[INFO] 100.64.1.27:53906 - 10681 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.000941096s\n[INFO] 100.64.1.27:34721 - 31034 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.002464337s\n[INFO] 100.64.1.27:35259 - 44539 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.000881415s\n[INFO] 100.64.1.27:55063 - 19254 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.00093718s\n[INFO] 100.64.1.27:40375 - 33331 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.008296083s\n[INFO] 100.64.1.27:49153 - 16148 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001223285s\n[INFO] 100.64.1.27:52325 - 15713 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.00100166s\n[INFO] 100.64.1.27:45120 - 34038 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001436815s\n[INFO] 100.64.1.27:33055 - 40630 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001202605s\n[INFO] 100.64.1.27:42365 - 25174 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.001063943s\n[INFO] 100.64.1.27:55452 - 41198 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001537988s\n[INFO] 100.64.1.27:45671 - 41846 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.00020414s\n[INFO] 100.64.1.27:46579 - 25449 \"A IN dns-test-service. tcp 45 false 65535\" NXDOMAIN qr,rd,ra 45 0.000337175s\n[INFO] 100.64.1.27:37420 - 15442 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001234275s\n[INFO] 100.64.1.27:45885 - 53260 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001395795s\n[INFO] 100.64.1.27:42721 - 43437 \"A IN dns-test-service. 
udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000205922s\n[INFO] 100.64.1.27:51963 - 43941 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000259607s\n[INFO] 100.64.1.27:48160 - 52235 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001182269s\n[INFO] 100.64.1.27:58563 - 57387 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000961852s\n[INFO] 100.64.1.27:41583 - 41931 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.000997257s\n[INFO] 100.64.1.27:52593 - 55745 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.00018347s\n[INFO] 100.64.1.27:48261 - 52803 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001127252s\n[INFO] 100.64.1.27:48710 - 27427 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001137076s\n[INFO] 100.64.1.27:56029 - 21872 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.00090347s\n[INFO] 100.64.1.27:44625 - 22052 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.00091441s\n[INFO] 100.64.1.27:45851 - 36438 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001109273s\n[INFO] 100.64.1.27:40337 - 61294 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000929438s\n[INFO] 100.64.1.27:56013 - 7227 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.00024486s\n[INFO] 100.64.1.27:59192 - 8608 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001103687s\n[INFO] 100.64.1.27:36306 - 12742 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000176495s\n[INFO] 100.64.1.27:35734 - 44090 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001188796s\n[INFO] 100.64.1.27:49845 - 19480 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001205352s\n[INFO] 100.64.1.27:35801 - 4024 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000955012s\n[INFO] 100.64.1.27:52255 - 58820 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000245503s\n[INFO] 100.64.1.27:43217 - 59325 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000641933s\n[INFO] 100.64.1.27:56106 - 23785 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001191104s\n[INFO] 100.64.1.27:53127 - 28937 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001600326s\n[INFO] 100.64.1.27:38223 - 18633 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.00110119s\n[INFO] 100.64.1.27:45356 - 11877 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001200622s\n[INFO] 100.64.1.27:44889 - 17029 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.000885355s\n[INFO] 100.64.1.27:54595 - 22749 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. 
udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001106274s\n[INFO] 100.64.1.27:57093 - 27901 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.005039154s\n[INFO] 100.64.1.27:55825 - 39777 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000932414s\n[INFO] 100.64.1.27:60339 - 63850 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.000997497s\n[INFO] 100.64.1.27:54368 - 17922 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001268046s\n[INFO] 100.64.1.27:51243 - 26118 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000258792s\n[INFO] 100.64.1.27:39197 - 24605 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000204326s\n[INFO] 100.64.1.27:41256 - 47802 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.006734238s\n[INFO] 100.64.1.27:35283 - 37498 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.000970593s\n[INFO] 100.64.1.27:60337 - 39506 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000931855s\n[INFO] 100.64.1.27:38433 - 13623 \"A IN dns-test-service. tcp 45 false 65535\" NXDOMAIN qr,rd,ra 45 0.000207804s\n[INFO] 100.64.1.27:44533 - 22858 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001143322s\n[INFO] 100.64.1.27:60835 - 15602 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001028858s\n[INFO] 100.64.1.27:50575 - 28235 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001147645s\n[INFO] 100.64.1.27:36559 - 64559 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001127592s\n[INFO] 100.64.1.27:42171 - 43951 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001006233s\n[INFO] 100.64.1.27:50314 - 1031 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001015329s\n[INFO] 100.64.1.27:37375 - 6183 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001107548s\n[INFO] 100.64.1.27:55732 - 8065 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001004412s\n[INFO] 100.64.1.27:44149 - 29694 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.005379968s\n[INFO] 100.64.1.27:47760 - 48369 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001226563s\n[INFO] 100.64.1.27:43481 - 53269 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000943525s\n[INFO] 100.64.1.27:35330 - 20878 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.000958088s\n[INFO] 100.64.1.27:59537 - 12175 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000228449s\n[INFO] 100.64.1.27:57958 - 26083 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001123503s\n[INFO] 100.64.1.27:50337 - 10627 \"A IN dns-test-service.dns-8433.ec2.internal. 
tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.003030518s\n[INFO] 100.64.1.27:37923 - 323 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000940633s\n[INFO] 100.64.1.27:46374 - 33244 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001264954s\n[INFO] 100.64.1.27:42625 - 17788 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001001304s\n[INFO] 100.64.1.27:36767 - 3954 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000224222s\n[INFO] 100.64.1.27:37797 - 4740 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.00108361s\n[INFO] 100.64.1.27:39105 - 49154 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001089396s\n[INFO] 100.64.1.27:56773 - 27896 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.00101339s\n[INFO] 100.64.1.27:53387 - 12654 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001271149s\n[INFO] 100.64.1.27:40377 - 14336 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.001044999s\n[INFO] 100.64.1.27:44381 - 40997 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000274463s\n[INFO] 100.64.1.27:59629 - 37688 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.004002064s\n[INFO] 100.64.1.27:38373 - 42840 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.007099729s\n[INFO] 100.64.1.27:46180 - 17080 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001355128s\n[INFO] 100.64.1.27:46828 - 44848 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001217482s\n[INFO] 100.64.1.27:37563 - 50000 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001292496s\n[INFO] 100.64.1.27:40338 - 28606 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000227247s\n[INFO] 100.64.1.27:49728 - 30896 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001004044s\n[INFO] 100.64.1.27:49910 - 23989 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001110711s\n[INFO] 100.64.1.27:60301 - 52111 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001112536s\n[INFO] 100.64.1.27:33813 - 53005 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001104714s\n[INFO] 100.64.1.27:57497 - 59597 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001092155s\n[INFO] 100.64.1.27:36286 - 49293 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001172578s\n[INFO] 100.64.1.27:40833 - 26898 \"A IN dns-test-service. tcp 45 false 65535\" NXDOMAIN qr,rd,ra 45 0.000281543s\n[INFO] 100.64.1.27:37590 - 63045 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001066835s\n[INFO] 100.64.1.27:53199 - 10248 \"A IN dns-test-service.dns-8433.ec2.internal. 
tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.001085283s\n[INFO] 100.64.1.27:42009 - 39438 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.00419754s\n[INFO] 100.64.1.27:48272 - 52091 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001155968s\n[INFO] 100.64.1.27:35587 - 7168 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.002209757s\n[INFO] 100.64.1.27:57526 - 35122 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000250695s\n[INFO] 100.64.1.27:38065 - 35626 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000215439s\n[INFO] 100.64.1.27:57237 - 64609 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001276777s\n[INFO] 100.64.1.27:40723 - 10818 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.001202554s\n[INFO] 100.64.1.27:43061 - 514 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001061582s\n[INFO] 100.64.1.27:41471 - 42408 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.00020994s\n[INFO] 100.64.1.27:58309 - 17050 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.001082885s\n[INFO] 100.64.1.27:44693 - 9750 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.000930956s\n[INFO] 100.64.1.27:33742 - 7276 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001149501s\n[INFO] 100.64.1.27:35509 - 64405 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000925372s\n[INFO] 100.64.1.27:55289 - 20596 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.000841247s\n[INFO] 100.64.1.27:42743 - 7802 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000918648s\n[INFO] 100.64.1.27:50741 - 424 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.001503237s\n[INFO] 100.64.1.27:48114 - 31286 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.000960978s\n[INFO] 100.64.1.27:35637 - 20982 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001088851s\n[INFO] 100.64.1.27:58901 - 28871 \"A IN dns-test-service. tcp 45 false 65535\" NXDOMAIN qr,rd,ra 45 0.000207393s\n[INFO] 100.64.1.27:56277 - 54051 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.000962541s\n[INFO] 100.64.1.27:52079 - 46739 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.000956257s\n[INFO] 100.64.1.27:38621 - 36994 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.001022481s\n[INFO] 100.64.1.27:40739 - 39618 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00101597s\n[INFO] 100.64.1.27:41436 - 32447 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.002395749s\n[INFO] 100.64.1.27:49335 - 39040 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. 
udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001274504s\n[INFO] 100.64.1.27:36199 - 44192 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.014674271s\n[INFO] 100.64.1.27:47021 - 55064 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001059242s\n[INFO] 100.64.1.27:47983 - 3352 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000196169s\n[INFO] 100.64.1.27:43092 - 38300 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001051532s\n[INFO] 100.64.1.27:37347 - 24078 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000894019s\n[INFO] 100.64.1.27:60435 - 57157 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001028802s\n[INFO] 100.64.1.27:38107 - 7889 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.005620165s\n[INFO] 100.64.1.27:38549 - 16290 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000248577s\n[INFO] 100.64.1.27:37637 - 41009 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.001218251s\n[INFO] 100.64.1.27:60790 - 8393 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001392282s\n[INFO] 100.64.1.27:58339 - 58473 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000889732s\n[INFO] 100.64.1.27:40114 - 54179 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000238161s\n[INFO] 100.64.1.27:48953 - 45890 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.001032024s\n[INFO] 100.64.1.27:39657 - 36461 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001151254s\n[INFO] 100.64.1.27:38767 - 33843 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.001006266s\n[INFO] 100.64.1.27:48302 - 46624 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000207741s\n[INFO] 100.64.1.27:35199 - 47129 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000204062s\n[INFO] 100.64.1.27:43992 - 12837 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001041407s\n[INFO] 100.64.1.27:44378 - 2533 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001093464s\n[INFO] 100.64.1.27:35377 - 64072 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000213533s\n[INFO] 100.64.1.27:50691 - 32028 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000963328s\n[INFO] 100.64.1.27:47561 - 35901 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001024031s\n[INFO] 100.64.1.27:43999 - 59449 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.001365758s\n[INFO] 100.64.1.27:34141 - 52406 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001175656s\n[INFO] 100.64.1.27:36883 - 45050 \"A IN dns-test-service.dns-8433.ec2.internal. 
udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001160755s\n[INFO] 100.64.1.27:60003 - 19290 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.00116229s\n[INFO] 100.64.1.27:52041 - 52210 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001279845s\n[INFO] 100.64.1.27:58957 - 36754 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000891884s\n[INFO] 100.64.1.27:46090 - 63643 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000264125s\n[INFO] 100.64.1.27:48467 - 17694 \"A IN dns-test-service. tcp 45 false 65535\" NXDOMAIN qr,rd,ra 45 0.000216041s\n[INFO] 100.64.1.27:54070 - 35796 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001078143s\n[INFO] 100.64.1.27:57574 - 30949 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001049293s\n[INFO] 100.64.1.27:58017 - 61715 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001169904s\n[INFO] 100.64.1.27:35019 - 31281 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.00094972s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.27:50945 - 33186 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000730939s\n[INFO] 100.64.1.27:48765 - 56655 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001091271s\n[INFO] 100.64.1.27:35034 - 46351 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001128308s\n[INFO] 100.64.1.27:44103 - 51503 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.00182003s\n[INFO] 100.64.1.27:39899 - 41199 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001044839s\n[INFO] 100.64.1.27:56346 - 22381 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.000929159s\n[INFO] 100.64.1.27:43757 - 63520 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000189512s\n[INFO] 100.64.1.27:49857 - 62008 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000235469s\n[INFO] 100.64.1.27:49169 - 7876 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001036471s\n[INFO] 100.64.1.27:58695 - 2724 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001038114s\n[INFO] 100.64.1.27:53187 - 52804 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001100542s\n[INFO] 100.64.1.27:48313 - 15036 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001126595s\n[INFO] 100.64.1.27:42731 - 35233 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001188306s\n[INFO] 100.64.1.27:48316 - 23800 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001079765s\n[INFO] 100.64.1.27:40660 - 18040 \"A IN dns-test-service.dns-8433.ec2.internal. 
udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001108884s\n[INFO] 100.64.1.27:59183 - 24632 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001046965s\n[INFO] 100.64.1.27:60423 - 19480 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001206879s\n[INFO] 100.64.1.27:60367 - 30352 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001058828s\n[INFO] 100.64.1.27:42850 - 45343 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000208031s\n[INFO] 100.64.1.27:60576 - 55957 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001255662s\n[INFO] 100.64.1.27:57993 - 34637 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.001839568s\n[INFO] 100.64.1.27:60835 - 48577 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.000940802s\n[INFO] 100.64.1.27:60665 - 37851 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000952935s\n[INFO] 100.64.1.27:45778 - 12767 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001018906s\n[INFO] 100.64.1.27:44151 - 56132 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000219776s\n[INFO] 100.64.1.27:35244 - 50253 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001099714s\n[INFO] 100.64.1.27:44981 - 39949 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001183774s\n[INFO] 100.64.1.27:57433 - 31085 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001020428s\n[INFO] 100.64.1.27:34868 - 57274 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001016885s\n[INFO] 100.64.1.27:55067 - 41818 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001037152s\n[INFO] 100.64.1.27:38360 - 38827 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000212675s\n[INFO] 100.64.1.27:59925 - 42408 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001131598s\n[INFO] 100.64.1.27:41113 - 42604 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000999307s\n[INFO] 100.64.1.27:51866 - 9947 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001114081s\n[INFO] 100.64.1.27:57483 - 62433 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.00118601s\n[INFO] 100.64.1.27:52235 - 42694 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.000800444s\n[INFO] 100.64.1.27:43868 - 61718 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.00104284s\n[INFO] 100.64.1.27:37761 - 1334 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.00177116s\n[INFO] 100.64.1.27:34815 - 51414 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001172675s\n[INFO] 100.64.1.27:40357 - 56566 \"A IN dns-test-service.dns-8433.svc.ec2.internal. 
tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.00097425s\n[INFO] 100.64.1.27:37573 - 58006 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001042064s\n[INFO] 100.64.1.27:51983 - 8494 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.00108743s\n[INFO] 100.64.1.27:46038 - 15511 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000793392s\n[INFO] 100.64.1.27:38887 - 3587 \"A IN dns-test-service. tcp 45 false 65535\" NXDOMAIN qr,rd,ra 45 0.000349888s\n[INFO] 100.64.1.27:53179 - 57396 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001712291s\n[INFO] 100.64.1.27:52089 - 32262 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.001308389s\n[INFO] 100.64.1.27:40794 - 40362 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.000920821s\n[INFO] 100.64.1.27:33301 - 6504 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000981353s\n[INFO] 100.64.1.27:35039 - 28395 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.00171721s\n[INFO] 100.64.1.27:36313 - 12939 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001079447s\n[INFO] 100.64.1.27:58718 - 18091 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.00112729s\n[INFO] 100.64.1.27:34318 - 7787 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001212715s\n[INFO] 100.64.1.27:56475 - 1031 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.000992129s\n[INFO] 100.64.1.27:42249 - 27359 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001059349s\n[INFO] 100.64.1.27:52799 - 11903 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000975357s\n[INFO] 100.64.1.27:41936 - 58467 \"A IN dns-test-service. udp 45 false 4096\" NXDOMAIN qr,rd,ra 45 0.000211401s\n[INFO] 100.64.1.27:35881 - 43827 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001210629s\n[INFO] 100.64.1.27:53225 - 26230 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.00096439s\n[INFO] 100.64.1.27:36514 - 42734 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00104168s\n[INFO] 100.64.1.27:55446 - 38309 \"A IN dns-test-service. udp 57 false 4096\" NXDOMAIN qr,rd,ra 45 0.000204917s\n[INFO] 100.64.1.27:45607 - 38813 \"A IN dns-test-service. tcp 57 false 65535\" NXDOMAIN qr,rd,ra 45 0.00022173s\n[INFO] 100.64.1.27:55708 - 21500 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001115942s\n[INFO] 100.64.1.27:58675 - 26652 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.00109215s\n[INFO] 100.64.1.27:48070 - 38964 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001008481s\n[INFO] 100.64.1.27:38903 - 44116 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. 
tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000955106s\n[INFO] 100.64.1.27:54407 - 23697 \"A IN dns-test-service. tcp 45 false 65535\" NXDOMAIN qr,rd,ra 45 0.000205146s\n[INFO] 100.64.1.27:48734 - 18026 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001258575s\n[INFO] 100.64.1.27:42639 - 35222 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.001081903s\n[INFO] 100.64.1.27:56768 - 6014 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001044779s\n[INFO] 100.64.1.27:58027 - 37991 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000933872s\n[INFO] 100.64.1.27:57257 - 5641 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001104855s\n[INFO] 100.64.1.27:45689 - 35217 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001530166s\n[INFO] 100.64.1.27:53681 - 17246 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.002790242s\n[INFO] 100.64.1.27:57299 - 22398 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.002633261s\n[INFO] 100.64.1.27:60673 - 53489 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001166915s\n[INFO] 100.64.1.27:47141 - 49319 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001173145s\n[INFO] 100.64.1.27:51517 - 56801 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001291175s\n[INFO] 100.64.1.27:36960 - 10844 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001391569s\n[INFO] 100.64.1.27:57391 - 22309 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001212135s\n[INFO] 100.64.1.27:43301 - 27461 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. 
tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.00243367s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.136:38137 - 58791 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.015076743s\n[INFO] 100.64.0.136:34581 - 63270 \"AAAA IN dns-querier-2.dns-test-service-2.dns-5603.svc.cluster.local.ec2.internal. udp 90 false 512\" NXDOMAIN qr,rd,ra 90 0.009561573s\n[INFO] 100.64.0.136:43441 - 46545 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.003449249s\n[INFO] 100.64.0.136:40744 - 44496 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. 
udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001243706s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.146:46077 - 38791 \"AAAA IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000235434s\n[INFO] 100.64.0.146:46077 - 38500 \"A IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000353733s\n[INFO] 100.64.0.146:40876 - 42695 \"AAAA IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000210199s\n[INFO] 100.64.0.146:40876 - 42439 \"A IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000296863s\n[INFO] 100.64.0.146:60434 - 58629 \"A IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000255208s\n[INFO] 100.64.0.146:60434 - 58854 \"AAAA IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000382035s\n[INFO] 100.64.0.146:57236 - 8650 \"AAAA IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000361887s\n[INFO] 100.64.0.146:57236 - 8371 \"A IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000430509s\n[INFO] 100.64.0.146:56266 - 39177 \"AAAA IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.00023834s\n[INFO] 100.64.0.146:56266 - 38900 \"A IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000275564s\n[INFO] 100.64.0.146:42386 - 37560 \"A IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000284487s\n[INFO] 100.64.0.146:42386 - 37809 \"AAAA IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000377786s\n[INFO] 100.64.0.146:56834 - 5253 \"AAAA IN boom-server.ec2.internal. udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000224527s\n[INFO] 100.64.0.146:56834 - 4990 \"A IN boom-server.ec2.internal. 
udp 42 false 512\" NXDOMAIN qr,rd,ra 42 0.000165084s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.182:36389 - 14149 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001205919s\n[INFO] 100.64.0.182:42495 - 64229 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.011269731s\n[INFO] 100.64.0.182:45485 - 35325 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001294201s\n[INFO] 100.64.0.182:60609 - 36765 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001184572s\n[INFO] 100.64.0.182:41619 - 9564 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001197768s\n[INFO] 100.64.0.182:44919 - 61786 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001430048s\n[INFO] 100.64.0.182:59835 - 50073 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001091675s\n[INFO] 100.64.0.182:49439 - 55225 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.00090137s\n[INFO] 100.64.0.182:53218 - 5713 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.063153546s\n[INFO] 100.64.0.182:39822 - 48747 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001278721s\n[INFO] 100.64.0.182:40347 - 4384 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000888869s\n[INFO] 100.64.0.182:39104 - 41070 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001265469s\n[INFO] 100.64.0.182:49717 - 25614 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.00094317s\n[INFO] 100.64.0.182:49033 - 16445 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001298646s\n[INFO] 100.64.0.182:47169 - 23381 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001153043s\n[INFO] 100.64.0.182:42643 - 32668 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. 
tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.00076009s\n[INFO] 100.64.0.182:36905 - 11459 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001031347s\n[INFO] 100.64.0.182:34591 - 16611 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000860953s\n[INFO] 100.64.0.182:42635 - 37787 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000781552s\n[INFO] 100.64.0.182:52494 - 37160 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.000972187s\n[INFO] 100.64.0.182:55913 - 15328 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00075826s\n[INFO] 100.64.0.182:40933 - 8175 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.002153472s\n[INFO] 100.64.0.182:51337 - 10939 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001364019s\n[INFO] 100.64.0.182:35871 - 64060 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001011042s\n[INFO] 100.64.0.182:43479 - 38380 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001208843s\n[INFO] 100.64.0.182:43437 - 64708 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001541416s\n[INFO] 100.64.0.182:59689 - 34328 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001235065s\n[INFO] 100.64.0.182:40721 - 1034 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.00317122s\n[INFO] 100.64.0.182:53249 - 13921 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000951258s\n[INFO] 100.64.0.182:42923 - 50553 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001034949s\n[INFO] 100.64.0.182:35189 - 35096 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000896177s\n[INFO] 100.64.0.182:49887 - 23173 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000882824s\n[INFO] 100.64.0.182:38685 - 48405 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001167754s\n[INFO] 100.64.0.182:40094 - 20941 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001318886s\n[INFO] 100.64.0.182:50941 - 26093 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00091544s\n[INFO] 100.64.0.182:45156 - 2710 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001159125s\n[INFO] 100.64.0.182:34172 - 2848 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001062874s\n[INFO] 100.64.0.182:33899 - 39402 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001054772s\n[INFO] 100.64.0.182:38566 - 25333 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. 
udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001073395s\n[INFO] 100.64.0.182:33698 - 19593 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.00118955s\n[INFO] 100.64.0.182:53373 - 19955 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.053264381s\n[INFO] 100.64.0.182:35921 - 6507 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.002172193s\n[INFO] 100.64.0.182:40935 - 19106 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001088539s\n[INFO] 100.64.0.182:45633 - 55157 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000869477s\n[INFO] 100.64.0.182:38135 - 36712 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001193174s\n[INFO] 100.64.0.182:45169 - 41864 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000971748s\n[INFO] 100.64.0.182:37925 - 46600 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001092108s\n[INFO] 100.64.0.182:43702 - 27708 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001186417s\n[INFO] 100.64.0.182:46801 - 33428 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000977922s\n[INFO] 100.64.0.182:48941 - 30559 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001236763s\n[INFO] 100.64.0.182:33887 - 43824 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000964246s\n[INFO] 100.64.0.182:44613 - 56788 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001096211s\n[INFO] 100.64.0.182:59625 - 63493 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.004831781s\n[INFO] 100.64.0.182:53186 - 19133 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001113293s\n[INFO] 100.64.0.182:34309 - 24285 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000960577s\n[INFO] 100.64.0.182:59913 - 15324 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000982967s\n[INFO] 100.64.0.182:35434 - 46762 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001065922s\n[INFO] 100.64.0.182:43719 - 13134 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.012153489s\n[INFO] 100.64.0.182:40139 - 51207 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001188429s\n[INFO] 100.64.0.182:57751 - 7904 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001138376s\n[INFO] 100.64.0.182:34617 - 63970 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00111692s\n[INFO] 100.64.0.182:42721 - 19031 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. 
udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001077358s\n[INFO] 100.64.0.182:58849 - 49059 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001141656s\n[INFO] 100.64.0.182:60099 - 20155 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001092842s\n[INFO] 100.64.0.182:40327 - 21595 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001003462s\n[INFO] 100.64.0.182:42837 - 44569 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001190766s\n[INFO] 100.64.0.182:58041 - 8356 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001011894s\n[INFO] 100.64.0.182:45621 - 46575 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000934713s\n[INFO] 100.64.0.182:41947 - 6228 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001115397s\n[INFO] 100.64.0.182:35762 - 53629 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001124065s\n[INFO] 100.64.0.182:42583 - 14421 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00226291s\n[INFO] 100.64.0.182:54604 - 8827 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001132173s\n[INFO] 100.64.0.182:47976 - 33487 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001056867s\n[INFO] 100.64.0.182:55599 - 16086 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000764704s\n[INFO] 100.64.0.182:40587 - 29170 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000937936s\n[INFO] 100.64.0.182:57083 - 56284 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00094601s\n[INFO] 100.64.0.182:53379 - 5616 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001157752s\n[INFO] 100.64.0.182:58381 - 54496 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000841125s\n[INFO] 100.64.0.182:52194 - 15015 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.00118401s\n[INFO] 100.64.0.182:60917 - 41343 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000834735s\n[INFO] 100.64.0.182:46645 - 56091 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000870737s\n[INFO] 100.64.0.182:36992 - 27187 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001176716s\n[INFO] 100.64.0.182:58571 - 18135 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001099052s\n[INFO] 100.64.0.182:51413 - 25040 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001422891s\n[INFO] 100.64.0.182:37927 - 63112 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. 
udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001533208s\n[INFO] 100.64.0.182:51321 - 60467 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.002370703s\n[INFO] 100.64.0.182:48881 - 48904 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001064981s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.182:49097 - 38653 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00086458s\n[INFO] 100.64.0.182:45503 - 31515 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000943975s\n[INFO] 100.64.0.182:45478 - 7601 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001276591s\n[INFO] 100.64.0.182:45771 - 29804 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000905672s\n[INFO] 100.64.0.182:41562 - 13889 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.00114479s\n[INFO] 100.64.0.182:45989 - 27502 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000948554s\n[INFO] 100.64.0.182:49695 - 1520 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000984228s\n[INFO] 100.64.0.182:51446 - 39557 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001159349s\n[INFO] 100.64.0.182:55735 - 18498 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002930113s\n[INFO] 100.64.0.182:55690 - 34522 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001037019s\n[INFO] 100.64.0.182:34697 - 5579 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001336333s\n[INFO] 100.64.0.182:57785 - 32282 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.002077774s\n[INFO] 100.64.0.182:59817 - 49370 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.00100835s\n[INFO] 100.64.0.182:37283 - 4343 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001289592s\n[INFO] 100.64.0.182:53945 - 10063 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00088933s\n[INFO] 100.64.0.182:53642 - 1071 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001070168s\n[INFO] 100.64.0.182:48371 - 61127 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001149736s\n[INFO] 100.64.0.182:51447 - 52851 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000813624s\n[INFO] 100.64.0.182:57499 - 40268 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001188804s\n[INFO] 100.64.0.182:34086 - 61444 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. 
udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001170143s\n[INFO] 100.64.0.182:33931 - 31396 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000877088s\n[INFO] 100.64.0.182:48890 - 14368 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001126325s\n[INFO] 100.64.0.182:50053 - 15808 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001017762s\n[INFO] 100.64.0.182:37089 - 36984 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000901624s\n[INFO] 100.64.0.182:55267 - 9047 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000986949s\n[INFO] 100.64.0.182:46186 - 50293 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001099732s\n[INFO] 100.64.0.182:36075 - 55445 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001606709s\n[INFO] 100.64.0.182:50778 - 22905 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001114984s\n[INFO] 100.64.0.182:38147 - 1138 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001877414s\n[INFO] 100.64.0.182:53779 - 25833 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002137009s\n[INFO] 100.64.0.182:59113 - 63905 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000926318s\n[INFO] 100.64.0.182:32833 - 44041 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001169987s\n[INFO] 100.64.0.182:35843 - 27800 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. 
tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000985063s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.73:45561 - 39659 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002136541s\n[INFO] 100.64.1.73:48345 - 41667 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.005581842s\n[INFO] 100.64.1.73:35105 - 56830 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001170328s\n[INFO] 100.64.1.73:36567 - 20681 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000849884s\n[INFO] 100.64.1.73:33019 - 10119 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. 
udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001146422s\n[INFO] 100.64.1.73:59771 - 54976 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000885774s\n[INFO] 100.64.1.73:59966 - 6904 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00113416s\n[INFO] 100.64.1.73:47501 - 12056 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001065491s\n[INFO] 100.64.1.73:51939 - 61788 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.00114132s\n[INFO] 100.64.1.73:59021 - 23964 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.002513596s\n[INFO] 100.64.1.73:53882 - 17034 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001317703s\n[INFO] 100.64.1.73:56357 - 25067 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000854954s\n[INFO] 100.64.1.73:47346 - 37108 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.0011299s\n[INFO] 100.64.1.73:38215 - 42260 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.00092704s\n[INFO] 100.64.1.73:54355 - 44269 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000875524s\n[INFO] 100.64.1.73:36302 - 24047 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001137833s\n[INFO] 100.64.1.73:57365 - 15630 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000860428s\n[INFO] 100.64.1.73:44816 - 7497 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001298022s\n[INFO] 100.64.1.73:52827 - 57577 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000999422s\n[INFO] 100.64.1.73:51915 - 62259 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001028406s\n[INFO] 100.64.1.73:38957 - 27966 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002085624s\n[INFO] 100.64.1.73:40013 - 29974 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001037338s\n[INFO] 100.64.1.73:54193 - 408 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001171683s\n[INFO] 100.64.1.73:49397 - 14725 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000838258s\n[INFO] 100.64.1.73:51351 - 17119 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000878154s\n[INFO] 100.64.1.73:33019 - 43607 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001257466s\n[INFO] 100.64.1.73:36053 - 45839 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.00190232s\n[INFO] 100.64.1.73:54201 - 47892 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. 
udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001785182s\n[INFO] 100.64.1.73:55877 - 31708 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001203939s\n[INFO] 100.64.1.73:59729 - 65461 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000982581s\n[INFO] 100.64.1.74:40742 - 46411 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 90 false 4096\" NXDOMAIN qr,rd,ra 67 0.001376065s\n[INFO] 100.64.1.74:47945 - 30955 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 90 false 65535\" NXDOMAIN qr,rd,ra 67 0.001013638s\n[INFO] 100.64.1.74:53665 - 52131 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001126051s\n[INFO] 100.64.1.74:39353 - 35379 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 78 false 4096\" NXDOMAIN qr,rd,ra 67 0.000972716s\n[INFO] 100.64.1.74:41617 - 47824 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 78 false 65535\" NXDOMAIN qr,rd,ra 67 0.000864807s\n[INFO] 100.64.1.74:49099 - 18721 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001017371s\n[INFO] 100.64.1.74:36355 - 36106 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001287316s\n[INFO] 100.64.1.73:33007 - 62640 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.005405689s\n[INFO] 100.64.1.73:37501 - 2824 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000849286s\n[INFO] 100.64.1.73:59454 - 17659 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.008225334s\n[INFO] 100.64.1.74:46215 - 15359 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 90 false 65535\" NXDOMAIN qr,rd,ra 67 0.000974562s\n[INFO] 100.64.1.74:49133 - 53431 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001273341s\n[INFO] 100.64.1.74:34455 - 37975 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.003960423s\n[INFO] 100.64.1.74:59235 - 30540 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 78 false 65535\" NXDOMAIN qr,rd,ra 67 0.000898708s\n[INFO] 100.64.1.74:52533 - 42597 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001378789s\n[INFO] 100.64.1.73:48257 - 33029 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.00253464s\n[INFO] 100.64.1.73:57480 - 50493 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00117594s\n[INFO] 100.64.1.73:58671 - 63644 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001102752s\n[INFO] 100.64.1.73:34095 - 48971 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001017097s\n[INFO] 100.64.1.73:36121 - 9810 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.003210798s\n[INFO] 100.64.1.74:53425 - 61099 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. 
udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.000950697s\n[INFO] 100.64.1.73:50814 - 63802 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.000988092s\n[INFO] 100.64.1.73:51251 - 48346 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001013249s\n[INFO] 100.64.1.73:35029 - 274 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.003979413s\n[INFO] 100.64.1.73:49847 - 5426 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001073988s\n[INFO] 100.64.1.73:52658 - 8094 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001039906s\n[INFO] 100.64.1.73:42971 - 35630 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000991008s\n[INFO] 100.64.1.73:49427 - 36198 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.004650195s\n[INFO] 100.64.1.73:59933 - 50349 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001588961s\n[INFO] 100.64.1.73:41109 - 4662 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001045926s\n[INFO] 100.64.1.73:40673 - 50947 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001110079s\n[INFO] 100.64.1.73:60515 - 60215 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001249841s\n[INFO] 100.64.1.73:45119 - 65367 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001102924s\n[INFO] 100.64.1.73:56041 - 23199 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.002161933s\n[INFO] 100.64.1.73:50223 - 2169 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001006869s\n[INFO] 100.64.1.73:51250 - 7988 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.00122637s\n[INFO] 100.64.1.73:51307 - 13140 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002338165s\n[INFO] 100.64.1.73:51116 - 30604 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001090879s\n[INFO] 100.64.1.73:60071 - 15148 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001512974s\n[INFO] 100.64.1.73:45283 - 23812 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000979349s\n[INFO] 100.64.1.73:49770 - 60809 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001354833s\n[INFO] 100.64.1.73:54363 - 2433 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001151607s\n[INFO] 100.64.1.73:35271 - 32368 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000943087s\n[INFO] 100.64.1.73:32919 - 19581 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. 
udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001032699s\n[INFO] 100.64.1.73:41515 - 15741 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001051547s\n[INFO] 100.64.1.73:44342 - 33205 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001195774s\n[INFO] 100.64.1.73:52703 - 35725 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000964183s\n[INFO] 100.64.1.73:57132 - 9468 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00111304s\n[INFO] 100.64.1.73:49175 - 60863 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001002953s\n[INFO] 100.64.1.73:56331 - 47954 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000922831s\n[INFO] 100.64.1.73:51835 - 31181 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001201521s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.73:53704 - 13191 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001270025s\n[INFO] 100.64.1.73:41857 - 20351 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.002386422s\n[INFO] 100.64.1.73:32884 - 57246 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.000972778s\n[INFO] 100.64.1.73:43654 - 44682 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001003257s\n[INFO] 100.64.1.73:59059 - 33553 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000922138s\n[INFO] 100.64.1.73:49157 - 51123 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001319062s\n[INFO] 100.64.1.73:35363 - 56275 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001013974s\n[INFO] 100.64.1.73:46953 - 24188 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.000885147s\n[INFO] 100.64.1.73:50620 - 15792 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.00122648s\n[INFO] 100.64.1.73:57384 - 21512 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001000132s\n[INFO] 100.64.1.73:55859 - 22952 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000970127s\n[INFO] 100.64.1.73:42361 - 45019 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001171902s\n[INFO] 100.64.1.73:37550 - 53725 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00576268s\n[INFO] 100.64.1.73:53700 - 13370 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001207033s\n[INFO] 100.64.1.73:47167 - 22082 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. 
tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.002593354s\n[INFO] 100.64.1.73:39289 - 1497 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001135538s\n[INFO] 100.64.1.73:59707 - 6649 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002636339s\n[INFO] 100.64.1.73:39677 - 2551 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001105081s\n[INFO] 100.64.1.73:54870 - 38014 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001008989s\n[INFO] 100.64.1.73:52643 - 51126 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.000960417s\n[INFO] 100.64.1.73:48055 - 9111 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001062941s\n[INFO] 100.64.1.73:50229 - 58528 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001005709s\n[INFO] 100.64.1.73:38487 - 49942 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001755023s\n[INFO] 100.64.1.73:38155 - 39884 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.00110797s\n[INFO] 100.64.1.73:57431 - 33128 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.009548102s\n[INFO] 100.64.1.73:49342 - 33696 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.000975685s\n[INFO] 100.64.1.73:58007 - 48364 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001108904s\n[INFO] 100.64.1.73:53137 - 44213 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001273344s\n[INFO] 100.64.1.73:40209 - 47843 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.00242694s\n[INFO] 100.64.1.73:58918 - 63901 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001032678s\n[INFO] 100.64.1.73:39861 - 3517 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.01590608s\n[INFO] 100.64.1.73:44337 - 20981 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00088025s\n[INFO] 100.64.1.73:55169 - 7007 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000831377s\n[INFO] 100.64.1.73:33866 - 36298 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.000989378s\n[INFO] 100.64.1.73:51979 - 41450 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000907309s\n[INFO] 100.64.1.73:46103 - 20262 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00113406s\n[INFO] 100.64.1.73:37680 - 966 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001273965s\n[INFO] 100.64.1.73:46137 - 6118 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. 
tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002138692s\n[INFO] 100.64.1.73:36699 - 6686 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001190192s\n[INFO] 100.64.1.73:46279 - 8126 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000744138s\n[INFO] 100.64.1.73:56635 - 5090 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000788716s\n[INFO] 100.64.1.73:42725 - 21849 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00105947s\n[INFO] 100.64.1.73:32945 - 36792 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000720118s\n[INFO] 100.64.1.73:43705 - 44051 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000744076s\n[INFO] 100.64.1.73:35192 - 40803 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001042667s\n[INFO] 100.64.1.73:34331 - 57360 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001811717s\n[INFO] 100.64.1.73:33143 - 8352 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000742148s\n[INFO] 100.64.1.73:53227 - 24036 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000772136s\n[INFO] 100.64.1.73:45264 - 22210 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001231596s\n[INFO] 100.64.1.73:46083 - 61564 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000791245s\n[INFO] 100.64.1.73:44741 - 58180 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001143561s\n[INFO] 100.64.1.73:59129 - 60249 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000831381s\n[INFO] 100.64.1.73:34589 - 59961 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000826601s\n[INFO] 100.64.1.73:58401 - 61969 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000703621s\n[INFO] 100.64.1.73:50991 - 25198 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001272648s\n[INFO] 100.64.1.73:42195 - 43861 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000893491s\n[INFO] 100.64.1.73:42575 - 57337 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001341208s\n[INFO] 100.64.1.73:41455 - 62414 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.011592196s\n[INFO] 100.64.1.73:56631 - 54244 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. 
tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001005513s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n==== END logs for container coredns of pod kube-system/coredns-59c969ffb8-57m7v ====\n==== START logs for container coredns of pod kube-system/coredns-59c969ffb8-fqq79 ====\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n.:8053\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] plugin/reload: Running configuration MD5 = 79589d3279972e6a52f3adf92ec29d79\n ______ ____ _ _______\n / ____/___ ________ / __ \\/ | / / ___/\t~ CoreDNS-1.6.3\n / / / __ \\/ ___/ _ \\/ / / / |/ /\\__ \\ \t~ linux/amd64, go1.12.9, 37b9550\n/ /___/ /_/ / / / __/ /_/ / /| /___/ / \n\\____/\\____/_/ \\___/_____/_/ |_//____/ \n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.6:33047 - 2717 \"A IN ip-10-250-27-25.ec2.internal.ec2.internal. udp 59 false 512\" NXDOMAIN qr,rd,ra 59 0.001441808s\n[INFO] 100.64.0.6:44509 - 50349 \"AAAA IN ip-10-250-7-77.ec2.internal.ec2.internal. udp 58 false 512\" NXDOMAIN qr,rd,ra 58 0.001427591s\n[INFO] 100.64.0.6:54415 - 40488 \"A IN ip-10-250-7-77.ec2.internal.ec2.internal. 
udp 58 false 512\" NXDOMAIN qr,rd,ra 58 0.001525229s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\nW0111 16:02:27.627854 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Service ended with: too old resource version: 478 (1668)\nW0111 16:02:27.645238 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Namespace ended with: too old resource version: 188 (1668)\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: 
custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.5:54967 - 520 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.002438092s\n[INFO] 100.64.1.5:37217 - 63341 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.000798562s\n[INFO] 100.64.1.5:41879 - 56913 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.000910012s\n[INFO] 100.64.1.5:40336 - 49046 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.001131304s\n[INFO] 100.64.1.5:39944 - 19295 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.004823004s\n[INFO] 100.64.1.5:38927 - 12868 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.001223031s\n[INFO] 100.64.1.5:57285 - 18020 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.00636657s\n[INFO] 100.64.1.5:48580 - 8301 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 68 0.001182305s\n[INFO] 100.64.1.5:50044 - 10152 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.001162044s\n[INFO] 100.64.1.5:44062 - 24024 \"AAAA IN dns-querier-1. udp 31 false 512\" NXDOMAIN qr,rd,ra 31 0.000182762s\n[INFO] 100.64.1.5:36638 - 3020 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 68 0.001539946s\n[INFO] 100.64.1.5:37221 - 56689 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 68 0.000936173s\n[INFO] 100.64.1.5:34953 - 52365 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.001098226s\n[INFO] 100.64.1.5:58409 - 53805 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.000907633s\n[INFO] 100.64.1.5:58764 - 63808 \"A IN 100-64-1-5.dns-5825.pod.cluster.local.ec2.internal. 
udp 79 false 4096\" NXDOMAIN qr,rd,ra 68 0.001231772s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.94:47154 - 34007 \"AAAA IN localhost.ec2.internal. udp 40 false 512\" NXDOMAIN qr,rd,ra 40 0.000275991s\n[INFO] 100.64.1.94:42140 - 42831 \"A IN localhost.ec2.internal. udp 40 false 512\" NXDOMAIN qr,rd,ra 40 0.000222786s\n[INFO] 100.64.1.94:57233 - 52460 \"AAAA IN localhost.ec2.internal. udp 40 false 512\" NXDOMAIN qr,rd,ra 40 0.000288448s\n[INFO] 100.64.1.94:33609 - 34260 \"AAAA IN localhost.ec2.internal. 
udp 40 false 512\" NXDOMAIN qr,rd,ra 40 0.000364227s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\nE0111 17:11:33.414017 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://100.104.0.1:443/api/v1/namespaces?resourceVersion=14290&timeout=9m25s&timeoutSeconds=565&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.414161 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://100.104.0.1:443/api/v1/endpoints?resourceVersion=14365&timeout=9m22s&timeoutSeconds=562&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.414226 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://100.104.0.1:443/api/v1/services?resourceVersion=14276&timeout=8m23s&timeoutSeconds=503&watch=true: net/http: TLS handshake timeout\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.158:41993 - 55476 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. 
udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001253011s\n[INFO] 100.64.1.158:46353 - 34022 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.002964113s\n[INFO] 100.64.1.158:46200 - 27001 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001077437s\n[INFO] 100.64.1.158:39255 - 10805 \"AAAA IN dns-querier-2.dns-test-service-2.dns-577.svc.cluster.local.ec2.internal. udp 89 false 512\" NXDOMAIN qr,rd,ra 89 0.001050602s\n[INFO] 100.64.1.158:40617 - 6431 \"AAAA IN dns-querier-2. udp 31 false 512\" NXDOMAIN qr,rd,ra 31 0.00018776s\n[INFO] 100.64.1.158:46239 - 43168 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000892371s\n[INFO] 100.64.1.158:51285 - 30877 \"A IN 100-64-1-158.dns-577.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000968174s\n[INFO] 100.64.1.158:57740 - 57441 \"AAAA IN dns-querier-2.dns-test-service-2.dns-577.svc.cluster.local.ec2.internal. udp 89 false 512\" NXDOMAIN qr,rd,ra 89 0.001112444s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.179:34656 - 36579 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001607786s\n[INFO] 100.64.1.179:45501 - 41731 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. 
tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.002261939s\n[INFO] 100.64.1.179:46649 - 29058 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.005230992s\n[INFO] 100.64.1.179:60589 - 45866 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001182063s\n[INFO] 100.64.1.179:51560 - 49888 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001179927s\n[INFO] 100.64.1.179:44587 - 55040 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001024063s\n[INFO] 100.64.1.179:54849 - 57048 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00097674s\n[INFO] 100.64.1.179:53225 - 20276 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.00118501s\n[INFO] 100.64.1.179:50679 - 21717 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000956909s\n[INFO] 100.64.1.179:57992 - 36806 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001039157s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.179:56305 - 57641 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001054174s\n[INFO] 100.64.1.179:44377 - 59649 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00088037s\n[INFO] 100.64.1.179:45052 - 7373 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001263974s\n[INFO] 100.64.1.179:39416 - 40491 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001332263s\n[INFO] 100.64.1.179:46245 - 41285 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.003525632s\n[INFO] 100.64.1.179:34411 - 22878 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001193566s\n[INFO] 100.64.1.179:58093 - 24886 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001039062s\n[INFO] 100.64.1.179:60177 - 30038 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001982988s\n[INFO] 100.64.1.179:36603 - 19567 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001061731s\n[INFO] 100.64.1.179:60642 - 60811 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001019914s\n[INFO] 100.64.1.179:60311 - 62251 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00086603s\n[INFO] 100.64.1.179:42171 - 45737 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001150394s\n[INFO] 100.64.1.179:57157 - 25479 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001170319s\n[INFO] 100.64.1.179:50300 - 51803 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. 
udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.003085635s\n[INFO] 100.64.1.179:40677 - 34763 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001518204s\n[INFO] 100.64.1.179:57637 - 45948 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000840951s\n[INFO] 100.64.1.179:46177 - 47956 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000817421s\n[INFO] 100.64.1.179:46119 - 36633 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000922529s\n[INFO] 100.64.1.179:52006 - 28081 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001151945s\n[INFO] 100.64.1.179:43555 - 12625 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002329825s\n[INFO] 100.64.1.179:44537 - 30089 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.000968287s\n[INFO] 100.64.1.179:50003 - 35241 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001152863s\n[INFO] 100.64.1.179:40193 - 9040 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001247083s\n[INFO] 100.64.1.179:43756 - 43397 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001244679s\n[INFO] 100.64.1.179:40763 - 48549 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001213192s\n[INFO] 100.64.1.179:53694 - 477 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001053048s\n[INFO] 100.64.1.179:43827 - 15226 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000991238s\n[INFO] 100.64.1.179:37663 - 20946 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000878422s\n[INFO] 100.64.1.179:34245 - 40633 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.00096486s\n[INFO] 100.64.1.179:32784 - 40792 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001100808s\n[INFO] 100.64.1.179:49435 - 3057 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.003904415s\n[INFO] 100.64.1.179:37199 - 51011 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.00102997s\n[INFO] 100.64.1.179:35061 - 44824 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000892703s\n[INFO] 100.64.1.179:47861 - 28876 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001003274s\n[INFO] 100.64.1.179:33619 - 19937 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.003084429s\n[INFO] 100.64.1.179:53885 - 58132 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002170208s\n[INFO] 100.64.1.179:54197 - 10060 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. 
udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.00098501s\n[INFO] 100.64.1.179:50036 - 32174 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001255412s\n[INFO] 100.64.1.179:52593 - 45673 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001153209s\n[INFO] 100.64.1.179:55689 - 45417 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000871629s\n[INFO] 100.64.1.179:54944 - 63010 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001207091s\n[INFO] 100.64.1.179:51610 - 29298 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001242964s\n[INFO] 100.64.1.179:50499 - 64311 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001320819s\n[INFO] 100.64.1.179:47290 - 10653 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001130474s\n[INFO] 100.64.1.179:54411 - 17814 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001163915s\n[INFO] 100.64.1.179:33215 - 1865 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.004950243s\n[INFO] 100.64.1.179:41445 - 31122 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000960355s\n[INFO] 100.64.1.179:50847 - 33130 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000839125s\n[INFO] 100.64.1.179:53725 - 12585 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001127643s\n[INFO] 100.64.1.179:51038 - 13255 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001037237s\n[INFO] 100.64.1.179:42473 - 47300 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001257888s\n[INFO] 100.64.1.179:45163 - 7540 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.002254466s\n[INFO] 100.64.1.179:42614 - 54681 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.007458718s\n[INFO] 100.64.1.179:36075 - 28571 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001181448s\n[INFO] 100.64.1.179:39703 - 51188 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.000969012s\n[INFO] 100.64.1.179:38881 - 35732 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001074817s\n[INFO] 100.64.1.179:39995 - 7748 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001034163s\n[INFO] 100.64.1.179:43237 - 3638 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001502709s\n[INFO] 100.64.1.179:46481 - 55220 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000978077s\n[INFO] 100.64.1.179:33748 - 968 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. 
udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001118018s\n[INFO] 100.64.1.179:39535 - 6120 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001274807s\n[INFO] 100.64.1.179:50423 - 49613 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000919033s\n[INFO] 100.64.1.179:54647 - 36325 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.007700495s\n[INFO] 100.64.1.179:51387 - 8722 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000980446s\n[INFO] 100.64.1.179:41118 - 53773 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001363766s\n[INFO] 100.64.1.179:37185 - 48827 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.003207422s\n[INFO] 100.64.1.179:48467 - 10205 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.002303734s\n[INFO] 100.64.1.179:52372 - 33774 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.003299836s\n[INFO] 100.64.1.179:54568 - 39494 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.000986183s\n[INFO] 100.64.1.179:44457 - 40934 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000851722s\n[INFO] 100.64.1.179:49018 - 4163 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.000953583s\n[INFO] 100.64.1.179:48638 - 14667 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001062085s\n[INFO] 100.64.1.179:33217 - 38486 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001032404s\n[INFO] 100.64.1.179:44161 - 59020 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.004059201s\n[INFO] 100.64.1.179:49844 - 19479 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001266924s\n[INFO] 100.64.1.179:35015 - 24632 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000856634s\n[INFO] 100.64.1.179:49885 - 18444 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000890596s\n[INFO] 100.64.1.179:45899 - 15829 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.004990838s\n[INFO] 100.64.1.179:56579 - 41778 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001059696s\n[INFO] 100.64.1.179:57295 - 45147 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.00086112s\n[INFO] 100.64.1.179:42780 - 64105 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001197098s\n[INFO] 100.64.1.179:51962 - 59412 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.014528904s\n[INFO] 100.64.1.179:39769 - 9796 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. 
tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.01911428s\n[INFO] 100.64.1.179:57468 - 36501 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001204814s\n[INFO] 100.64.1.179:48019 - 51250 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001101045s\n[INFO] 100.64.1.179:36358 - 51818 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001225446s\n[INFO] 100.64.1.179:48704 - 18670 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001212733s\n[INFO] 100.64.1.179:41583 - 63586 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.00092169s\n[INFO] 100.64.1.179:34005 - 28762 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.000995656s\n[INFO] 100.64.1.179:38031 - 21639 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001002232s\n[INFO] 100.64.1.179:39763 - 23507 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000937696s\n[INFO] 100.64.1.179:44251 - 30711 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.006245369s\n[INFO] 100.64.1.179:54215 - 30511 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000990961s\n[INFO] 100.64.1.179:42639 - 52272 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001156128s\n[INFO] 100.64.1.179:60427 - 36816 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.002798252s\n[INFO] 100.64.1.179:42945 - 59432 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000884944s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.179:51457 - 18529 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.00214265s\n[INFO] 100.64.1.179:44786 - 30613 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001215497s\n[INFO] 100.64.1.179:49540 - 18949 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001253336s\n[INFO] 100.64.1.179:54979 - 24101 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.00121529s\n[INFO] 100.64.1.179:53672 - 24668 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.00122662s\n[INFO] 100.64.1.179:41841 - 26109 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001339011s\n[INFO] 100.64.1.179:33024 - 1527 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001054628s\n[INFO] 100.64.1.179:43097 - 8234 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001547256s\n[INFO] 100.64.1.179:57627 - 35936 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. 
udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001024735s\n[INFO] 100.64.1.179:43957 - 39417 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000876416s\n[INFO] 100.64.1.179:43973 - 56881 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001167438s\n[INFO] 100.64.1.179:32953 - 52868 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000903064s\n[INFO] 100.64.1.179:46626 - 15909 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001062474s\n[INFO] 100.64.1.179:34969 - 49809 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000886596s\n[INFO] 100.64.1.179:36074 - 4654 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001136161s\n[INFO] 100.64.1.179:46633 - 27270 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001217328s\n[INFO] 100.64.1.179:54083 - 11814 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000973966s\n[INFO] 100.64.1.179:56485 - 32626 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001037234s\n[INFO] 100.64.1.179:47603 - 42019 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.003053334s\n[INFO] 100.64.1.179:60553 - 64635 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000854707s\n[INFO] 100.64.1.179:36707 - 37841 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001256546s\n[INFO] 100.64.1.179:54637 - 7255 \"A IN dns-test-service.dns-5967.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.00140431s\n[INFO] 100.64.1.179:47094 - 29871 \"A IN 100-64-1-179.dns-5967.pod.cluster.local.ec2.internal. 
udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001251045s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No 
files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No 
files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No 
files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No 
files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No 
files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.135:46471 - 1021 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 90 false 4096\" NXDOMAIN qr,rd,ra 67 0.001306915s\n[INFO] 100.64.1.135:52257 - 51101 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 90 false 65535\" NXDOMAIN qr,rd,ra 67 0.002322008s\n[INFO] 100.64.1.135:43214 - 6601 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001237839s\n[INFO] 100.64.1.135:51753 - 11753 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001082905s\n[INFO] 100.64.1.135:55794 - 12738 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 78 false 4096\" NXDOMAIN qr,rd,ra 67 0.001539178s\n[INFO] 100.64.1.135:60125 - 17797 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 78 false 65535\" NXDOMAIN qr,rd,ra 67 0.001042091s\n[INFO] 100.64.1.135:37480 - 62363 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001200361s\n[INFO] 100.64.1.135:58979 - 54908 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000864218s\n[INFO] 100.64.1.135:40315 - 47110 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 90 false 4096\" NXDOMAIN qr,rd,ra 67 0.001149407s\n[INFO] 100.64.1.135:55143 - 52262 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 90 false 65535\" NXDOMAIN qr,rd,ra 67 0.000950296s\n[INFO] 100.64.1.135:43411 - 57982 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001103976s\n[INFO] 100.64.1.135:59721 - 23706 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. 
tcp 78 false 65535\" NXDOMAIN qr,rd,ra 67 0.000996442s\n[INFO] 100.64.1.135:56320 - 33329 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001161681s\n[INFO] 100.64.1.135:39148 - 31514 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 90 false 4096\" NXDOMAIN qr,rd,ra 67 0.001202139s\n[INFO] 100.64.1.135:33053 - 16058 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 90 false 65535\" NXDOMAIN qr,rd,ra 67 0.001018125s\n[INFO] 100.64.1.135:48615 - 54131 \"A IN 100-64-1-135.dns-3162.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001189588s\n[INFO] 100.64.1.135:47297 - 60466 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 78 false 65535\" NXDOMAIN qr,rd,ra 67 0.000982436s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.137:53092 - 3376 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001170545s\n[INFO] 100.64.1.137:43235 - 8528 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002248029s\n[INFO] 100.64.1.137:35828 - 24552 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001041448s\n[INFO] 100.64.1.137:42321 - 29704 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.002636311s\n[INFO] 100.64.1.137:44520 - 55568 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001132834s\n[INFO] 100.64.1.137:37229 - 63725 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000909236s\n[INFO] 100.64.1.137:39053 - 44452 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000988941s\n[INFO] 100.64.1.137:38373 - 22016 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001006274s\n[INFO] 100.64.1.137:48256 - 32224 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001173194s\n[INFO] 100.64.1.137:39571 - 56625 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001010342s\n[INFO] 100.64.1.137:56321 - 57857 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001225346s\n[INFO] 100.64.1.137:42885 - 30977 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000918393s\n[INFO] 100.64.1.137:58773 - 5838 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.005736791s\n[INFO] 100.64.1.137:41556 - 42470 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001116855s\n[INFO] 100.64.1.137:51703 - 27014 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. 
tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000931583s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.137:36335 - 33765 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000991879s\n[INFO] 100.64.1.137:49541 - 12858 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.000987582s\n[INFO] 100.64.1.137:42613 - 18010 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.002016353s\n[INFO] 100.64.1.137:38099 - 31319 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001228466s\n[INFO] 100.64.1.137:34916 - 21862 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.005088092s\n[INFO] 100.64.1.137:49085 - 30532 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001125433s\n[INFO] 100.64.1.137:53593 - 32597 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000886316s\n[INFO] 100.64.1.137:40878 - 1707 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.00120315s\n[INFO] 100.64.1.137:57689 - 6859 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001434214s\n[INFO] 100.64.1.137:46865 - 20144 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001185961s\n[INFO] 100.64.1.137:38843 - 33319 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00092877s\n[INFO] 100.64.1.137:55476 - 7431 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.00105109s\n[INFO] 100.64.1.137:48981 - 46269 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000867883s\n[INFO] 100.64.1.137:58343 - 58240 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001057253s\n[INFO] 100.64.1.137:59817 - 42784 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000949587s\n[INFO] 100.64.1.137:44301 - 15320 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000859894s\n[INFO] 100.64.1.137:58409 - 37145 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000920218s\n[INFO] 100.64.1.137:39353 - 42342 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000850542s\n[INFO] 100.64.1.137:59401 - 33781 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002215712s\n[INFO] 100.64.1.137:49857 - 49805 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001057425s\n[INFO] 100.64.1.137:38626 - 24514 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001248185s\n[INFO] 100.64.1.137:45737 - 4169 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. 
tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001010801s\n[INFO] 100.64.1.137:60985 - 24284 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000963363s\n[INFO] 100.64.1.137:49965 - 50802 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001233143s\n[INFO] 100.64.1.137:49793 - 18475 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001119912s\n[INFO] 100.64.1.137:43292 - 11190 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001071199s\n[INFO] 100.64.1.137:51417 - 16342 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000930714s\n[INFO] 100.64.1.137:56163 - 37055 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001164608s\n[INFO] 100.64.1.137:42975 - 5663 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.000825282s\n[INFO] 100.64.1.137:49985 - 46547 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001124809s\n[INFO] 100.64.1.137:34364 - 21202 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001353175s\n[INFO] 100.64.1.137:41167 - 43895 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000914496s\n[INFO] 100.64.1.137:49301 - 7602 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.000902799s\n[INFO] 100.64.1.137:57059 - 21973 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000970043s\n[INFO] 100.64.1.137:58049 - 43263 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.002847966s\n[INFO] 100.64.1.137:60163 - 32711 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000937788s\n[INFO] 100.64.1.137:47065 - 56572 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.00116875s\n[INFO] 100.64.1.137:49711 - 53581 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001068286s\n[INFO] 100.64.1.137:56081 - 54851 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001085431s\n[INFO] 100.64.1.137:33519 - 32112 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000999081s\n[INFO] 100.64.1.137:51489 - 4648 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001185373s\n[INFO] 100.64.1.137:51755 - 50630 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001024238s\n[INFO] 100.64.1.137:46679 - 43181 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.00101627s\n[INFO] 100.64.1.137:54103 - 20388 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001115858s\n[INFO] 100.64.1.137:40879 - 2501 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. 
tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000984417s\n[INFO] 100.64.1.137:52499 - 40573 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001191212s\n[INFO] 100.64.1.137:33869 - 39187 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000898497s\n[INFO] 100.64.1.137:44593 - 52920 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001075079s\n[INFO] 100.64.1.137:59959 - 53882 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001065153s\n[INFO] 100.64.1.137:38401 - 59034 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002573523s\n[INFO] 100.64.1.137:54757 - 9031 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001272498s\n[INFO] 100.64.1.137:56745 - 35819 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000959573s\n[INFO] 100.64.1.137:59667 - 29283 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000803069s\n[INFO] 100.64.1.137:48729 - 49439 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001020202s\n[INFO] 100.64.1.137:50733 - 30351 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000836628s\n[INFO] 100.64.1.137:58808 - 6932 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.002329171s\n[INFO] 100.64.1.137:34839 - 12084 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.00087358s\n[INFO] 100.64.1.137:50021 - 28108 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001358043s\n[INFO] 100.64.1.137:38825 - 49172 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001181634s\n[INFO] 100.64.1.137:37557 - 27126 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00095593s\n[INFO] 100.64.1.137:41955 - 63464 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001281952s\n[INFO] 100.64.1.137:55501 - 48008 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000865497s\n[INFO] 100.64.1.137:39418 - 19104 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001206393s\n[INFO] 100.64.1.137:44657 - 39005 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.003341433s\n[INFO] 100.64.1.137:44709 - 50584 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000747974s\n[INFO] 100.64.1.137:47451 - 33133 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001259129s\n[INFO] 100.64.1.137:54619 - 63587 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.002305832s\n[INFO] 100.64.1.137:33299 - 30570 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. 
tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000888058s\n[INFO] 100.64.1.137:51551 - 52804 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000783894s\n[INFO] 100.64.1.137:52102 - 16414 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001157011s\n[INFO] 100.64.1.137:60121 - 21566 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000802272s\n[INFO] 100.64.1.137:55241 - 19419 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000862983s\n[INFO] 100.64.1.137:48549 - 63538 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001256389s\n[INFO] 100.64.1.137:36076 - 13771 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001015651s\n[INFO] 100.64.1.137:39377 - 17713 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000810885s\n[INFO] 100.64.1.137:37635 - 10415 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002090519s\n[INFO] 100.64.1.137:59432 - 26439 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001189946s\n[INFO] 100.64.1.137:33202 - 643 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001184666s\n[INFO] 100.64.1.137:55631 - 17438 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001190513s\n[INFO] 100.64.1.137:60341 - 64955 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.00116953s\n[INFO] 100.64.1.137:42305 - 61796 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001090126s\n[INFO] 100.64.1.137:57417 - 1980 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000833485s\n[INFO] 100.64.1.137:57053 - 39780 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.00130203s\n[INFO] 100.64.1.137:49493 - 44344 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000857165s\n[INFO] 100.64.1.137:46641 - 53336 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001245022s\n[INFO] 100.64.1.137:48123 - 61653 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000830827s\n[INFO] 100.64.1.137:40369 - 32185 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001220564s\n[INFO] 100.64.1.137:33165 - 37337 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001232806s\n[INFO] 100.64.1.137:53673 - 58513 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000899176s\n[INFO] 100.64.1.137:43628 - 60983 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001182549s\n[INFO] 100.64.1.137:58843 - 31923 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. 
tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00086415s\n[INFO] 100.64.1.137:38973 - 7725 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.005151086s\n[INFO] 100.64.1.137:34513 - 45179 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001245964s\n[INFO] 100.64.1.137:53239 - 55972 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00077667s\n[INFO] 100.64.1.137:35590 - 20944 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001158219s\n[INFO] 100.64.1.137:44871 - 14746 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001934709s\n[INFO] 100.64.1.137:40701 - 56295 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001090617s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.137:57469 - 33206 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001163048s\n[INFO] 100.64.1.137:33423 - 34647 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002183702s\n[INFO] 100.64.1.137:43187 - 55823 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001249523s\n[INFO] 100.64.1.137:35337 - 55720 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001091142s\n[INFO] 100.64.1.137:58381 - 16603 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.003084969s\n[INFO] 100.64.1.137:44952 - 3595 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001293013s\n[INFO] 100.64.1.137:40659 - 8747 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000908567s\n[INFO] 100.64.1.137:33613 - 41667 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001107524s\n[INFO] 100.64.1.137:60464 - 8885 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.000969622s\n[INFO] 100.64.1.137:60331 - 41416 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001021323s\n[INFO] 100.64.1.137:38441 - 44672 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000904249s\n[INFO] 100.64.1.137:42841 - 15768 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.00120155s\n[INFO] 100.64.1.137:34697 - 17506 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000890256s\n[INFO] 100.64.1.137:55808 - 30516 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001249418s\n[INFO] 100.64.1.137:39439 - 56844 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000999013s\n[INFO] 100.64.1.137:52634 - 39965 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. 
udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001215484s\n[INFO] 100.64.1.137:33271 - 16539 \"A IN dns-test-service-2.dns-4584.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000988496s\n[INFO] 100.64.1.137:39949 - 54468 \"A IN 100-64-1-137.dns-4584.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.00101286s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob 
pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\nE0111 19:01:40.817988 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://100.104.0.1:443/api/v1/endpoints?resourceVersion=36628&timeout=8m41s&timeoutSeconds=521&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.817988 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://100.104.0.1:443/api/v1/endpoints?resourceVersion=36628&timeout=8m41s&timeoutSeconds=521&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.817988 1 reflector.go:283] 
pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://100.104.0.1:443/api/v1/endpoints?resourceVersion=36628&timeout=8m41s&timeoutSeconds=521&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.817988 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://100.104.0.1:443/api/v1/endpoints?resourceVersion=36628&timeout=8m41s&timeoutSeconds=521&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.820291 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://100.104.0.1:443/api/v1/namespaces?resourceVersion=36590&timeout=7m44s&timeoutSeconds=464&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.820291 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://100.104.0.1:443/api/v1/namespaces?resourceVersion=36590&timeout=7m44s&timeoutSeconds=464&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.820291 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://100.104.0.1:443/api/v1/namespaces?resourceVersion=36590&timeout=7m44s&timeoutSeconds=464&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.820291 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://100.104.0.1:443/api/v1/namespaces?resourceVersion=36590&timeout=7m44s&timeoutSeconds=464&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.820514 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://100.104.0.1:443/api/v1/services?resourceVersion=36201&timeout=8m1s&timeoutSeconds=481&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.820514 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://100.104.0.1:443/api/v1/services?resourceVersion=36201&timeout=8m1s&timeoutSeconds=481&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.820514 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://100.104.0.1:443/api/v1/services?resourceVersion=36201&timeout=8m1s&timeoutSeconds=481&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.820514 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://100.104.0.1:443/api/v1/services?resourceVersion=36201&timeout=8m1s&timeoutSeconds=481&watch=true: net/http: TLS handshake timeout\nW0111 19:01:58.344079 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Service ended with: too old resource version: 36201 (36628)\nW0111 19:01:58.344079 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Service ended with: too old resource version: 36201 (36628)\nW0111 19:02:01.493906 1 reflector.go:302] 
pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Namespace ended with: too old resource version: 36590 (36628)\nW0111 19:02:01.493906 1 reflector.go:302] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: watch of *v1.Namespace ended with: too old resource version: 36590 (36628)\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching 
import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching 
import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching 
import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.236:37417 - 52887 \"AAAA IN invalid.ec2.internal. udp 38 false 512\" NXDOMAIN qr,rd,ra 38 0.000211532s\n[INFO] 100.64.1.236:35664 - 35594 \"AAAA IN invalid.ec2.internal. udp 38 false 512\" NXDOMAIN qr,rd,ra 38 0.000125294s\n[INFO] 100.64.1.236:51442 - 12393 \"A IN invalid. udp 25 false 512\" NXDOMAIN qr,rd,ra 25 0.000132728s\n[INFO] 100.64.1.236:50480 - 61141 \"AAAA IN invalid.ec2.internal. udp 38 false 512\" NXDOMAIN qr,rd,ra 38 0.00013981s\n[INFO] 100.64.1.236:33047 - 26247 \"A IN invalid. udp 25 false 512\" NXDOMAIN qr,rd,ra 25 0.000140406s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.245:45302 - 44059 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. udp 91 false 4096\" NXDOMAIN qr,rd,ra 68 0.001233819s\n[INFO] 100.64.1.245:38695 - 28603 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.004071364s\n[INFO] 100.64.1.245:45299 - 19429 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 68 0.000979032s\n[INFO] 100.64.1.245:47940 - 27512 \"AAAA IN dns-querier-1.ec2.internal. udp 44 false 512\" NXDOMAIN qr,rd,ra 44 0.000188548s\n[INFO] 100.64.1.245:35553 - 19366 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 68 0.00093514s\n[INFO] 100.64.1.245:48679 - 16102 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 68 0.000962254s\n[INFO] 100.64.1.245:36749 - 19460 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. tcp 91 false 65535\" NXDOMAIN qr,rd,ra 68 0.005163841s\n[INFO] 100.64.1.245:56936 - 28426 \"AAAA IN dns-querier-1.ec2.internal. udp 44 false 512\" NXDOMAIN qr,rd,ra 44 0.000174735s\n[INFO] 100.64.1.245:43998 - 40472 \"A IN 100-64-1-245.dns-58.pod.cluster.local.ec2.internal. 
udp 79 false 4096\" NXDOMAIN qr,rd,ra 68 0.001012896s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No 
files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.88:51991 - 45183 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001353291s\n[INFO] 100.64.0.88:43789 - 50335 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.004686498s\n[INFO] 100.64.0.88:56426 - 23300 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001276055s\n[INFO] 100.64.0.88:56803 - 28452 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.00096187s\n[INFO] 100.64.0.88:55766 - 55234 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001247029s\n[INFO] 100.64.0.88:39963 - 28198 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.013953245s\n[INFO] 100.64.0.88:43729 - 6315 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001321491s\n[INFO] 100.64.0.88:55493 - 42861 \"AAAA IN dns-querier-1.dns-test-service.dns-1736.svc.cluster.local.ec2.internal. udp 88 false 512\" NXDOMAIN qr,rd,ra 88 0.001146576s\n[INFO] 100.64.0.88:32859 - 32551 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00112878s\n[INFO] 100.64.0.88:48387 - 19021 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.0009712s\n[INFO] 100.64.0.88:42054 - 55120 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001061283s\n[INFO] 100.64.0.88:48461 - 54558 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001161439s\n[INFO] 100.64.0.88:49664 - 33236 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. 
udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001130838s\n[INFO] 100.64.0.88:35171 - 60179 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001293085s\n[INFO] 100.64.0.88:33529 - 6201 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.002574536s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.88:48808 - 44701 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.025157419s\n[INFO] 100.64.0.88:36591 - 49853 \"A IN 100-64-0-88.dns-1736.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001081352s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.27:42985 - 12218 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000698724s\n[INFO] 100.64.1.27:50903 - 62298 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001380096s\n[INFO] 100.64.1.27:36027 - 54775 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000321908s\n[INFO] 100.64.1.27:40149 - 37030 \"A IN dns-test-service.ec2.internal. tcp 58 false 65535\" NXDOMAIN qr,rd,ra 58 0.000309753s\n[INFO] 100.64.1.27:47563 - 58586 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001534584s\n[INFO] 100.64.1.27:49550 - 55008 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001241069s\n[INFO] 100.64.1.27:55590 - 54679 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001166718s\n[INFO] 100.64.1.27:44609 - 3178 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.001799504s\n[INFO] 100.64.1.27:39931 - 54378 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001035575s\n[INFO] 100.64.1.27:51257 - 13379 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001163905s\n[INFO] 100.64.1.27:56513 - 3075 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.001124844s\n[INFO] 100.64.1.27:33121 - 4672 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001128423s\n[INFO] 100.64.1.27:36899 - 9667 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.000969251s\n[INFO] 100.64.1.27:46142 - 35995 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001316841s\n[INFO] 100.64.1.27:47821 - 20539 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. 
tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.002695874s\n[INFO] 100.64.1.27:43204 - 42719 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001226978s\n[INFO] 100.64.1.27:37757 - 15563 \"A IN dns-test-service.ec2.internal. tcp 58 false 65535\" NXDOMAIN qr,rd,ra 58 0.000441579s\n[INFO] 100.64.1.27:41912 - 30136 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001250572s\n[INFO] 100.64.1.27:44927 - 35288 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000980396s\n[INFO] 100.64.1.27:55360 - 19832 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001163733s\n[INFO] 100.64.1.27:45942 - 7419 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.000989585s\n[INFO] 100.64.1.27:49603 - 64498 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001010559s\n[INFO] 100.64.1.27:59159 - 55171 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000882538s\n[INFO] 100.64.1.27:53277 - 43128 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00116694s\n[INFO] 100.64.1.27:59959 - 55767 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000870502s\n[INFO] 100.64.1.27:48282 - 525 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.000236521s\n[INFO] 100.64.1.27:42782 - 7117 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001034399s\n[INFO] 100.64.1.27:46296 - 58026 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000210653s\n[INFO] 100.64.1.27:59713 - 56750 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001099353s\n[INFO] 100.64.1.27:42047 - 17162 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.00086355s\n[INFO] 100.64.1.27:49623 - 57304 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.004444833s\n[INFO] 100.64.1.27:36717 - 14277 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.000979582s\n[INFO] 100.64.1.27:44854 - 37877 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.020602364s\n[INFO] 100.64.1.27:53021 - 518 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000835635s\n[INFO] 100.64.1.27:34977 - 37506 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000916084s\n[INFO] 100.64.1.27:60084 - 12129 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.000222095s\n[INFO] 100.64.1.27:59591 - 17281 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000724993s\n[INFO] 100.64.1.27:33023 - 23873 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.002064519s\n[INFO] 100.64.1.27:46976 - 8417 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001145183s\n[INFO] 100.64.1.27:35279 - 13569 \"A IN dns-test-service.dns-8433.svc.ec2.internal. 
tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.001032353s\n[INFO] 100.64.1.27:49603 - 63649 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.000993051s\n[INFO] 100.64.1.27:33363 - 3265 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001062057s\n[INFO] 100.64.1.27:57497 - 52164 \"A IN dns-test-service.ec2.internal. tcp 58 false 65535\" NXDOMAIN qr,rd,ra 58 0.000258765s\n[INFO] 100.64.1.27:35231 - 65446 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.00097165s\n[INFO] 100.64.1.27:37833 - 7568 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001379475s\n[INFO] 100.64.1.27:41853 - 44342 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.000297843s\n[INFO] 100.64.1.27:42317 - 28886 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000305397s\n[INFO] 100.64.1.27:46741 - 35478 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001087042s\n[INFO] 100.64.1.27:57529 - 30326 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001222952s\n[INFO] 100.64.1.27:32973 - 14870 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001027562s\n[INFO] 100.64.1.27:51839 - 63242 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000216587s\n[INFO] 100.64.1.27:51169 - 42638 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001155609s\n[INFO] 100.64.1.27:51783 - 19380 \"A IN dns-test-service.ec2.internal. tcp 58 false 65535\" NXDOMAIN qr,rd,ra 58 0.000228743s\n[INFO] 100.64.1.27:35785 - 16358 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.001064492s\n[INFO] 100.64.1.27:56808 - 28831 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001088074s\n[INFO] 100.64.1.27:35481 - 23669 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.002115948s\n[INFO] 100.64.1.27:53600 - 4235 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001164117s\n[INFO] 100.64.1.27:42903 - 25904 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000995391s\n[INFO] 100.64.1.27:36961 - 21693 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001017486s\n[INFO] 100.64.1.27:43497 - 55947 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.000267578s\n[INFO] 100.64.1.27:34931 - 61099 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000225831s\n[INFO] 100.64.1.27:57635 - 45643 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001291461s\n[INFO] 100.64.1.27:49911 - 50795 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.004513902s\n[INFO] 100.64.1.27:55249 - 47083 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.000980749s\n[INFO] 100.64.1.27:54221 - 62219 \"A IN dns-test-service.ec2.internal. 
udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000202183s\n[INFO] 100.64.1.27:36387 - 57955 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000922746s\n[INFO] 100.64.1.27:50405 - 16886 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000941479s\n[INFO] 100.64.1.27:42226 - 17982 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001144705s\n[INFO] 100.64.1.27:40485 - 34591 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000894274s\n[INFO] 100.64.1.27:36221 - 22624 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.00025478s\n[INFO] 100.64.1.27:38114 - 12320 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.000962077s\n[INFO] 100.64.1.27:49027 - 62400 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001013169s\n[INFO] 100.64.1.27:45232 - 2016 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.000978712s\n[INFO] 100.64.1.27:34153 - 3456 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000999621s\n[INFO] 100.64.1.27:59295 - 58688 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.000978916s\n[INFO] 100.64.1.27:35370 - 16404 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000220963s\n[INFO] 100.64.1.27:32799 - 39950 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000978163s\n[INFO] 100.64.1.27:53357 - 60224 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001168211s\n[INFO] 100.64.1.27:56769 - 63265 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.002063066s\n[INFO] 100.64.1.27:53573 - 41077 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001204881s\n[INFO] 100.64.1.27:49829 - 35191 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.001016691s\n[INFO] 100.64.1.27:54400 - 60348 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001122957s\n[INFO] 100.64.1.27:41977 - 17829 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001007243s\n[INFO] 100.64.1.27:43835 - 34089 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.00027977s\n[INFO] 100.64.1.27:41555 - 39241 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000743175s\n[INFO] 100.64.1.27:51069 - 13481 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001085594s\n[INFO] 100.64.1.27:42025 - 14263 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000221296s\n[INFO] 100.64.1.27:55311 - 33329 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.000995315s\n[INFO] 100.64.1.27:60881 - 36598 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.001892331s\n[INFO] 100.64.1.27:35469 - 62727 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. 
udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001182822s\n[INFO] 100.64.1.27:40989 - 57474 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.001068469s\n[INFO] 100.64.1.27:47795 - 23412 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.00254746s\n[INFO] 100.64.1.27:36859 - 42650 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000252783s\n[INFO] 100.64.1.27:41157 - 32346 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001099006s\n[INFO] 100.64.1.27:49885 - 22042 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000888732s\n[INFO] 100.64.1.27:47332 - 27194 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.00101159s\n[INFO] 100.64.1.27:33113 - 28634 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001015428s\n[INFO] 100.64.1.27:41193 - 54962 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001133454s\n[INFO] 100.64.1.27:57179 - 45498 \"A IN dns-test-service.ec2.internal. tcp 58 false 65535\" NXDOMAIN qr,rd,ra 58 0.000236193s\n[INFO] 100.64.1.27:55509 - 28985 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.00093792s\n[INFO] 100.64.1.27:48640 - 54865 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.000850656s\n[INFO] 100.64.1.27:47919 - 47628 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.00147907s\n[INFO] 100.64.1.27:56611 - 3345 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.00096764s\n[INFO] 100.64.1.27:53113 - 36766 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000930448s\n[INFO] 100.64.1.27:54048 - 59407 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001130987s\n[INFO] 100.64.1.27:35863 - 49103 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001180402s\n[INFO] 100.64.1.27:42329 - 54255 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000963018s\n[INFO] 100.64.1.27:54491 - 38799 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001388324s\n[INFO] 100.64.1.27:55643 - 34492 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000255409s\n[INFO] 100.64.1.27:53465 - 42088 \"A IN dns-test-service.ec2.internal. tcp 58 false 65535\" NXDOMAIN qr,rd,ra 58 0.000350822s\n[INFO] 100.64.1.27:34561 - 46052 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001116964s\n[INFO] 100.64.1.27:46329 - 64970 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000968171s\n[INFO] 100.64.1.27:43015 - 40982 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000938267s\n[INFO] 100.64.1.27:55357 - 15779 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001174215s\n[INFO] 100.64.1.27:33118 - 5475 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. 
udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001367178s\n[INFO] 100.64.1.27:33583 - 55555 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.008411985s\n[INFO] 100.64.1.27:35581 - 9104 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000262336s\n[INFO] 100.64.1.27:50021 - 54559 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.001121152s\n[INFO] 100.64.1.27:56729 - 55330 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001359426s\n[INFO] 100.64.1.27:51581 - 31417 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001042228s\n[INFO] 100.64.1.27:58724 - 27384 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001224129s\n[INFO] 100.64.1.27:41243 - 32536 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.014068077s\n[INFO] 100.64.1.27:54137 - 22232 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001035068s\n[INFO] 100.64.1.27:55681 - 33954 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000277369s\n[INFO] 100.64.1.27:57363 - 44569 \"A IN dns-test-service.ec2.internal. tcp 58 false 65535\" NXDOMAIN qr,rd,ra 58 0.000263036s\n[INFO] 100.64.1.27:57007 - 31142 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001074622s\n[INFO] 100.64.1.27:53893 - 31701 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.002112303s\n[INFO] 100.64.1.27:35983 - 42037 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.000982151s\n[INFO] 100.64.1.27:35124 - 23557 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.000899642s\n[INFO] 100.64.1.27:56147 - 46564 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000886482s\n[INFO] 100.64.1.27:40033 - 54445 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.000952642s\n[INFO] 100.64.1.27:36683 - 44141 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000914491s\n[INFO] 100.64.1.27:34781 - 33837 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.000850438s\n[INFO] 100.64.1.27:49569 - 60165 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001129756s\n[INFO] 100.64.1.27:42247 - 61605 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001075064s\n[INFO] 100.64.1.27:60119 - 33708 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.00137714s\n[INFO] 100.64.1.27:49506 - 62011 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001088683s\n[INFO] 100.64.1.27:59179 - 28987 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001132945s\n[INFO] 100.64.1.27:50827 - 14530 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000242871s\n[INFO] 100.64.1.27:36223 - 4225 \"A IN dns-test-service.dns-8433.ec2.internal. 
tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001018707s\n[INFO] 100.64.1.27:34239 - 5666 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.006725391s\n[INFO] 100.64.1.27:40570 - 60898 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001160423s\n[INFO] 100.64.1.27:47973 - 6234 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001270196s\n[INFO] 100.64.1.27:55065 - 11386 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.00089538s\n[INFO] 100.64.1.27:58543 - 61732 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000222018s\n[INFO] 100.64.1.27:60389 - 62648 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.000982306s\n[INFO] 100.64.1.27:48369 - 65041 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001076136s\n[INFO] 100.64.1.27:59174 - 41590 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.000406286s\n[INFO] 100.64.1.27:58579 - 26134 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000282357s\n[INFO] 100.64.1.27:34253 - 15830 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001098922s\n[INFO] 100.64.1.27:56217 - 22422 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000902089s\n[INFO] 100.64.1.27:36066 - 27574 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001492323s\n[INFO] 100.64.1.27:53353 - 12118 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.000987174s\n[INFO] 100.64.1.27:57554 - 38446 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001094632s\n[INFO] 100.64.1.27:50135 - 22990 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000911682s\n[INFO] 100.64.1.27:55319 - 37025 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000918791s\n[INFO] 100.64.1.27:57970 - 63692 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.000975171s\n[INFO] 100.64.1.27:50847 - 60471 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000861538s\n[INFO] 100.64.1.27:49717 - 9490 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000843678s\n[INFO] 100.64.1.27:52184 - 42752 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001099926s\n[INFO] 100.64.1.27:48999 - 47904 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.00238379s\n[INFO] 100.64.1.27:44441 - 37600 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.003167213s\n[INFO] 100.64.1.27:51339 - 49912 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001069046s\n[INFO] 100.64.1.27:45413 - 65221 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000218226s\n[INFO] 100.64.1.27:54033 - 4889 \"A IN dns-test-service.dns-8433.ec2.internal. 
tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000974217s\n[INFO] 100.64.1.27:57084 - 46629 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001054424s\n[INFO] 100.64.1.27:59299 - 13166 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.000911773s\n[INFO] 100.64.1.27:50046 - 41869 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.00088086s\n[INFO] 100.64.1.27:44711 - 11537 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.000265817s\n[INFO] 100.64.1.27:41441 - 61617 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.00054596s\n[INFO] 100.64.1.27:54683 - 1233 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.000948392s\n[INFO] 100.64.1.27:44499 - 51313 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001236055s\n[INFO] 100.64.1.27:51901 - 56465 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001198492s\n[INFO] 100.64.1.27:52992 - 46161 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001191089s\n[INFO] 100.64.1.27:43721 - 47601 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001199413s\n[INFO] 100.64.1.27:51275 - 14314 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.000986952s\n[INFO] 100.64.1.27:59091 - 26883 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001132554s\n[INFO] 100.64.1.27:41001 - 31420 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.001324374s\n[INFO] 100.64.1.27:45425 - 50445 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001103274s\n[INFO] 100.64.1.27:59153 - 60701 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001540041s\n[INFO] 100.64.1.27:42883 - 23141 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.000258897s\n[INFO] 100.64.1.27:58449 - 28293 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000240264s\n[INFO] 100.64.1.27:34129 - 17989 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.000955112s\n[INFO] 100.64.1.27:45555 - 7685 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000878185s\n[INFO] 100.64.1.27:55863 - 57765 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.00118031s\n[INFO] 100.64.1.27:34691 - 62917 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.000991596s\n[INFO] 100.64.1.27:37778 - 19997 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.000974271s\n[INFO] 100.64.1.27:59669 - 25149 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001020679s\n[INFO] 100.64.1.27:32881 - 42128 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001246797s\n[INFO] 100.64.1.27:49880 - 53231 \"A IN dns-test-service.dns-8433.svc.ec2.internal. 
udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001139859s\n[INFO] 100.64.1.27:35393 - 13811 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.001030047s\n[INFO] 100.64.1.27:32971 - 12935 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.002055171s\n[INFO] 100.64.1.27:34263 - 29594 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001177647s\n[INFO] 100.64.1.27:46622 - 34746 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001011909s\n[INFO] 100.64.1.27:58254 - 24442 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001205073s\n[INFO] 100.64.1.27:54441 - 8986 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.001315041s\n[INFO] 100.64.1.27:38611 - 16578 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.001058225s\n[INFO] 100.64.1.27:54387 - 59929 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000945205s\n[INFO] 100.64.1.27:45671 - 59910 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.000939446s\n[INFO] 100.64.1.27:39973 - 61407 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000915369s\n[INFO] 100.64.1.27:58678 - 50063 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.000319461s\n[INFO] 100.64.1.27:55595 - 61807 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.005763681s\n[INFO] 100.64.1.27:40158 - 36047 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001453544s\n[INFO] 100.64.1.27:59262 - 63815 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001016109s\n[INFO] 100.64.1.27:54323 - 3431 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001085687s\n[INFO] 100.64.1.27:35648 - 8495 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000292185s\n[INFO] 100.64.1.27:38149 - 30396 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000984338s\n[INFO] 100.64.1.27:55854 - 45612 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.07140518s\n[INFO] 100.64.1.27:43915 - 13581 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.001129494s\n[INFO] 100.64.1.27:48063 - 60379 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001042009s\n[INFO] 100.64.1.27:50649 - 36435 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.00222022s\n[INFO] 100.64.1.27:43724 - 56981 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00088109s\n[INFO] 100.64.1.27:52791 - 31584 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000982275s\n[INFO] 100.64.1.27:50677 - 16739 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.000267559s\n[INFO] 100.64.1.27:44364 - 6435 \"A IN dns-test-service.dns-8433.ec2.internal. 
udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001188553s\n[INFO] 100.64.1.27:54683 - 13028 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 83 false 4096\" NXDOMAIN qr,rd,ra 60 0.001061792s\n[INFO] 100.64.1.27:47601 - 63108 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000949441s\n[INFO] 100.64.1.27:50492 - 13596 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00152951s\n[INFO] 100.64.1.27:51333 - 39919 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000233408s\n[INFO] 100.64.1.27:36167 - 15547 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000938587s\n[INFO] 100.64.1.27:58661 - 8310 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.0009783s\n[INFO] 100.64.1.27:35711 - 14199 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001098053s\n[INFO] 100.64.1.27:52385 - 34368 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.000893391s\n[INFO] 100.64.1.27:40405 - 50298 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001143012s\n[INFO] 100.64.1.27:54583 - 31394 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000892982s\n[INFO] 100.64.1.27:59432 - 28344 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.00025777s\n[INFO] 100.64.1.27:40345 - 33496 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.004802997s\n[INFO] 100.64.1.27:46717 - 23192 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.001051951s\n[INFO] 100.64.1.27:48979 - 29784 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.00091718s\n[INFO] 100.64.1.27:57716 - 14328 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001292086s\n[INFO] 100.64.1.27:44562 - 25200 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001440039s\n[INFO] 100.64.1.27:33863 - 43920 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000248903s\n[INFO] 100.64.1.27:55589 - 49436 \"A IN dns-test-service.ec2.internal. tcp 58 false 65535\" NXDOMAIN qr,rd,ra 58 0.000231361s\n[INFO] 100.64.1.27:37031 - 7761 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.001075018s\n[INFO] 100.64.1.27:46308 - 44338 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.00099151s\n[INFO] 100.64.1.27:43311 - 28651 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001175886s\n[INFO] 100.64.1.27:35333 - 45101 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000250486s\n[INFO] 100.64.1.27:49713 - 34797 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.000984402s\n[INFO] 100.64.1.27:35863 - 41389 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.000944896s\n[INFO] 100.64.1.27:46590 - 46541 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. 
udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001167636s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.27:51888 - 10963 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000235202s\n[INFO] 100.64.1.27:55226 - 40380 \"A IN dns-test-service.dns-8433.ec2.internal. udp 67 false 4096\" NXDOMAIN qr,rd,ra 56 0.00109352s\n[INFO] 100.64.1.27:50361 - 21757 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.001145147s\n[INFO] 100.64.1.27:48503 - 2579 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.001093554s\n[INFO] 100.64.1.27:57255 - 40491 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001193421s\n[INFO] 100.64.1.27:55201 - 63158 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.002130242s\n[INFO] 100.64.1.27:39680 - 3342 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.000987821s\n[INFO] 100.64.1.27:36149 - 18870 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.001074245s\n[INFO] 100.64.1.27:53999 - 38985 \"A IN dns-test-service.dns-8433.svc.ec2.internal. udp 71 false 4096\" NXDOMAIN qr,rd,ra 60 0.002651827s\n[INFO] 100.64.1.27:44275 - 25342 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.000989949s\n[INFO] 100.64.1.27:55925 - 28274 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.000882525s\n[INFO] 100.64.1.27:42613 - 23243 \"A IN dns-test-service.ec2.internal. tcp 70 false 65535\" NXDOMAIN qr,rd,ra 58 0.000384555s\n[INFO] 100.64.1.27:43731 - 2635 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 83 false 65535\" NXDOMAIN qr,rd,ra 60 0.001002971s\n[INFO] 100.64.1.27:39316 - 22330 \"A IN dns-test-service.ec2.internal. udp 58 false 4096\" NXDOMAIN qr,rd,ra 58 0.000231475s\n[INFO] 100.64.1.27:32859 - 48255 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 67 false 65535\" NXDOMAIN qr,rd,ra 56 0.000900009s\n[INFO] 100.64.1.27:35511 - 505 \"A IN dns-test-service.dns-8433.svc.ec2.internal. tcp 71 false 65535\" NXDOMAIN qr,rd,ra 60 0.000836034s\n[INFO] 100.64.1.27:37678 - 27229 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 82 false 4096\" NXDOMAIN qr,rd,ra 71 0.001136043s\n[INFO] 100.64.1.27:50771 - 1600 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.001022698s\n[INFO] 100.64.1.27:56515 - 57582 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001296156s\n[INFO] 100.64.1.27:49236 - 42108 \"A IN dns-test-service.ec2.internal. udp 70 false 4096\" NXDOMAIN qr,rd,ra 58 0.00023538s\n[INFO] 100.64.1.27:37379 - 31804 \"A IN dns-test-service.dns-8433.ec2.internal. udp 79 false 4096\" NXDOMAIN qr,rd,ra 56 0.001343479s\n[INFO] 100.64.1.27:35121 - 36956 \"A IN dns-test-service.dns-8433.ec2.internal. tcp 79 false 65535\" NXDOMAIN qr,rd,ra 56 0.00084631s\n[INFO] 100.64.1.27:56871 - 11196 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. udp 94 false 4096\" NXDOMAIN qr,rd,ra 71 0.001050782s\n[INFO] 100.64.1.27:56293 - 16348 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. 
tcp 94 false 65535\" NXDOMAIN qr,rd,ra 71 0.0008506s\n[INFO] 100.64.1.27:49891 - 16724 \"A IN dns-test-service.ec2.internal. tcp 58 false 65535\" NXDOMAIN qr,rd,ra 58 0.000257237s\n[INFO] 100.64.1.27:50409 - 59227 \"SRV IN _http._tcp.dns-test-service.dns-8433.svc.ec2.internal. tcp 82 false 65535\" NXDOMAIN qr,rd,ra 71 0.001601391s\n[INFO] 100.64.1.27:50053 - 62826 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001111172s\n[INFO] 100.64.1.27:35065 - 55721 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.003462141s\n[INFO] 100.64.1.27:58023 - 35518 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.000989476s\n[INFO] 100.64.1.27:53177 - 11064 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001171084s\n[INFO] 100.64.1.27:34653 - 32562 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001028374s\n[INFO] 100.64.1.27:54287 - 34002 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.002236558s\n[INFO] 100.64.1.27:33133 - 46906 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001379333s\n[INFO] 100.64.1.27:42291 - 21435 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000985684s\n[INFO] 100.64.1.27:58767 - 44167 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001228115s\n[INFO] 100.64.1.27:50837 - 35715 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001249838s\n[INFO] 100.64.1.27:38255 - 60924 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000887851s\n[INFO] 100.64.1.27:37407 - 29859 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001228999s\n[INFO] 100.64.1.27:58897 - 5523 \"A IN 100-64-1-27.dns-8433.pod.cluster.local.ec2.internal. 
tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000849946s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.136:36940 - 8711 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.0013653s\n[INFO] 100.64.0.136:48995 - 39388 \"AAAA IN dns-querier-2.dns-test-service-2.dns-5603.svc.cluster.local.ec2.internal. udp 90 false 512\" NXDOMAIN qr,rd,ra 90 0.001606476s\n[INFO] 100.64.0.136:59596 - 54478 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001368716s\n[INFO] 100.64.0.136:35231 - 29385 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.002285217s\n[INFO] 100.64.0.136:59038 - 45772 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001227225s\n[INFO] 100.64.0.136:53485 - 50924 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001252265s\n[INFO] 100.64.0.136:41333 - 8013 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001915427s\n[INFO] 100.64.0.136:36279 - 49648 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. 
tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001006842s\n[INFO] 100.64.0.136:47949 - 61776 \"AAAA IN dns-querier-2.dns-test-service-2.dns-5603.svc.cluster.local.ec2.internal. udp 90 false 512\" NXDOMAIN qr,rd,ra 90 0.001076353s\n[INFO] 100.64.0.136:49026 - 52830 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001311306s\n[INFO] 100.64.0.136:39231 - 42638 \"A IN 100-64-0-136.dns-5603.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.00144891s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.146:40944 - 61110 \"AAAA IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000215178s\n[INFO] 100.64.0.146:40944 - 60830 \"A IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.00024653s\n[INFO] 100.64.0.146:48097 - 55627 \"AAAA IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000256214s\n[INFO] 100.64.0.146:48097 - 55423 \"A IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000418215s\n[INFO] 100.64.0.146:47854 - 47071 \"AAAA IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000196588s\n[INFO] 100.64.0.146:47854 - 46853 \"A IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000168605s\n[INFO] 100.64.0.146:41622 - 65521 \"A IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.00018617s\n[INFO] 100.64.0.146:41622 - 209 \"AAAA IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000206281s\n[INFO] 100.64.0.146:54682 - 16258 \"AAAA IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000208615s\n[INFO] 100.64.0.146:54682 - 16044 \"A IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000192796s\n[INFO] 100.64.0.146:51173 - 44985 \"AAAA IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000250615s\n[INFO] 100.64.0.146:51173 - 44775 \"A IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000242396s\n[INFO] 100.64.0.146:40815 - 50472 \"AAAA IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000273099s\n[INFO] 100.64.0.146:40815 - 50263 \"A IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000239139s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.146:38227 - 63766 \"AAAA IN boom-server. udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000262009s\n[INFO] 100.64.0.146:38227 - 63549 \"A IN boom-server. 
udp 29 false 512\" NXDOMAIN qr,rd,ra 29 0.000317061s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.182:39183 - 30217 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.010714344s\n[INFO] 100.64.0.182:35233 - 36696 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001349838s\n[INFO] 100.64.0.182:46539 - 10865 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001066575s\n[INFO] 100.64.0.182:44813 - 12933 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00094276s\n[INFO] 100.64.0.182:47600 - 30136 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001289908s\n[INFO] 100.64.0.182:58022 - 62246 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001123423s\n[INFO] 100.64.0.182:48275 - 46790 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001093315s\n[INFO] 100.64.0.182:58326 - 17149 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001191444s\n[INFO] 100.64.0.182:60665 - 32635 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001194764s\n[INFO] 100.64.0.182:51510 - 11250 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.000785486s\n[INFO] 100.64.0.182:53417 - 62139 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001082095s\n[INFO] 100.64.0.182:36242 - 2455 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001152663s\n[INFO] 100.64.0.182:59621 - 52535 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002089862s\n[INFO] 100.64.0.182:52452 - 23631 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001366553s\n[INFO] 100.64.0.182:47301 - 12032 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000872358s\n[INFO] 100.64.0.182:41835 - 43223 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.00109655s\n[INFO] 100.64.0.182:36249 - 43532 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. 
tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000882254s\n[INFO] 100.64.0.182:34320 - 59556 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001135126s\n[INFO] 100.64.0.182:34762 - 8881 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001297012s\n[INFO] 100.64.0.182:34793 - 19004 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.0011668s\n[INFO] 100.64.0.182:55304 - 12480 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001073051s\n[INFO] 100.64.0.182:52631 - 59296 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001184202s\n[INFO] 100.64.0.182:35413 - 34203 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000791269s\n[INFO] 100.64.0.182:36804 - 3758 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001303353s\n[INFO] 100.64.0.182:38803 - 53557 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001803449s\n[INFO] 100.64.0.182:50303 - 40509 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000990675s\n[INFO] 100.64.0.182:36625 - 64768 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001027978s\n[INFO] 100.64.0.182:37871 - 23946 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002423451s\n[INFO] 100.64.0.182:35151 - 60578 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001146424s\n[INFO] 100.64.0.182:38313 - 62018 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000806252s\n[INFO] 100.64.0.182:57317 - 683 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001220375s\n[INFO] 100.64.0.182:55371 - 16646 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00075318s\n[INFO] 100.64.0.182:58083 - 17888 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000722667s\n[INFO] 100.64.0.182:51188 - 9790 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001080439s\n[INFO] 100.64.0.182:59535 - 14942 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000829803s\n[INFO] 100.64.0.182:47038 - 30966 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001154443s\n[INFO] 100.64.0.182:36181 - 36118 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000784038s\n[INFO] 100.64.0.182:56499 - 16957 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000825157s\n[INFO] 100.64.0.182:55471 - 30047 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. 
tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000828726s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.0.182:45736 - 787 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001213843s\n[INFO] 100.64.0.182:55255 - 50867 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001075072s\n[INFO] 100.64.0.182:47451 - 21963 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001116492s\n[INFO] 100.64.0.182:38880 - 45143 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001243638s\n[INFO] 100.64.0.182:59428 - 10699 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.00120814s\n[INFO] 100.64.0.182:56866 - 57888 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001239657s\n[INFO] 100.64.0.182:40273 - 63040 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000832464s\n[INFO] 100.64.0.182:53206 - 33057 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001092602s\n[INFO] 100.64.0.182:45139 - 42767 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000863678s\n[INFO] 100.64.0.182:45425 - 16433 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000870718s\n[INFO] 100.64.0.182:46453 - 12252 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002258136s\n[INFO] 100.64.0.182:51316 - 48884 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001228328s\n[INFO] 100.64.0.182:33101 - 2096 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001306605s\n[INFO] 100.64.0.182:36295 - 3109 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001421321s\n[INFO] 100.64.0.182:45138 - 51242 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.00127362s\n[INFO] 100.64.0.182:45823 - 47545 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000957745s\n[INFO] 100.64.0.182:42692 - 65 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001562358s\n[INFO] 100.64.0.182:49307 - 37594 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001174146s\n[INFO] 100.64.0.182:44671 - 39034 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.000996441s\n[INFO] 100.64.0.182:54185 - 10130 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.00123326s\n[INFO] 100.64.0.182:36659 - 60210 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000897043s\n[INFO] 100.64.0.182:56304 - 51308 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. 
udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001669959s\n[INFO] 100.64.0.182:51995 - 30683 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.004383274s\n[INFO] 100.64.0.182:41083 - 49351 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000947606s\n[INFO] 100.64.0.182:33880 - 7982 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.0037s\n[INFO] 100.64.0.182:60925 - 46055 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001235685s\n[INFO] 100.64.0.182:34511 - 61108 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000998647s\n[INFO] 100.64.0.182:52772 - 64515 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001438408s\n[INFO] 100.64.0.182:52953 - 51745 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.00102787s\n[INFO] 100.64.0.182:38563 - 34904 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.002786994s\n[INFO] 100.64.0.182:44027 - 40056 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002473015s\n[INFO] 100.64.0.182:58416 - 56080 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001146529s\n[INFO] 100.64.0.182:47079 - 61232 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001091649s\n[INFO] 100.64.0.182:38495 - 3657 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.00117051s\n[INFO] 100.64.0.182:59225 - 4692 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.00113622s\n[INFO] 100.64.0.182:45357 - 30878 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001026616s\n[INFO] 100.64.0.182:39898 - 17705 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.002223637s\n[INFO] 100.64.0.182:53395 - 2249 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001075602s\n[INFO] 100.64.0.182:44970 - 38881 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.00120691s\n[INFO] 100.64.0.182:37337 - 23425 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001012695s\n[INFO] 100.64.0.182:35260 - 62443 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.002633761s\n[INFO] 100.64.0.182:60953 - 55326 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001162357s\n[INFO] 100.64.0.182:36502 - 55406 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001221006s\n[INFO] 100.64.0.182:32939 - 36028 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001711563s\n[INFO] 100.64.0.182:45949 - 58781 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. 
tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.00099628s\n[INFO] 100.64.0.182:60793 - 9269 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001407362s\n[INFO] 100.64.0.182:53059 - 22691 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000935676s\n[INFO] 100.64.0.182:41906 - 44626 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001217601s\n[INFO] 100.64.0.182:38207 - 266 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001101121s\n[INFO] 100.64.0.182:37925 - 50346 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001028807s\n[INFO] 100.64.0.182:46755 - 31908 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.000874163s\n[INFO] 100.64.0.182:46315 - 20167 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.003965474s\n[INFO] 100.64.0.182:53716 - 36191 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001023773s\n[INFO] 100.64.0.182:54917 - 17701 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001276654s\n[INFO] 100.64.0.182:55687 - 65041 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001066507s\n[INFO] 100.64.0.182:48395 - 32375 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001090816s\n[INFO] 100.64.0.182:57285 - 1533 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001165908s\n[INFO] 100.64.0.182:60563 - 54651 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001227617s\n[INFO] 100.64.0.182:50103 - 11731 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001381249s\n[INFO] 100.64.0.182:41197 - 9713 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001018142s\n[INFO] 100.64.0.182:41835 - 14989 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.00095739s\n[INFO] 100.64.0.182:55427 - 43560 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.006077568s\n[INFO] 100.64.0.182:58695 - 30192 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001030475s\n[INFO] 100.64.0.182:37253 - 2728 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001000898s\n[INFO] 100.64.0.182:47647 - 65103 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001077253s\n[INFO] 100.64.0.182:59576 - 7126 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001088065s\n[INFO] 100.64.0.182:38056 - 16036 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001104688s\n[INFO] 100.64.0.182:49773 - 580 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. 
tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001002303s\n[INFO] 100.64.0.182:38192 - 37212 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001339625s\n[INFO] 100.64.0.182:46517 - 31248 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001038141s\n[INFO] 100.64.0.182:39007 - 15704 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.000934012s\n[INFO] 100.64.0.182:51402 - 47303 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001081251s\n[INFO] 100.64.0.182:46015 - 51961 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001470044s\n[INFO] 100.64.0.182:47085 - 57113 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.005877254s\n[INFO] 100.64.0.182:58685 - 12753 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001014241s\n[INFO] 100.64.0.182:43015 - 30539 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001179629s\n[INFO] 100.64.0.182:47533 - 36272 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000956109s\n[INFO] 100.64.0.182:33054 - 42958 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001206185s\n[INFO] 100.64.0.182:55661 - 64134 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001105429s\n[INFO] 100.64.0.182:43433 - 48678 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.000970273s\n[INFO] 100.64.0.182:41448 - 59926 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.000972734s\n[INFO] 100.64.0.182:52607 - 56482 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000906415s\n[INFO] 100.64.0.182:46011 - 13346 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001152744s\n[INFO] 100.64.0.182:33217 - 39674 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001010482s\n[INFO] 100.64.0.182:52438 - 61282 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001124177s\n[INFO] 100.64.0.182:39087 - 54423 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.001079572s\n[INFO] 100.64.0.182:41356 - 25519 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001128072s\n[INFO] 100.64.0.182:54977 - 38191 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 87 false 65535\" NXDOMAIN qr,rd,ra 76 0.001034297s\n[INFO] 100.64.0.182:48657 - 45420 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. tcp 99 false 65535\" NXDOMAIN qr,rd,ra 76 0.002247328s\n[INFO] 100.64.0.182:46833 - 1060 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.001095375s\n[INFO] 100.64.0.182:45261 - 2196 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. 
udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001140389s\n[INFO] 100.64.0.182:46465 - 8853 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001275813s\n[INFO] 100.64.0.182:48621 - 61745 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.000983189s\n[INFO] 100.64.0.182:40509 - 52440 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.00122519s\n[INFO] 100.64.0.182:40499 - 7339 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 87 false 4096\" NXDOMAIN qr,rd,ra 76 0.001126368s\n[INFO] 100.64.0.182:43102 - 64554 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001265074s\n[INFO] 100.64.0.182:42033 - 20920 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.001016612s\n[INFO] 100.64.0.182:44795 - 22829 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.000961279s\n[INFO] 100.64.0.182:53865 - 27981 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 93 false 65535\" NXDOMAIN qr,rd,ra 70 0.00096931s\n[INFO] 100.64.0.182:60515 - 64994 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001074017s\n[INFO] 100.64.0.182:51659 - 36804 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.002306065s\n[INFO] 100.64.0.182:34954 - 41289 \"A IN dns-test-service-2.dns-3324.svc.cluster.local.ec2.internal. udp 99 false 4096\" NXDOMAIN qr,rd,ra 76 0.001325599s\n[INFO] 100.64.0.182:60832 - 62465 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 93 false 4096\" NXDOMAIN qr,rd,ra 70 0.001192318s\n[INFO] 100.64.0.182:52929 - 12980 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. udp 81 false 4096\" NXDOMAIN qr,rd,ra 70 0.001405515s\n[INFO] 100.64.0.182:40049 - 23908 \"A IN 100-64-0-182.dns-3324.pod.cluster.local.ec2.internal. 
tcp 81 false 65535\" NXDOMAIN qr,rd,ra 70 0.003467162s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.58:51709 - 5418 \"A IN tolerate-unready.services-7215.svc.cluster.local.ec2.internal. udp 79 false 512\" NXDOMAIN qr,rd,ra 79 0.001343464s\n[INFO] 100.64.1.58:51709 - 5722 \"AAAA IN tolerate-unready.services-7215.svc.cluster.local.ec2.internal. udp 79 false 512\" NXDOMAIN qr,rd,ra 79 0.00140385s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.58:49860 - 29045 \"AAAA IN tolerate-unready.services-7215.svc.cluster.local.ec2.internal. udp 79 false 512\" NXDOMAIN qr,rd,ra 79 0.00124632s\n[INFO] 100.64.1.58:49860 - 28808 \"A IN tolerate-unready.services-7215.svc.cluster.local.ec2.internal. udp 79 false 512\" NXDOMAIN qr,rd,ra 79 0.001275633s\n[INFO] 100.64.1.58:55739 - 1125 \"A IN tolerate-unready.services-7215.svc.cluster.local.ec2.internal. 
udp 79 false 512\" NXDOMAIN qr,rd,ra 79 0.001032265s\n[INFO] 100.64.1.58:55739 - 1512 \"AAAA IN tolerate-unready.services-7215.svc.cluster.local.ec2.internal. udp 79 false 512\" NXDOMAIN qr,rd,ra 79 0.001279898s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.73:50898 - 34507 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001136543s\n[INFO] 100.64.1.73:35159 - 57123 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001243901s\n[INFO] 100.64.1.73:53877 - 26130 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.002325398s\n[INFO] 100.64.1.73:37449 - 4896 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.00121409s\n[INFO] 100.64.1.73:49693 - 42828 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001580572s\n[INFO] 100.64.1.73:40209 - 64071 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001233099s\n[INFO] 100.64.1.73:50993 - 42718 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001043223s\n[INFO] 100.64.1.73:35408 - 9505 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001107138s\n[INFO] 100.64.1.73:42939 - 14657 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001922722s\n[INFO] 100.64.1.73:50149 - 60091 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.000963788s\n[INFO] 100.64.1.73:37169 - 40716 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000868598s\n[INFO] 100.64.1.73:46337 - 54463 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000964844s\n[INFO] 100.64.1.73:57917 - 22814 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001150837s\n[INFO] 100.64.1.73:59334 - 45430 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001247531s\n[INFO] 100.64.1.73:45461 - 59663 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.012240664s\n[INFO] 100.64.1.73:48442 - 6601 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001031691s\n[INFO] 100.64.1.73:43185 - 10099 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001294015s\n[INFO] 100.64.1.73:53229 - 60178 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000912067s\n[INFO] 100.64.1.73:53583 - 12107 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001099968s\n[INFO] 100.64.1.73:58838 - 29494 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. 
udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001175657s\n[INFO] 100.64.1.73:35025 - 39390 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000844691s\n[INFO] 100.64.1.73:47521 - 25276 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001494171s\n[INFO] 100.64.1.73:50837 - 30428 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000954309s\n[INFO] 100.64.1.73:41409 - 32436 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001071153s\n[INFO] 100.64.1.73:33688 - 27868 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001144695s\n[INFO] 100.64.1.73:44835 - 1710 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000825099s\n[INFO] 100.64.1.74:48819 - 57283 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000992355s\n[INFO] 100.64.1.73:42183 - 61200 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001127962s\n[INFO] 100.64.1.73:56576 - 63208 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001513077s\n[INFO] 100.64.1.74:53708 - 10207 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 90 false 4096\" NXDOMAIN qr,rd,ra 67 0.001170327s\n[INFO] 100.64.1.73:39703 - 38285 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001942158s\n[INFO] 100.64.1.73:43480 - 34677 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001334221s\n[INFO] 100.64.1.73:33771 - 30798 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000996819s\n[INFO] 100.64.1.74:36781 - 26989 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 78 false 4096\" NXDOMAIN qr,rd,ra 67 0.001272523s\n[INFO] 100.64.1.74:33401 - 5716 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.0011407s\n[INFO] 100.64.1.73:50395 - 27877 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001284926s\n[INFO] 100.64.1.73:56303 - 35037 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.002445201s\n[INFO] 100.64.1.74:33654 - 11508 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. udp 90 false 4096\" NXDOMAIN qr,rd,ra 67 0.001172756s\n[INFO] 100.64.1.74:40991 - 61588 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 90 false 65535\" NXDOMAIN qr,rd,ra 67 0.001140872s\n[INFO] 100.64.1.74:38231 - 17228 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001076575s\n[INFO] 100.64.1.74:33523 - 22380 \"A IN 100-64-1-74.dns-5429.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001018005s\n[INFO] 100.64.1.73:44665 - 44871 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.001148114s\n[INFO] 100.64.1.74:37407 - 62141 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. 
udp 78 false 4096\" NXDOMAIN qr,rd,ra 67 0.000964971s\n[INFO] 100.64.1.74:47271 - 40443 \"A IN kubernetes.default.svc.cluster.local.ec2.internal. tcp 78 false 65535\" NXDOMAIN qr,rd,ra 67 0.000967853s\n[INFO] 100.64.1.73:42639 - 31453 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001101208s\n[INFO] 100.64.1.73:53339 - 38249 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001185871s\n[INFO] 100.64.1.73:39189 - 56712 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000913274s\n[INFO] 100.64.1.73:50500 - 30478 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001145473s\n[INFO] 100.64.1.73:56863 - 37639 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000956549s\n[INFO] 100.64.1.73:39038 - 63115 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001348821s\n[INFO] 100.64.1.73:46338 - 29087 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001028248s\n[INFO] 100.64.1.73:53412 - 867 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.003614508s\n[INFO] 100.64.1.73:57382 - 22655 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001289756s\n[INFO] 100.64.1.73:53960 - 54647 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001126653s\n[INFO] 100.64.1.73:58283 - 60444 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001009119s\n[INFO] 100.64.1.73:38441 - 8443 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.006041734s\n[INFO] 100.64.1.73:41556 - 33160 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00105842s\n[INFO] 100.64.1.73:59973 - 45353 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000928052s\n[INFO] 100.64.1.73:37883 - 62817 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001097246s\n[INFO] 100.64.1.73:51331 - 49963 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001210477s\n[INFO] 100.64.1.73:49319 - 30326 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.00093608s\n[INFO] 100.64.1.73:54553 - 10589 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001073289s\n[INFO] 100.64.1.73:33385 - 17749 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000847769s\n[INFO] 100.64.1.73:45871 - 63592 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001259867s\n[INFO] 100.64.1.73:34582 - 46514 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001288454s\n[INFO] 100.64.1.73:40215 - 48522 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. 
udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.003347837s\n[INFO] 100.64.1.73:50583 - 53674 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000862034s\n[INFO] 100.64.1.73:35626 - 6622 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001033816s\n[INFO] 100.64.1.73:57655 - 15979 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000837619s\n[INFO] 100.64.1.73:60615 - 9058 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.00079003s\n[INFO] 100.64.1.73:51163 - 18343 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.00086353s\n[INFO] 100.64.1.73:36315 - 35807 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001122482s\n[INFO] 100.64.1.73:47185 - 3396 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001937342s\n[INFO] 100.64.1.73:49643 - 49115 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001246164s\n[INFO] 100.64.1.73:53899 - 33659 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001060045s\n[INFO] 100.64.1.73:47690 - 26562 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001131766s\n[INFO] 100.64.1.73:43733 - 58088 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000780657s\n[INFO] 100.64.1.73:42189 - 18440 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000916845s\n[INFO] 100.64.1.73:35481 - 20944 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.006045678s\n[INFO] 100.64.1.73:43684 - 12742 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001243064s\n[INFO] 100.64.1.73:37429 - 5643 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001174114s\n[INFO] 100.64.1.73:57775 - 26294 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00107439s\n[INFO] 100.64.1.73:35011 - 51717 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001224223s\n[INFO] 100.64.1.73:36087 - 36261 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000798519s\n[INFO] 100.64.1.73:45977 - 58877 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000914985s\n[INFO] 100.64.1.73:34580 - 11157 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001213927s\n[INFO] 100.64.1.73:32937 - 24020 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000788943s\n[INFO] 100.64.1.73:58226 - 24113 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001215955s\n[INFO] 100.64.1.73:38505 - 8657 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. 
tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000872742s\n[INFO] 100.64.1.73:56777 - 21113 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.00191422s\n[INFO] 100.64.1.73:52325 - 5008 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000879204s\n[INFO] 100.64.1.73:53737 - 54179 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.004862573s\n[INFO] 100.64.1.73:46257 - 38723 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001010552s\n[INFO] 100.64.1.73:42957 - 56187 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00144646s\n[INFO] 100.64.1.73:53587 - 61339 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000886592s\n[INFO] 100.64.1.73:43890 - 45516 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.00121964s\n[INFO] 100.64.1.73:35927 - 16834 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000834026s\n[INFO] 100.64.1.73:34463 - 37083 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000797868s\n[INFO] 100.64.1.73:47803 - 3959 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001402439s\n[INFO] 100.64.1.73:35000 - 26575 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001228825s\n[INFO] 100.64.1.73:39909 - 11119 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000963193s\n[INFO] 100.64.1.73:47665 - 14486 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001260514s\n[INFO] 100.64.1.73:47231 - 43329 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001240516s\n[INFO] 100.64.1.73:35297 - 38848 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000916478s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[INFO] 100.64.1.73:44185 - 62750 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000837189s\n[INFO] 100.64.1.73:37813 - 5525 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.000852461s\n[INFO] 100.64.1.73:58146 - 29791 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001135844s\n[INFO] 100.64.1.73:35210 - 3897 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001203029s\n[INFO] 100.64.1.73:49139 - 44841 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.002236197s\n[INFO] 100.64.1.73:36853 - 34290 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001127767s\n[INFO] 100.64.1.73:49041 - 18834 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. 
tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.001082018s\n[INFO] 100.64.1.73:57753 - 38701 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001033078s\n[INFO] 100.64.1.73:37597 - 45100 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.000994164s\n[INFO] 100.64.1.73:40427 - 40995 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000951499s\n[INFO] 100.64.1.73:46004 - 46569 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001239933s\n[INFO] 100.64.1.73:58677 - 36891 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001107979s\n[INFO] 100.64.1.73:54593 - 21435 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000971639s\n[INFO] 100.64.1.73:51995 - 38899 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.000993995s\n[INFO] 100.64.1.73:45071 - 36446 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001289289s\n[INFO] 100.64.1.73:33979 - 40516 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.002325652s\n[INFO] 100.64.1.73:42141 - 29511 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000878075s\n[INFO] 100.64.1.73:44057 - 52208 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.00102046s\n[INFO] 100.64.1.73:49572 - 9288 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00109807s\n[INFO] 100.64.1.73:44369 - 59368 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001210531s\n[INFO] 100.64.1.73:39193 - 45267 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.004113101s\n[INFO] 100.64.1.73:45236 - 29989 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.00095594s\n[INFO] 100.64.1.73:52385 - 23960 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.006941499s\n[INFO] 100.64.1.73:41513 - 39492 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.001099185s\n[INFO] 100.64.1.73:56340 - 41501 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.008075669s\n[INFO] 100.64.1.73:58551 - 46653 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001130511s\n[INFO] 100.64.1.73:53488 - 54809 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.004340354s\n[INFO] 100.64.1.73:39532 - 11889 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001194789s\n[INFO] 100.64.1.73:41750 - 43317 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001062995s\n[INFO] 100.64.1.73:59743 - 52124 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. 
tcp 85 false 65535\" NXDOMAIN qr,rd,ra 74 0.001330302s\n[INFO] 100.64.1.73:52286 - 33258 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001298131s\n[INFO] 100.64.1.73:50241 - 14295 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.002704418s\n[INFO] 100.64.1.73:38955 - 26638 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.000934241s\n[INFO] 100.64.1.73:37143 - 27206 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.00114138s\n[INFO] 100.64.1.73:59323 - 32358 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001197586s\n[INFO] 100.64.1.73:57397 - 18857 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 85 false 4096\" NXDOMAIN qr,rd,ra 74 0.001251462s\n[INFO] 100.64.1.73:40374 - 3225 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001103628s\n[INFO] 100.64.1.73:40423 - 16623 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 80 false 65535\" NXDOMAIN qr,rd,ra 69 0.000987465s\n[INFO] 100.64.1.73:41391 - 57410 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. udp 97 false 4096\" NXDOMAIN qr,rd,ra 74 0.000983557s\n[INFO] 100.64.1.73:53497 - 62562 \"A IN dns-test-service.dns-8876.svc.cluster.local.ec2.internal. tcp 97 false 65535\" NXDOMAIN qr,rd,ra 74 0.0011032s\n[INFO] 100.64.1.73:44544 - 14351 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. udp 92 false 4096\" NXDOMAIN qr,rd,ra 69 0.001106571s\n[INFO] 100.64.1.73:52345 - 64431 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. tcp 92 false 65535\" NXDOMAIN qr,rd,ra 69 0.001145477s\n[INFO] 100.64.1.73:43584 - 3602 \"A IN 100-64-1-73.dns-8876.pod.cluster.local.ec2.internal. 
udp 80 false 4096\" NXDOMAIN qr,rd,ra 69 0.001286792s\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n[WARNING] No files matching import glob pattern: custom/*.override\n[WARNING] No files matching import glob pattern: custom/*.server\n==== END logs for container coredns of pod kube-system/coredns-59c969ffb8-fqq79 ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-nn5px ====\nI0111 15:55:59.748417 1 flags.go:33] FLAG: --add-dir-header=\"false\"\nI0111 15:55:59.748465 1 flags.go:33] FLAG: --alsologtostderr=\"false\"\nI0111 15:55:59.748470 1 flags.go:33] FLAG: --application-metrics-count-limit=\"100\"\nI0111 15:55:59.748474 1 flags.go:33] FLAG: --azure-container-registry-config=\"\"\nI0111 15:55:59.748479 1 flags.go:33] FLAG: --bind-address=\"0.0.0.0\"\nI0111 15:55:59.748486 1 flags.go:33] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0111 15:55:59.748494 1 flags.go:33] FLAG: --cleanup=\"false\"\nI0111 15:55:59.748499 1 flags.go:33] FLAG: --cleanup-ipvs=\"true\"\nI0111 15:55:59.748504 1 flags.go:33] FLAG: --cloud-provider-gce-lb-src-cidrs=\"130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16\"\nI0111 15:55:59.748511 1 flags.go:33] FLAG: --cluster-cidr=\"\"\nI0111 15:55:59.748515 1 flags.go:33] FLAG: --config=\"/var/lib/kube-proxy-config/config.yaml\"\nI0111 15:55:59.748519 1 flags.go:33] FLAG: --config-sync-period=\"15m0s\"\nI0111 15:55:59.748525 1 flags.go:33] FLAG: --conntrack-max-per-core=\"32768\"\nI0111 15:55:59.748531 1 flags.go:33] FLAG: --conntrack-min=\"131072\"\nI0111 15:55:59.748535 1 flags.go:33] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0111 15:55:59.748539 1 flags.go:33] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0111 15:55:59.748543 1 flags.go:33] FLAG: --container-hints=\"/etc/cadvisor/container_hints.json\"\nI0111 15:55:59.748548 1 flags.go:33] FLAG: --containerd=\"/run/containerd/containerd.sock\"\nI0111 15:55:59.748553 1 flags.go:33] FLAG: --containerd-namespace=\"k8s.io\"\nI0111 15:55:59.748557 1 flags.go:33] FLAG: --default-not-ready-toleration-seconds=\"300\"\nI0111 15:55:59.748562 1 flags.go:33] FLAG: --default-unreachable-toleration-seconds=\"300\"\nI0111 15:55:59.748566 1 flags.go:33] FLAG: --disable-root-cgroup-stats=\"false\"\nI0111 15:55:59.748570 1 flags.go:33] FLAG: --docker=\"unix:///var/run/docker.sock\"\nI0111 15:55:59.748574 1 flags.go:33] FLAG: 
--docker-env-metadata-whitelist=\"\"\nI0111 15:55:59.748578 1 flags.go:33] FLAG: --docker-only=\"false\"\nI0111 15:55:59.748582 1 flags.go:33] FLAG: --docker-root=\"/var/lib/docker\"\nI0111 15:55:59.748586 1 flags.go:33] FLAG: --docker-tls=\"false\"\nI0111 15:55:59.748590 1 flags.go:33] FLAG: --docker-tls-ca=\"ca.pem\"\nI0111 15:55:59.748594 1 flags.go:33] FLAG: --docker-tls-cert=\"cert.pem\"\nI0111 15:55:59.748599 1 flags.go:33] FLAG: --docker-tls-key=\"key.pem\"\nI0111 15:55:59.748604 1 flags.go:33] FLAG: --enable-load-reader=\"false\"\nI0111 15:55:59.748607 1 flags.go:33] FLAG: --event-storage-age-limit=\"default=0\"\nI0111 15:55:59.748611 1 flags.go:33] FLAG: --event-storage-event-limit=\"default=0\"\nI0111 15:55:59.748615 1 flags.go:33] FLAG: --feature-gates=\"\"\nI0111 15:55:59.748622 1 flags.go:33] FLAG: --global-housekeeping-interval=\"1m0s\"\nI0111 15:55:59.748649 1 flags.go:33] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0111 15:55:59.748655 1 flags.go:33] FLAG: --healthz-port=\"10256\"\nI0111 15:55:59.748660 1 flags.go:33] FLAG: --help=\"false\"\nI0111 15:55:59.748664 1 flags.go:33] FLAG: --hostname-override=\"\"\nI0111 15:55:59.748668 1 flags.go:33] FLAG: --housekeeping-interval=\"10s\"\nI0111 15:55:59.748673 1 flags.go:33] FLAG: --iptables-masquerade-bit=\"14\"\nI0111 15:55:59.748677 1 flags.go:33] FLAG: --iptables-min-sync-period=\"0s\"\nI0111 15:55:59.748681 1 flags.go:33] FLAG: --iptables-sync-period=\"30s\"\nI0111 15:55:59.748685 1 flags.go:33] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0111 15:55:59.748693 1 flags.go:33] FLAG: --ipvs-min-sync-period=\"0s\"\nI0111 15:55:59.748698 1 flags.go:33] FLAG: --ipvs-scheduler=\"\"\nI0111 15:55:59.748702 1 flags.go:33] FLAG: --ipvs-strict-arp=\"false\"\nI0111 15:55:59.748707 1 flags.go:33] FLAG: --ipvs-sync-period=\"30s\"\nI0111 15:55:59.748711 1 flags.go:33] FLAG: --kube-api-burst=\"10\"\nI0111 15:55:59.748715 1 flags.go:33] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0111 15:55:59.748720 1 flags.go:33] FLAG: --kube-api-qps=\"5\"\nI0111 15:55:59.748726 1 flags.go:33] FLAG: --kubeconfig=\"\"\nI0111 15:55:59.748731 1 flags.go:33] FLAG: --log-backtrace-at=\":0\"\nI0111 15:55:59.748737 1 flags.go:33] FLAG: --log-cadvisor-usage=\"false\"\nI0111 15:55:59.748743 1 flags.go:33] FLAG: --log-dir=\"\"\nI0111 15:55:59.748748 1 flags.go:33] FLAG: --log-file=\"\"\nI0111 15:55:59.748752 1 flags.go:33] FLAG: --log-file-max-size=\"1800\"\nI0111 15:55:59.748757 1 flags.go:33] FLAG: --log-flush-frequency=\"5s\"\nI0111 15:55:59.748762 1 flags.go:33] FLAG: --logtostderr=\"true\"\nI0111 15:55:59.748766 1 flags.go:33] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0111 15:55:59.748773 1 flags.go:33] FLAG: --masquerade-all=\"false\"\nI0111 15:55:59.748778 1 flags.go:33] FLAG: --master=\"\"\nI0111 15:55:59.748782 1 flags.go:33] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0111 15:55:59.748788 1 flags.go:33] FLAG: --metrics-port=\"10249\"\nI0111 15:55:59.748793 1 flags.go:33] FLAG: --nodeport-addresses=\"[]\"\nI0111 15:55:59.748799 1 flags.go:33] FLAG: --oom-score-adj=\"-999\"\nI0111 15:55:59.748804 1 flags.go:33] FLAG: --profiling=\"false\"\nI0111 15:55:59.748808 1 flags.go:33] FLAG: --proxy-mode=\"\"\nI0111 15:55:59.748815 1 flags.go:33] FLAG: --proxy-port-range=\"\"\nI0111 15:55:59.748821 1 flags.go:33] FLAG: --skip-headers=\"false\"\nI0111 15:55:59.748826 1 flags.go:33] FLAG: --skip-log-headers=\"false\"\nI0111 15:55:59.748831 1 flags.go:33] FLAG: --stderrthreshold=\"2\"\nI0111 15:55:59.748836 1 
flags.go:33] FLAG: --storage-driver-buffer-duration=\"1m0s\"\nI0111 15:55:59.748841 1 flags.go:33] FLAG: --storage-driver-db=\"cadvisor\"\nI0111 15:55:59.748846 1 flags.go:33] FLAG: --storage-driver-host=\"localhost:8086\"\nI0111 15:55:59.748853 1 flags.go:33] FLAG: --storage-driver-password=\"root\"\nI0111 15:55:59.748858 1 flags.go:33] FLAG: --storage-driver-secure=\"false\"\nI0111 15:55:59.748862 1 flags.go:33] FLAG: --storage-driver-table=\"stats\"\nI0111 15:55:59.748867 1 flags.go:33] FLAG: --storage-driver-user=\"root\"\nI0111 15:55:59.748871 1 flags.go:33] FLAG: --udp-timeout=\"250ms\"\nI0111 15:55:59.748876 1 flags.go:33] FLAG: --update-machine-info-interval=\"5m0s\"\nI0111 15:55:59.748881 1 flags.go:33] FLAG: --v=\"2\"\nI0111 15:55:59.748886 1 flags.go:33] FLAG: --version=\"false\"\nI0111 15:55:59.748893 1 flags.go:33] FLAG: --vmodule=\"\"\nI0111 15:55:59.748898 1 flags.go:33] FLAG: --write-config-to=\"\"\nI0111 15:55:59.749607 1 feature_gate.go:216] feature gates: &{map[]}\nI0111 15:55:59.812393 1 node.go:135] Successfully retrieved node IP: 10.250.7.77\nI0111 15:55:59.812425 1 server_others.go:150] Using iptables Proxier.\nI0111 15:55:59.814153 1 server.go:529] Version: v1.16.4\nI0111 15:55:59.814656 1 conntrack.go:52] Setting nf_conntrack_max to 1048576\nI0111 15:55:59.814790 1 mount_linux.go:153] Detected OS without systemd\nI0111 15:55:59.815013 1 conntrack.go:83] Setting conntrack hashsize to 262144\nI0111 15:55:59.820693 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0111 15:55:59.820743 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0111 15:55:59.820865 1 config.go:131] Starting endpoints config controller\nI0111 15:55:59.820953 1 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0111 15:55:59.821040 1 config.go:313] Starting service config controller\nI0111 15:55:59.821198 1 shared_informer.go:197] Waiting for caches to sync for service config\nI0111 15:55:59.921143 1 shared_informer.go:204] Caches are synced for endpoints config \nI0111 15:55:59.921371 1 proxier.go:678] Not syncing iptables until Services and Endpoints have been received from master\nI0111 15:55:59.921414 1 shared_informer.go:204] Caches are synced for service config \nI0111 15:55:59.921528 1 service.go:357] Adding new service port \"kube-system/addons-nginx-ingress-controller:https\" at 100.107.194.218:443/TCP\nI0111 15:55:59.921554 1 service.go:357] Adding new service port \"kube-system/addons-nginx-ingress-controller:http\" at 100.107.194.218:80/TCP\nI0111 15:55:59.921571 1 service.go:357] Adding new service port \"kube-system/addons-nginx-ingress-nginx-ingress-k8s-backend:\" at 100.104.186.216:80/TCP\nI0111 15:55:59.921585 1 service.go:357] Adding new service port \"kube-system/kubernetes-dashboard:\" at 100.106.164.167:443/TCP\nI0111 15:55:59.921597 1 service.go:357] Adding new service port \"kube-system/kube-dns:dns\" at 100.104.0.10:53/UDP\nI0111 15:55:59.921609 1 service.go:357] Adding new service port \"kube-system/kube-dns:dns-tcp\" at 100.104.0.10:53/TCP\nI0111 15:55:59.921620 1 service.go:357] Adding new service port \"kube-system/kube-dns:metrics\" at 100.104.0.10:9153/TCP\nI0111 15:55:59.921648 1 service.go:357] Adding new service port \"kube-system/blackbox-exporter:probe\" at 100.107.248.105:9115/TCP\nI0111 15:55:59.921661 1 service.go:357] Adding new service port \"kube-system/calico-typha:calico-typha\" at 100.106.19.47:5473/TCP\nI0111 15:55:59.921678 1 service.go:357] 
Adding new service port \"kube-system/metrics-server:\" at 100.108.63.140:443/TCP\nI0111 15:55:59.921690 1 service.go:357] Adding new service port \"default/kubernetes:https\" at 100.104.0.1:443/TCP\nI0111 15:55:59.921701 1 service.go:357] Adding new service port \"kube-system/vpn-shoot:openvpn\" at 100.108.198.84:4314/TCP\nI0111 15:55:59.945943 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:55:59.946070 1 proxier.go:1519] Opened local port \"nodePort for kube-system/vpn-shoot:openvpn\" (:32265/tcp)\nI0111 15:55:59.946112 1 proxier.go:1519] Opened local port \"nodePort for kube-system/addons-nginx-ingress-controller:http\" (:32046/tcp)\nI0111 15:55:59.946193 1 proxier.go:1519] Opened local port \"nodePort for kube-system/addons-nginx-ingress-controller:https\" (:32298/tcp)\nI0111 15:56:23.126098 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:25.007033 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:32.160086 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:35.396757 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:38.557141 1 proxier.go:700] Stale udp service kube-system/kube-dns:dns -> 100.104.0.10\nI0111 15:56:38.585970 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:40.732898 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:41.972451 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:48.141874 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:57:02.959513 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:57:32.989277 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:58:03.019037 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:58:33.048764 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:59:03.080372 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:59:33.110814 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:59:49.262332 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:59:58.203872 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 16:00:28.232703 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:00:57.674852 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:01:13.080325 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:01:43.111687 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:02:13.142064 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:02:27.596020 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 16:02:27.603540 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0111 16:02:27.640901 1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 478 (1668)\nI0111 16:02:43.172492 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:03:13.203050 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:03:43.233723 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:04:13.264381 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:04:43.301588 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:05:13.331396 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:05:43.361398 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:06:13.390783 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:06:43.442903 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:13.495684 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:15.957564 1 service.go:357] Adding new service port \"webhook-2204/e2e-test-webhook:\" at 100.104.248.71:8443/TCP\nI0111 16:07:15.984476 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:16.028134 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:23.314076 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:23.337532 1 service.go:382] Removing service port 
\"webhook-2204/e2e-test-webhook:\"\nI0111 16:07:23.363541 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:53.395192 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:08:23.442086 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:08:53.492326 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:08:59.741232 1 service.go:357] Adding new service port \"pods-886/fooservice:\" at 100.110.37.249:8765/TCP\nI0111 16:08:59.768421 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:08:59.798847 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:09:07.684711 1 service.go:382] Removing service port \"pods-886/fooservice:\"\nI0111 16:09:07.712448 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:09:07.740183 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:09:37.773602 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:10:07.803107 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:10:37.832749 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:11:07.862572 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:11:37.893666 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:12:07.923761 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:12:37.954362 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:13:07.991172 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:13:32.943036 1 service.go:357] Adding new service port \"webhook-667/e2e-test-webhook:\" at 100.104.214.167:8443/TCP\nI0111 16:13:32.970334 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:13:32.999779 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:13:53.393536 1 service.go:382] Removing service port \"webhook-667/e2e-test-webhook:\"\nI0111 16:13:53.439158 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
16:13:53.487988 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:23.527491 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:26.055908 1 service.go:357] Adding new service port \"crd-webhook-5744/e2e-test-crd-conversion-webhook:\" at 100.110.200.70:9443/TCP\nI0111 16:14:26.081396 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:26.109687 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:33.566699 1 service.go:382] Removing service port \"crd-webhook-5744/e2e-test-crd-conversion-webhook:\"\nI0111 16:14:33.592935 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:33.627919 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:15:03.657458 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:15:33.687370 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:16:02.047611 1 service.go:357] Adding new service port \"dns-1144/dns-test-service-3:http\" at 100.109.136.3:80/TCP\nI0111 16:16:02.073887 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:16:04.956995 1 service.go:382] Removing service port \"dns-1144/dns-test-service-3:http\"\nI0111 16:16:04.983592 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:16:35.013958 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:17:05.045335 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:17:35.076279 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:18:05.107462 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:18:35.136617 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:19:05.178892 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:19:35.231226 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:20:05.278155 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:20:35.309514 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not 
support it\nI0111 16:21:05.339526 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:21:07.398914 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:21:15.394869 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:21:45.434735 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:22:15.471717 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:22:45.510815 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:23:15.544800 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:23:45.574766 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:24:15.612142 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:24:45.644595 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:25:15.675665 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:25:45.706946 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:26:15.736207 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:26:45.765700 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:27:15.795461 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:27:45.825455 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:28:15.855611 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:28:45.886110 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:29:15.924564 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:29:45.955863 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:30:15.986738 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:30:46.031715 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 16:31:16.062480 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:31:46.092463 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:16.122968 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:26.275550 1 service.go:357] Adding new service port \"services-8170/endpoint-test2:\" at 100.110.2.36:80/TCP\nI0111 16:32:26.301230 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:26.329793 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:28.284276 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:30.297820 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:30.748489 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:31.076660 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:31.173049 1 service.go:382] Removing service port \"services-8170/endpoint-test2:\"\nI0111 16:32:31.210019 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:31.240923 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:33:01.271009 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:33:31.301594 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:01.331939 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:17.433954 1 service.go:357] Adding new service port \"webhook-3494/e2e-test-webhook:\" at 100.107.103.21:8443/TCP\nI0111 16:34:17.462268 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:17.491972 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:24.380975 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:24.446733 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:24.450855 1 service.go:382] Removing service port \"webhook-3494/e2e-test-webhook:\"\nI0111 16:34:24.476231 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
16:34:54.505207 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:35:24.534390 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:35:54.564447 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:36:24.594095 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:36:54.624143 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:37:24.654417 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:37:54.684839 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:38:24.716045 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:38:54.759277 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:24.789875 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:42.093890 1 service.go:357] Adding new service port \"aggregator-2165/sample-api:\" at 100.111.244.145:7443/TCP\nI0111 16:39:42.119305 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:42.147391 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:54.271964 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:56.950294 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:57.094096 1 service.go:382] Removing service port \"aggregator-2165/sample-api:\"\nI0111 16:39:57.120250 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:57.148750 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:40:27.178695 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:40:50.726039 1 service.go:357] Adding new service port \"services-706/externalname-service:http\" at 100.107.75.177:80/TCP\nI0111 16:40:50.752064 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:40:50.752322 1 proxier.go:1519] Opened local port \"nodePort for services-706/externalname-service:http\" (:31646/tcp)\nI0111 16:40:50.781244 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
it\nI0111 16:40:52.977892 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:40:53.021535 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:02.514258 1 service.go:382] Removing service port \"services-706/externalname-service:http\"\nI0111 16:41:02.540600 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:02.569486 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:32.600351 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:51.898472 1 service.go:357] Adding new service port \"proxy-5821/proxy-service-7k47l:portname1\" at 100.107.195.169:80/TCP\nI0111 16:41:51.898501 1 service.go:357] Adding new service port \"proxy-5821/proxy-service-7k47l:portname2\" at 100.107.195.169:81/TCP\nI0111 16:41:51.898539 1 service.go:357] Adding new service port \"proxy-5821/proxy-service-7k47l:tlsportname1\" at 100.107.195.169:443/TCP\nI0111 16:41:51.898551 1 service.go:357] Adding new service port \"proxy-5821/proxy-service-7k47l:tlsportname2\" at 100.107.195.169:444/TCP\nI0111 16:41:51.924896 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:51.962690 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:57.778618 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:42:01.062728 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:42:19.052458 1 service.go:382] Removing service port \"proxy-5821/proxy-service-7k47l:portname1\"\nI0111 16:42:19.052537 1 service.go:382] Removing service port \"proxy-5821/proxy-service-7k47l:portname2\"\nI0111 16:42:19.052560 1 service.go:382] Removing service port \"proxy-5821/proxy-service-7k47l:tlsportname1\"\nI0111 16:42:19.052580 1 service.go:382] Removing service port \"proxy-5821/proxy-service-7k47l:tlsportname2\"\nI0111 16:42:19.078644 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:42:19.108279 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:42:49.139399 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:43:19.169868 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:43:49.208081 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:44:19.238777 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:44:49.268201 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:19.298109 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:23.242986 1 service.go:357] Adding new service port \"webhook-3118/e2e-test-webhook:\" at 100.108.100.65:8443/TCP\nI0111 16:45:23.269004 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:23.298006 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:30.812851 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:30.848460 1 service.go:382] Removing service port \"webhook-3118/e2e-test-webhook:\"\nI0111 16:45:30.873869 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:30.903360 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:49.326272 1 service.go:357] Adding new service port \"webhook-1264/e2e-test-webhook:\" at 100.110.182.82:8443/TCP\nI0111 16:45:49.352659 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:49.381777 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:56.201954 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:56.217448 1 service.go:382] Removing service port \"webhook-1264/e2e-test-webhook:\"\nI0111 16:45:56.250075 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:56.279191 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:26.309654 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:34.011999 1 service.go:357] Adding new service port \"webhook-3181/e2e-test-webhook:\" at 100.109.150.151:8443/TCP\nI0111 16:46:34.038677 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:34.068368 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:52.210179 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:52.240683 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:52.260726 1 service.go:382] Removing service port \"webhook-3181/e2e-test-webhook:\"\nI0111 16:46:52.287422 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does 
not support it\nI0111 16:47:22.317738 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:47:52.347502 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:48:22.377499 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:48:52.407534 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:49:22.437947 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:49:52.468574 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:50:22.504743 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:50:52.534155 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:51:22.564091 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:51:52.594029 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:52:22.624371 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:52:52.654771 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:53:22.685079 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:53:52.716332 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:54:22.747527 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:54:48.412595 1 service.go:357] Adding new service port \"webhook-2228/e2e-test-webhook:\" at 100.109.182.153:8443/TCP\nI0111 16:54:48.437777 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:54:48.466154 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:54:55.944872 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:54:55.948959 1 service.go:382] Removing service port \"webhook-2228/e2e-test-webhook:\"\nI0111 16:54:55.973737 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:55:26.020202 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:55:56.050477 1 proxier.go:793] Not using `--random-fully` in 
the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:26.080988 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:27.535294 1 service.go:357] Adding new service port \"webhook-9087/e2e-test-webhook:\" at 100.108.52.131:8443/TCP\nI0111 16:56:27.560126 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:27.588888 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:35.100710 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:35.104843 1 service.go:382] Removing service port \"webhook-9087/e2e-test-webhook:\"\nI0111 16:56:35.130661 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:57:05.160586 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:57:35.198800 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:05.248050 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:35.286675 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:38.040116 1 service.go:357] Adding new service port \"crd-webhook-5777/e2e-test-crd-conversion-webhook:\" at 100.110.255.25:9443/TCP\nI0111 16:58:38.066988 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:38.097721 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:45.981229 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:46.053225 1 service.go:382] Removing service port \"crd-webhook-5777/e2e-test-crd-conversion-webhook:\"\nI0111 16:58:46.080987 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:07.882380 1 service.go:357] Adding new service port \"services-8385/nodeport-test:http\" at 100.108.130.218:80/TCP\nI0111 16:59:07.907743 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:07.907909 1 proxier.go:1519] Opened local port \"nodePort for services-8385/nodeport-test:http\" (:30629/tcp)\nI0111 16:59:07.935878 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:09.683464 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:09.711446 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 16:59:25.785216 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:25.791442 1 service.go:382] Removing service port \"services-8385/nodeport-test:http\"\nI0111 16:59:25.828531 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:55.861830 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:00:25.898420 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:00:55.930585 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:01:25.962188 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:01:56.002178 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:02:26.035789 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:02:52.884938 1 service.go:357] Adding new service port \"resourcequota-4665/test-service:\" at 100.111.227.87:80/TCP\nI0111 17:02:52.912183 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:02:55.066268 1 service.go:382] Removing service port \"resourcequota-4665/test-service:\"\nI0111 17:02:55.091442 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:03:25.120672 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:03:55.150340 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:04:25.185034 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:04:55.239669 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:05:25.276243 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:05:55.306895 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:06:25.337963 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:06:55.369248 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:07:25.398377 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:07:55.428503 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the 
local version of iptables does not support it\nI0111 17:08:25.458988 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:08:55.488520 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:09:25.518311 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:09:55.548562 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:10:25.579827 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:10:55.619125 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:11:25.650603 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 17:11:33.416468 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=14365&timeout=7m45s&timeoutSeconds=465&watch=true: net/http: TLS handshake timeout\nE0111 17:11:33.416668 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=14276&timeout=6m31s&timeoutSeconds=391&watch=true: net/http: TLS handshake timeout\nI0111 17:11:44.419973 1 trace.go:116] Trace[720868957]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-01-11 17:11:34.41659843 +0000 UTC m=+4534.760050816) (total time: 10.003338561s):\nTrace[720868957]: [10.003338561s] [10.003338561s] END\nE0111 17:11:44.420068 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Endpoints: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:44.420592 1 trace.go:116] Trace[951259020]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-01-11 17:11:34.417826988 +0000 UTC m=+4534.761279380) (total time: 10.002748232s):\nTrace[951259020]: [10.002748232s] [10.002748232s] END\nE0111 17:11:44.420701 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:55.423485 1 trace.go:116] Trace[185177740]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-01-11 17:11:45.420329767 +0000 UTC m=+4545.763782144) (total time: 10.003128926s):\nTrace[185177740]: [10.003128926s] 
[10.003128926s] END\nE0111 17:11:55.423504 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Endpoints: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:55.424189 1 trace.go:116] Trace[520462365]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-01-11 17:11:45.421387739 +0000 UTC m=+4545.764840087) (total time: 10.002786816s):\nTrace[520462365]: [10.002786816s] [10.002786816s] END\nE0111 17:11:55.424204 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:55.680343 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:12:06.426616 1 trace.go:116] Trace[2029802002]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-01-11 17:11:56.423661423 +0000 UTC m=+4556.767113776) (total time: 10.002929848s):\nTrace[2029802002]: [10.002929848s] [10.002929848s] END\nE0111 17:12:06.426657 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Endpoints: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:12:06.427680 1 trace.go:116] Trace[293919546]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-01-11 17:11:56.424665526 +0000 UTC m=+4556.768117892) (total time: 10.00299949s):\nTrace[293919546]: [10.00299949s] [10.00299949s] END\nE0111 17:12:06.427695 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:12:25.711260 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:12:55.742030 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:13:25.773162 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:13:55.804559 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:14:25.839844 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:14:47.693125 1 service.go:357] Adding new service port \"webhook-3412/e2e-test-webhook:\" at 100.107.124.220:8443/TCP\nI0111 17:14:47.720714 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
17:14:47.758787 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:14:54.288034 1 service.go:382] Removing service port \"webhook-3412/e2e-test-webhook:\"\nI0111 17:14:54.320582 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:14:54.354018 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:15:24.385312 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:15:54.414831 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:16:24.445316 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:16:54.475095 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:17:24.505476 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:17:54.536117 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:18:24.566856 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:18:54.598713 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:19:24.630084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:19:54.667498 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:20:24.697444 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:20:54.727536 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:14.002926 1 service.go:357] Adding new service port \"webhook-1291/e2e-test-webhook:\" at 100.109.201.250:8443/TCP\nI0111 17:21:14.033812 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:14.063158 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:20.928870 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:20.949396 1 service.go:382] Removing service port \"webhook-1291/e2e-test-webhook:\"\nI0111 17:21:20.986567 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:51.019530 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
it\nI0111 17:22:00.047748 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:00.052156 1 service.go:357] Adding new service port \"services-1502/multi-endpoint-test:portname1\" at 100.109.36.91:80/TCP\nI0111 17:22:00.052180 1 service.go:357] Adding new service port \"services-1502/multi-endpoint-test:portname2\" at 100.109.36.91:81/TCP\nI0111 17:22:00.077323 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:01.825170 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:03.837602 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:04.492111 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:04.762037 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:04.914678 1 service.go:382] Removing service port \"services-1502/multi-endpoint-test:portname1\"\nI0111 17:22:04.914700 1 service.go:382] Removing service port \"services-1502/multi-endpoint-test:portname2\"\nI0111 17:22:04.940800 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:04.970338 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:35.001434 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:23:05.033475 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:23:35.065065 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:24:05.095186 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:24:35.125065 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:25:05.163762 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:25:35.215875 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:26:05.266006 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:26:35.298486 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:27:05.329514 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:27:35.366324 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 17:28:05.397557 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:28:35.428929 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:29:05.459555 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:29:35.490178 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:30:05.524781 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:30:35.555198 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:31:05.585843 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:31:35.621320 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:32:05.652612 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:32:35.683877 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:33:05.715081 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:33:33.148030 1 service.go:357] Adding new service port \"dns-5967/test-service-2:http\" at 100.106.60.114:80/TCP\nI0111 17:33:33.174970 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:33:33.203409 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:33:34.662000 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:04.694141 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:09.484948 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:09.549512 1 service.go:382] Removing service port \"dns-5967/test-service-2:http\"\nI0111 17:34:09.576462 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:09.606274 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:39.641871 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:09.673841 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:39.705239 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:47.995675 1 service.go:357] Adding new service port \"webhook-9616/e2e-test-webhook:\" at 100.109.158.139:8443/TCP\nI0111 17:35:48.033779 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:48.072301 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:56.224787 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:56.390741 1 service.go:382] Removing service port \"webhook-9616/e2e-test-webhook:\"\nI0111 17:35:56.417876 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:56.448088 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:36:26.483963 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:36:56.514259 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:37:26.545135 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:37:56.575937 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:38:26.606423 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:38:56.648858 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:39:26.682040 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:39:56.713460 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:40:26.745068 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:40:56.775669 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:41:26.805481 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:41:56.835738 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:42:26.866320 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:42:56.896963 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:43:26.928080 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 17:43:56.958793 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:16.240575 1 service.go:357] Adding new service port \"webhook-9359/e2e-test-webhook:\" at 100.106.95.64:8443/TCP\nI0111 17:44:16.267816 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:16.297960 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:24.322040 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:24.372373 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:24.564519 1 service.go:382] Removing service port \"webhook-9359/e2e-test-webhook:\"\nI0111 17:44:24.591861 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:54.623178 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:45:24.060166 1 service.go:357] Adding new service port \"webhook-2924/e2e-test-webhook:\" at 100.106.21.231:8443/TCP\nI0111 17:45:24.086543 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:45:24.115089 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:45:32.671105 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:45:32.765958 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:45:32.938873 1 service.go:382] Removing service port \"webhook-2924/e2e-test-webhook:\"\nI0111 17:45:32.964610 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:46:02.994611 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:46:33.025470 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:47:03.056366 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:47:33.094752 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:48:03.126212 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:48:33.157876 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:48:58.024618 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 17:48:58.025019 1 streamwatcher.go:114] 
Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 17:49:03.187435 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:33.223224 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:51.059330 1 service.go:357] Adding new service port \"webhook-3373/e2e-test-webhook:\" at 100.106.185.29:8443/TCP\nI0111 17:49:51.085998 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:51.123932 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:58.971321 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:59.241491 1 service.go:382] Removing service port \"webhook-3373/e2e-test-webhook:\"\nI0111 17:49:59.268793 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:29.299388 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.330585 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.796584 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fghsq:\" at 100.105.187.137:80/TCP\nI0111 17:50:59.825745 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.855498 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.892594 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lfwkn:\" at 100.108.2.133:80/TCP\nI0111 17:50:59.920468 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.924935 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zm7xz:\" at 100.108.179.223:80/TCP\nI0111 17:50:59.963796 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.979033 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8r2gm:\" at 100.110.229.48:80/TCP\nI0111 17:51:00.035403 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.040213 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lntnk:\" at 100.111.229.145:80/TCP\nI0111 17:51:00.040237 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bc9zc:\" at 100.106.176.119:80/TCP\nI0111 17:51:00.040249 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7tzh4:\" at 100.107.210.180:80/TCP\nI0111 17:51:00.040261 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gr2kb:\" at 100.109.54.14:80/TCP\nI0111 17:51:00.040273 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-p4jfx:\" at 
100.107.106.156:80/TCP\nI0111 17:51:00.040286 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xv9mt:\" at 100.109.171.138:80/TCP\nI0111 17:51:00.040304 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-2vcvb:\" at 100.108.229.93:80/TCP\nI0111 17:51:00.040321 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mrqf5:\" at 100.105.169.205:80/TCP\nI0111 17:51:00.040334 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9g794:\" at 100.107.191.168:80/TCP\nI0111 17:51:00.040349 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-p2tzk:\" at 100.105.48.215:80/TCP\nI0111 17:51:00.040361 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vf4x2:\" at 100.109.53.238:80/TCP\nI0111 17:51:00.040383 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-pnntp:\" at 100.109.194.7:80/TCP\nI0111 17:51:00.040402 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zdbqj:\" at 100.107.62.57:80/TCP\nI0111 17:51:00.040418 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-j8b4c:\" at 100.108.86.36:80/TCP\nI0111 17:51:00.065468 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.076642 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-l726n:\" at 100.111.133.140:80/TCP\nI0111 17:51:00.104956 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.110543 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lmv2c:\" at 100.104.67.175:80/TCP\nI0111 17:51:00.110564 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7f2s5:\" at 100.109.205.139:80/TCP\nI0111 17:51:00.110591 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-snbx2:\" at 100.105.168.58:80/TCP\nI0111 17:51:00.110607 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mrvgv:\" at 100.104.241.196:80/TCP\nI0111 17:51:00.110660 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zwf66:\" at 100.106.123.103:80/TCP\nI0111 17:51:00.138511 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.163549 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vhhhn:\" at 100.110.218.117:80/TCP\nI0111 17:51:00.195327 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.202022 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hsq6b:\" at 100.105.62.188:80/TCP\nI0111 17:51:00.202042 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vcsdq:\" at 100.107.139.215:80/TCP\nI0111 17:51:00.202053 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zxcgj:\" at 100.106.50.111:80/TCP\nI0111 17:51:00.202065 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rjhwv:\" at 100.106.226.91:80/TCP\nI0111 17:51:00.202079 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kvdgg:\" at 100.108.106.176:80/TCP\nI0111 17:51:00.202091 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-r6d2d:\" at 100.105.33.186:80/TCP\nI0111 17:51:00.202103 1 
service.go:357] Adding new service port \"svc-latency-980/latency-svc-swtmz:\" at 100.111.70.92:80/TCP\nI0111 17:51:00.202113 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6gs6p:\" at 100.104.204.171:80/TCP\nI0111 17:51:00.202123 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8zknf:\" at 100.107.126.187:80/TCP\nI0111 17:51:00.231271 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.238145 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lnzms:\" at 100.108.15.116:80/TCP\nI0111 17:51:00.238165 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rlw8d:\" at 100.104.165.122:80/TCP\nI0111 17:51:00.238177 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9brn2:\" at 100.111.8.132:80/TCP\nI0111 17:51:00.238189 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-tjclt:\" at 100.111.198.65:80/TCP\nI0111 17:51:00.238202 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8frd7:\" at 100.106.231.195:80/TCP\nI0111 17:51:00.266560 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.275582 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-pkbc2:\" at 100.111.8.84:80/TCP\nI0111 17:51:00.275607 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9svxz:\" at 100.109.47.169:80/TCP\nI0111 17:51:00.275648 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-x2lzk:\" at 100.104.97.159:80/TCP\nI0111 17:51:00.275787 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-4xzhg:\" at 100.107.90.120:80/TCP\nI0111 17:51:00.305218 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.312786 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vx7cx:\" at 100.106.37.99:80/TCP\nI0111 17:51:00.312810 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-2xf6w:\" at 100.104.119.187:80/TCP\nI0111 17:51:00.312826 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-98ml7:\" at 100.107.208.79:80/TCP\nI0111 17:51:00.312840 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-79x6g:\" at 100.107.192.82:80/TCP\nI0111 17:51:00.312854 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hvl5r:\" at 100.109.167.153:80/TCP\nI0111 17:51:00.312864 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rdn6f:\" at 100.111.68.202:80/TCP\nI0111 17:51:00.312875 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-46qpw:\" at 100.107.166.208:80/TCP\nI0111 17:51:00.312884 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-v8l5m:\" at 100.111.169.103:80/TCP\nI0111 17:51:00.341671 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.349461 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-j4chh:\" at 100.106.36.124:80/TCP\nI0111 17:51:00.349482 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-z965h:\" at 100.111.129.30:80/TCP\nI0111 17:51:00.378697 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule 
for iptables because the local version of iptables does not support it\nI0111 17:51:00.389374 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5lgpw:\" at 100.105.235.201:80/TCP\nI0111 17:51:00.418435 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.455322 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.462788 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-ld26b:\" at 100.104.143.93:80/TCP\nI0111 17:51:00.491703 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.499394 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rc2wb:\" at 100.111.67.5:80/TCP\nI0111 17:51:00.528441 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.538017 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dcqcb:\" at 100.106.228.88:80/TCP\nI0111 17:51:00.573229 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.610230 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.618389 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6wg65:\" at 100.110.3.69:80/TCP\nI0111 17:51:00.649704 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.657864 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-jgnzj:\" at 100.105.228.55:80/TCP\nI0111 17:51:00.686893 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.695108 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-4mxvd:\" at 100.107.195.139:80/TCP\nI0111 17:51:00.724028 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.738965 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-x9g7w:\" at 100.105.241.166:80/TCP\nI0111 17:51:00.768973 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.806687 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.814748 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-k4xlt:\" at 100.106.182.115:80/TCP\nI0111 17:51:00.844757 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.853360 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hmnht:\" at 100.105.47.66:80/TCP\nI0111 17:51:00.892106 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.900493 1 service.go:357] Adding new service port 
\"svc-latency-980/latency-svc-2qt4b:\" at 100.109.60.9:80/TCP\nI0111 17:51:00.928219 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.942751 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-qgk5l:\" at 100.105.57.4:80/TCP\nI0111 17:51:00.970846 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.007611 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.015842 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xnx5b:\" at 100.109.70.130:80/TCP\nI0111 17:51:01.046490 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.054934 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-288rg:\" at 100.111.19.237:80/TCP\nI0111 17:51:01.082868 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.091814 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-22rbd:\" at 100.105.68.39:80/TCP\nI0111 17:51:01.128518 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.165937 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.174450 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xq52d:\" at 100.110.133.128:80/TCP\nI0111 17:51:01.203753 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.212298 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-ptll6:\" at 100.107.136.143:80/TCP\nI0111 17:51:01.241465 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.250643 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bc45j:\" at 100.110.1.82:80/TCP\nI0111 17:51:01.278788 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.292214 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7dtnl:\" at 100.107.106.231:80/TCP\nI0111 17:51:01.324596 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.362718 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.375981 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mw7zl:\" at 100.109.59.119:80/TCP\nI0111 17:51:01.411504 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.420687 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zxd2z:\" at 100.105.72.206:80/TCP\nI0111 17:51:01.450863 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 17:51:01.459970 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7clvx:\" at 100.111.82.165:80/TCP\nI0111 17:51:01.489263 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.498746 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-stsdr:\" at 100.109.178.79:80/TCP\nI0111 17:51:01.528165 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.538622 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lmgcb:\" at 100.107.232.224:80/TCP\nI0111 17:51:01.568913 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.607737 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.616965 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-m54tw:\" at 100.108.60.252:80/TCP\nI0111 17:51:01.647323 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.656712 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-pcmnn:\" at 100.105.155.74:80/TCP\nI0111 17:51:01.687612 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.697516 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-t6pk7:\" at 100.111.61.11:80/TCP\nI0111 17:51:01.733332 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.743311 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cm8m8:\" at 100.110.224.216:80/TCP\nI0111 17:51:01.773056 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.812895 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.822621 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6bdz4:\" at 100.110.78.118:80/TCP\nI0111 17:51:01.853135 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.862963 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5kt9s:\" at 100.111.70.248:80/TCP\nI0111 17:51:01.902554 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.913010 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kqklr:\" at 100.109.245.148:80/TCP\nI0111 17:51:01.943906 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.954157 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gdg78:\" at 100.108.97.160:80/TCP\nI0111 17:51:01.984616 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 17:51:02.003573 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-tscvw:\" at 100.111.243.99:80/TCP\nI0111 17:51:02.036470 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.047186 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7b72t:\" at 100.110.171.33:80/TCP\nI0111 17:51:02.078315 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.088730 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lv8jh:\" at 100.107.204.188:80/TCP\nI0111 17:51:02.119910 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.160449 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.170489 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5996l:\" at 100.111.250.171:80/TCP\nI0111 17:51:02.201553 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.211668 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gmd2g:\" at 100.104.111.159:80/TCP\nI0111 17:51:02.242590 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.253041 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q4xm7:\" at 100.105.110.33:80/TCP\nI0111 17:51:02.289338 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.300320 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9sqw5:\" at 100.108.174.113:80/TCP\nI0111 17:51:02.338312 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.349186 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-jl9vl:\" at 100.108.183.118:80/TCP\nI0111 17:51:02.380052 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.390915 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6q7br:\" at 100.106.3.9:80/TCP\nI0111 17:51:02.421832 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.463776 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.474199 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cp8qd:\" at 100.106.66.229:80/TCP\nI0111 17:51:02.506830 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.517286 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-r46hf:\" at 100.105.135.93:80/TCP\nI0111 17:51:02.558904 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does 
not support it\nI0111 17:51:02.574182 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fp8kh:\" at 100.104.152.180:80/TCP\nI0111 17:51:02.616073 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.634660 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vqbf2:\" at 100.106.172.42:80/TCP\nI0111 17:51:02.678084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.695831 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-jd985:\" at 100.110.75.104:80/TCP\nI0111 17:51:02.695868 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fxthb:\" at 100.105.62.22:80/TCP\nI0111 17:51:02.740146 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.756756 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6m7zb:\" at 100.107.76.88:80/TCP\nI0111 17:51:02.805162 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.817017 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8zt78:\" at 100.104.44.93:80/TCP\nI0111 17:51:02.850095 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.861199 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-ffpdw:\" at 100.110.77.156:80/TCP\nI0111 17:51:02.892883 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.914481 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-f8j4x:\" at 100.109.242.24:80/TCP\nI0111 17:51:02.962035 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.973334 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q9snn:\" at 100.106.116.103:80/TCP\nI0111 17:51:03.006585 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.017837 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-pf59q:\" at 100.110.215.122:80/TCP\nI0111 17:51:03.051260 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.062743 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-54tw6:\" at 100.110.24.93:80/TCP\nI0111 17:51:03.095529 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.107055 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6st9l:\" at 100.108.37.219:80/TCP\nI0111 17:51:03.140221 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.152864 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-4bzlp:\" at 100.108.17.116:80/TCP\nI0111 17:51:03.184560 1 proxier.go:793] Not using `--random-fully` in the 
MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.196944 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7s4p7:\" at 100.109.118.209:80/TCP\nI0111 17:51:03.229056 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.242081 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-d7cht:\" at 100.106.247.150:80/TCP\nI0111 17:51:03.274799 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.326713 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.338817 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-s672b:\" at 100.107.196.208:80/TCP\nI0111 17:51:03.371807 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.383499 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-g7p24:\" at 100.110.181.183:80/TCP\nI0111 17:51:03.430030 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.464301 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6zm2x:\" at 100.111.196.73:80/TCP\nI0111 17:51:03.464326 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dhjzj:\" at 100.107.79.86:80/TCP\nI0111 17:51:03.522977 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.542141 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5pqpk:\" at 100.104.171.54:80/TCP\nI0111 17:51:03.542165 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bpxkp:\" at 100.108.19.109:80/TCP\nI0111 17:51:03.576084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.652093 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.672872 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hb49s:\" at 100.107.206.42:80/TCP\nI0111 17:51:03.672902 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cnh47:\" at 100.108.201.76:80/TCP\nI0111 17:51:03.719356 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.738294 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vsnnb:\" at 100.109.126.119:80/TCP\nI0111 17:51:03.783793 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.802471 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9ssgv:\" at 100.106.53.73:80/TCP\nI0111 17:51:03.802494 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bv4ds:\" at 100.104.3.219:80/TCP\nI0111 17:51:03.848740 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
17:51:03.865005 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q8qd6:\" at 100.104.2.206:80/TCP\nI0111 17:51:03.898864 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.913803 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cv5ch:\" at 100.108.35.180:80/TCP\nI0111 17:51:03.949603 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.962041 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kwxmp:\" at 100.105.194.230:80/TCP\nI0111 17:51:04.003376 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.023394 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mhjfx:\" at 100.105.107.95:80/TCP\nI0111 17:51:04.056384 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.069414 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-b9jhs:\" at 100.109.228.0:80/TCP\nI0111 17:51:04.101673 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.115478 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mxvqc:\" at 100.105.75.32:80/TCP\nI0111 17:51:04.148778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.161952 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-srgbh:\" at 100.106.151.178:80/TCP\nI0111 17:51:04.194994 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.215008 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-g6fcm:\" at 100.107.44.222:80/TCP\nI0111 17:51:04.260717 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.280774 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5d5cz:\" at 100.106.196.198:80/TCP\nI0111 17:51:04.329991 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.351986 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zqv5x:\" at 100.107.36.53:80/TCP\nI0111 17:51:04.352010 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-84lkj:\" at 100.109.143.18:80/TCP\nI0111 17:51:04.384345 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.398932 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-qqfg9:\" at 100.110.104.135:80/TCP\nI0111 17:51:04.438010 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.452674 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-2hgv4:\" at 100.106.127.50:80/TCP\nI0111 17:51:04.484692 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 17:51:04.498951 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gpdhb:\" at 100.109.197.79:80/TCP\nI0111 17:51:04.531176 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.545540 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xdmk5:\" at 100.107.69.128:80/TCP\nI0111 17:51:04.578327 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.592456 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kwskk:\" at 100.111.218.17:80/TCP\nI0111 17:51:04.626188 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.640463 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kk2cr:\" at 100.111.233.144:80/TCP\nI0111 17:51:04.674674 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.721357 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.735117 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7cd8s:\" at 100.107.86.167:80/TCP\nI0111 17:51:04.768366 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.782120 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xnsp9:\" at 100.108.45.224:80/TCP\nI0111 17:51:04.815783 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.829559 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-drw45:\" at 100.109.174.142:80/TCP\nI0111 17:51:04.863836 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.878108 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q7znd:\" at 100.111.147.31:80/TCP\nI0111 17:51:04.912427 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.926905 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-k8k5t:\" at 100.110.56.30:80/TCP\nI0111 17:51:04.977311 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.992473 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-d76kq:\" at 100.106.125.35:80/TCP\nI0111 17:51:04.992496 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rsp5z:\" at 100.107.92.67:80/TCP\nI0111 17:51:05.026643 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.041815 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-g9d9x:\" at 100.111.196.5:80/TCP\nI0111 17:51:05.075765 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
it\nI0111 17:51:05.091351 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dsrhl:\" at 100.111.32.49:80/TCP\nI0111 17:51:05.132115 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.147397 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-qxbrh:\" at 100.109.22.132:80/TCP\nI0111 17:51:05.185330 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.204410 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-jjw2m:\" at 100.105.162.136:80/TCP\nI0111 17:51:05.267415 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.290932 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6rsv9:\" at 100.107.15.18:80/TCP\nI0111 17:51:05.329517 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.345763 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-269dd:\" at 100.108.230.157:80/TCP\nI0111 17:51:05.345786 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-t84g2:\" at 100.105.106.47:80/TCP\nI0111 17:51:05.380180 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.395973 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lxs58:\" at 100.104.145.28:80/TCP\nI0111 17:51:05.430268 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.446145 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rrvc5:\" at 100.109.184.131:80/TCP\nI0111 17:51:05.479943 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.496887 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-tctgc:\" at 100.108.176.38:80/TCP\nI0111 17:51:05.531436 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.547486 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fsxrq:\" at 100.105.234.105:80/TCP\nI0111 17:51:05.581602 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.597609 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-r7ncf:\" at 100.105.124.130:80/TCP\nI0111 17:51:05.632225 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.648295 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-s6dcx:\" at 100.105.233.166:80/TCP\nI0111 17:51:05.683118 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.699842 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gxpfj:\" at 100.109.252.109:80/TCP\nI0111 17:51:05.735836 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE 
rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.752154 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-4ld2b:\" at 100.104.113.50:80/TCP\nI0111 17:51:05.787821 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.804389 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zqw96:\" at 100.111.167.113:80/TCP\nI0111 17:51:05.840705 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.863611 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lxr7q:\" at 100.107.252.142:80/TCP\nI0111 17:51:05.899496 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.915316 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vngrh:\" at 100.106.133.96:80/TCP\nI0111 17:51:05.954917 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.976371 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dgwdn:\" at 100.104.123.179:80/TCP\nI0111 17:51:06.028931 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.045965 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cxs2s:\" at 100.111.18.23:80/TCP\nI0111 17:51:06.045987 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-2nw4c:\" at 100.109.220.251:80/TCP\nI0111 17:51:06.081228 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.098038 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vl2b4:\" at 100.106.208.213:80/TCP\nI0111 17:51:06.133210 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.150433 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8zqlx:\" at 100.104.229.79:80/TCP\nI0111 17:51:06.186788 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.204241 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-phh7t:\" at 100.106.113.31:80/TCP\nI0111 17:51:06.241444 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.258433 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hhlk8:\" at 100.108.43.70:80/TCP\nI0111 17:51:06.294837 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.311964 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-djk2k:\" at 100.109.32.105:80/TCP\nI0111 17:51:06.348463 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.365082 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-67jwd:\" at 100.105.52.53:80/TCP\nI0111 
17:51:06.401940 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.418253 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xtbm5:\" at 100.107.21.131:80/TCP\nI0111 17:51:06.455372 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.472107 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-49tnq:\" at 100.107.166.64:80/TCP\nI0111 17:51:06.508160 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.525275 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9vf2x:\" at 100.107.117.209:80/TCP\nI0111 17:51:06.562106 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.578727 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-x5vmr:\" at 100.104.188.40:80/TCP\nI0111 17:51:06.622581 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.639285 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-qjm42:\" at 100.106.13.122:80/TCP\nI0111 17:51:06.639319 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cpkvc:\" at 100.109.59.98:80/TCP\nI0111 17:51:06.675895 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.693245 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cdzgk:\" at 100.105.90.25:80/TCP\nI0111 17:51:06.729911 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.748368 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9wg8b:\" at 100.106.29.14:80/TCP\nI0111 17:51:06.784582 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.802146 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7lv9c:\" at 100.107.183.157:80/TCP\nI0111 17:51:06.841128 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.859049 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zqzcg:\" at 100.110.3.191:80/TCP\nI0111 17:51:06.896837 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.915507 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-htk7c:\" at 100.107.52.233:80/TCP\nI0111 17:51:06.953922 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.979122 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-99qc6:\" at 100.108.240.52:80/TCP\nI0111 17:51:07.017247 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.035206 1 service.go:357] Adding new service port 
\"svc-latency-980/latency-svc-g98nc:\" at 100.108.141.219:80/TCP\nI0111 17:51:07.071082 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.089356 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-k2p7n:\" at 100.111.193.69:80/TCP\nI0111 17:51:07.125018 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.143324 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dpzqk:\" at 100.106.65.22:80/TCP\nI0111 17:51:07.143347 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6mszj:\" at 100.110.7.202:80/TCP\nI0111 17:51:07.179905 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.198906 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kmndd:\" at 100.104.182.149:80/TCP\nI0111 17:51:07.235663 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.254567 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-r5ddp:\" at 100.111.211.43:80/TCP\nI0111 17:51:07.290495 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.308608 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bnx57:\" at 100.111.88.251:80/TCP\nI0111 17:51:07.345729 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.371495 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-65r7h:\" at 100.111.32.143:80/TCP\nI0111 17:51:07.407957 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.426167 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-m5bpf:\" at 100.110.112.240:80/TCP\nI0111 17:51:07.462800 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.480822 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-z84p8:\" at 100.111.175.73:80/TCP\nI0111 17:51:07.517602 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.536101 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fxq59:\" at 100.111.181.79:80/TCP\nI0111 17:51:07.573283 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.592645 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-ltfvt:\" at 100.104.255.216:80/TCP\nI0111 17:51:07.592689 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q9m8h:\" at 100.104.190.239:80/TCP\nI0111 17:51:07.629555 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.648949 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hfhdr:\" at 100.109.204.86:80/TCP\nI0111 17:51:07.686524 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.705750 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gs4w5:\" at 100.106.33.191:80/TCP\nI0111 17:51:07.743531 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.762469 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rcfcz:\" at 100.106.10.87:80/TCP\nI0111 17:51:07.800348 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.856665 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.913481 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.970896 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.048306 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.106063 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.170607 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.229278 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.287522 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.364837 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.426258 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.484020 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.644841 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-22rbd:\"\nI0111 17:51:13.685322 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.705122 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-288rg:\"\nI0111 17:51:13.705149 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-269dd:\"\nI0111 17:51:13.743859 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.763711 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2hgv4:\"\nI0111 17:51:13.801525 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.839396 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2nw4c:\"\nI0111 17:51:13.890718 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.919345 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2qt4b:\"\nI0111 17:51:13.972707 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.004514 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2vcvb:\"\nI0111 17:51:14.065165 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.099783 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2xf6w:\"\nI0111 17:51:14.174502 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.239415 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-46qpw:\"\nI0111 17:51:14.278318 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.335536 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.440080 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-49tnq:\"\nI0111 17:51:14.478958 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.535368 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.639818 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-4bzlp:\"\nI0111 17:51:14.679313 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.736796 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.757570 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-4ld2b:\"\nI0111 17:51:14.795613 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.838823 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-4mxvd:\"\nI0111 17:51:14.879034 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.898880 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-4xzhg:\"\nI0111 17:51:14.936192 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.956227 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-54tw6:\"\nI0111 17:51:14.956250 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5996l:\"\nI0111 17:51:14.994150 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.039328 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5d5cz:\"\nI0111 17:51:15.079421 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because 
the local version of iptables does not support it\nI0111 17:51:15.098261 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5kt9s:\"\nI0111 17:51:15.098282 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5lgpw:\"\nI0111 17:51:15.147444 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.175937 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5pqpk:\"\nI0111 17:51:15.175964 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-65r7h:\"\nI0111 17:51:15.175974 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-67jwd:\"\nI0111 17:51:15.175993 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6bdz4:\"\nI0111 17:51:15.243292 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.277569 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6gs6p:\"\nI0111 17:51:15.277595 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6m7zb:\"\nI0111 17:51:15.324716 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.343306 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6mszj:\"\nI0111 17:51:15.383279 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.401095 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6q7br:\"\nI0111 17:51:15.401120 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6rsv9:\"\nI0111 17:51:15.401130 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6st9l:\"\nI0111 17:51:15.440559 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.459954 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6wg65:\"\nI0111 17:51:15.459975 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6zm2x:\"\nI0111 17:51:15.497176 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.539196 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-79x6g:\"\nI0111 17:51:15.578552 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.596073 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7b72t:\"\nI0111 17:51:15.596107 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7cd8s:\"\nI0111 17:51:15.632998 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.651466 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7clvx:\"\nI0111 17:51:15.651499 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7dtnl:\"\nI0111 17:51:15.690249 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.707697 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-84lkj:\"\nI0111 
17:51:15.707714 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8frd7:\"\nI0111 17:51:15.707722 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7f2s5:\"\nI0111 17:51:15.707730 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7lv9c:\"\nI0111 17:51:15.707739 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7s4p7:\"\nI0111 17:51:15.707746 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7tzh4:\"\nI0111 17:51:15.744985 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.761447 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8r2gm:\"\nI0111 17:51:15.797911 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.839709 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8zknf:\"\nI0111 17:51:15.877787 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.893744 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8zqlx:\"\nI0111 17:51:15.893764 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8zt78:\"\nI0111 17:51:15.927179 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.944250 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-98ml7:\"\nI0111 17:51:15.979790 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.001614 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9brn2:\"\nI0111 17:51:16.001645 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9g794:\"\nI0111 17:51:16.001654 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9sqw5:\"\nI0111 17:51:16.001662 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9ssgv:\"\nI0111 17:51:16.001670 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9svxz:\"\nI0111 17:51:16.001678 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9vf2x:\"\nI0111 17:51:16.001686 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-99qc6:\"\nI0111 17:51:16.056368 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.073250 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-bnx57:\"\nI0111 17:51:16.073272 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-bpxkp:\"\nI0111 17:51:16.073280 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9wg8b:\"\nI0111 17:51:16.073288 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-b9jhs:\"\nI0111 17:51:16.073296 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-bc45j:\"\nI0111 17:51:16.073304 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-bc9zc:\"\nI0111 17:51:16.109961 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.125164 1 service.go:382] Removing service port 
\"svc-latency-980/latency-svc-bv4ds:\"\nI0111 17:51:16.125183 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cdzgk:\"\nI0111 17:51:16.125192 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cm8m8:\"\nI0111 17:51:16.125200 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cnh47:\"\nI0111 17:51:16.125294 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cp8qd:\"\nI0111 17:51:16.168443 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.185929 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cpkvc:\"\nI0111 17:51:16.185952 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cv5ch:\"\nI0111 17:51:16.185962 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cxs2s:\"\nI0111 17:51:16.185969 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-d76kq:\"\nI0111 17:51:16.186068 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-d7cht:\"\nI0111 17:51:16.186082 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dcqcb:\"\nI0111 17:51:16.186091 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dgwdn:\"\nI0111 17:51:16.220278 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.235150 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dhjzj:\"\nI0111 17:51:16.268150 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.339354 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-djk2k:\"\nI0111 17:51:16.375084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.389380 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dpzqk:\"\nI0111 17:51:16.389400 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-drw45:\"\nI0111 17:51:16.389409 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dsrhl:\"\nI0111 17:51:16.421583 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.438897 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-f8j4x:\"\nI0111 17:51:16.474909 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.489622 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fp8kh:\"\nI0111 17:51:16.489687 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fsxrq:\"\nI0111 17:51:16.489712 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fxq59:\"\nI0111 17:51:16.489729 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-ffpdw:\"\nI0111 17:51:16.489743 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fghsq:\"\nI0111 17:51:16.524460 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.538512 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-g6fcm:\"\nI0111 17:51:16.538534 1 
service.go:382] Removing service port \"svc-latency-980/latency-svc-g7p24:\"\nI0111 17:51:16.538707 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-g98nc:\"\nI0111 17:51:16.538722 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-g9d9x:\"\nI0111 17:51:16.538911 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fxthb:\"\nI0111 17:51:16.574530 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.588390 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gmd2g:\"\nI0111 17:51:16.588410 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gpdhb:\"\nI0111 17:51:16.588419 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gr2kb:\"\nI0111 17:51:16.588427 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gs4w5:\"\nI0111 17:51:16.588435 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gxpfj:\"\nI0111 17:51:16.588446 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hb49s:\"\nI0111 17:51:16.588455 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gdg78:\"\nI0111 17:51:16.622470 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.635033 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hsq6b:\"\nI0111 17:51:16.635052 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-htk7c:\"\nI0111 17:51:16.635060 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hfhdr:\"\nI0111 17:51:16.635067 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hhlk8:\"\nI0111 17:51:16.635075 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hmnht:\"\nI0111 17:51:16.668066 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.681892 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-jd985:\"\nI0111 17:51:16.681911 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-jgnzj:\"\nI0111 17:51:16.681918 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-jjw2m:\"\nI0111 17:51:16.681926 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-jl9vl:\"\nI0111 17:51:16.681934 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hvl5r:\"\nI0111 17:51:16.681943 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-j4chh:\"\nI0111 17:51:16.681951 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-j8b4c:\"\nI0111 17:51:16.715134 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.727345 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-k2p7n:\"\nI0111 17:51:16.727367 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-k4xlt:\"\nI0111 17:51:16.727374 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-k8k5t:\"\nI0111 17:51:16.727382 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kk2cr:\"\nI0111 17:51:16.727390 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kmndd:\"\nI0111 17:51:16.727398 1 service.go:382] Removing service port 
\"svc-latency-980/latency-svc-kqklr:\"\nI0111 17:51:16.768331 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.780107 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kvdgg:\"\nI0111 17:51:16.780128 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kwskk:\"\nI0111 17:51:16.780137 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kwxmp:\"\nI0111 17:51:16.780144 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-l726n:\"\nI0111 17:51:16.780152 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-ld26b:\"\nI0111 17:51:16.780160 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lfwkn:\"\nI0111 17:51:16.780168 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lmgcb:\"\nI0111 17:51:16.813459 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.824952 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lntnk:\"\nI0111 17:51:16.824973 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lnzms:\"\nI0111 17:51:16.825031 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-ltfvt:\"\nI0111 17:51:16.825042 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lv8jh:\"\nI0111 17:51:16.825050 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lxr7q:\"\nI0111 17:51:16.825086 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lmv2c:\"\nI0111 17:51:16.856197 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.867293 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lxs58:\"\nI0111 17:51:16.867312 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-m54tw:\"\nI0111 17:51:16.867321 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-m5bpf:\"\nI0111 17:51:16.867328 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mhjfx:\"\nI0111 17:51:16.899015 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.909953 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mrqf5:\"\nI0111 17:51:16.909972 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mrvgv:\"\nI0111 17:51:16.909981 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mw7zl:\"\nI0111 17:51:16.909989 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mxvqc:\"\nI0111 17:51:16.909997 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-p2tzk:\"\nI0111 17:51:16.910008 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-p4jfx:\"\nI0111 17:51:16.940136 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.950048 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-pcmnn:\"\nI0111 17:51:16.950067 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-pf59q:\"\nI0111 17:51:16.950075 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-phh7t:\"\nI0111 17:51:16.981200 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.990783 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q7znd:\"\nI0111 17:51:16.990969 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q8qd6:\"\nI0111 17:51:16.991117 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-pkbc2:\"\nI0111 17:51:16.991234 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-pnntp:\"\nI0111 17:51:16.991311 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-ptll6:\"\nI0111 17:51:16.991325 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q4xm7:\"\nI0111 17:51:17.021586 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.030021 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q9m8h:\"\nI0111 17:51:17.030040 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q9snn:\"\nI0111 17:51:17.030064 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-qgk5l:\"\nI0111 17:51:17.061036 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.069650 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-qjm42:\"\nI0111 17:51:17.069669 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-qqfg9:\"\nI0111 17:51:17.069677 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-qxbrh:\"\nI0111 17:51:17.098353 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.138612 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-r46hf:\"\nI0111 17:51:17.173503 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.185918 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-r5ddp:\"\nI0111 17:51:17.185937 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-r6d2d:\"\nI0111 17:51:17.185947 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-r7ncf:\"\nI0111 17:51:17.185957 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rc2wb:\"\nI0111 17:51:17.185965 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rcfcz:\"\nI0111 17:51:17.214481 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.238879 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rdn6f:\"\nI0111 17:51:17.270527 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.278691 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rjhwv:\"\nI0111 17:51:17.278714 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rlw8d:\"\nI0111 17:51:17.278723 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rrvc5:\"\nI0111 17:51:17.278730 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rsp5z:\"\nI0111 17:51:17.278749 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-s672b:\"\nI0111 17:51:17.308201 
1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.315600 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-s6dcx:\"\nI0111 17:51:17.315620 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-snbx2:\"\nI0111 17:51:17.315642 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-srgbh:\"\nI0111 17:51:17.315650 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-stsdr:\"\nI0111 17:51:17.315658 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-swtmz:\"\nI0111 17:51:17.344598 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.351712 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-t6pk7:\"\nI0111 17:51:17.351732 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-t84g2:\"\nI0111 17:51:17.351741 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-tctgc:\"\nI0111 17:51:17.351747 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-tjclt:\"\nI0111 17:51:17.380929 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.387659 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-tscvw:\"\nI0111 17:51:17.387677 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-v8l5m:\"\nI0111 17:51:17.387685 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vcsdq:\"\nI0111 17:51:17.387692 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vf4x2:\"\nI0111 17:51:17.422082 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.428119 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vl2b4:\"\nI0111 17:51:17.428139 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vhhhn:\"\nI0111 17:51:17.456301 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.462498 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vngrh:\"\nI0111 17:51:17.462517 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vqbf2:\"\nI0111 17:51:17.462525 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vsnnb:\"\nI0111 17:51:17.462532 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vx7cx:\"\nI0111 17:51:17.490694 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.496837 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-x9g7w:\"\nI0111 17:51:17.496857 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xdmk5:\"\nI0111 17:51:17.496865 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-x2lzk:\"\nI0111 17:51:17.496873 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-x5vmr:\"\nI0111 17:51:17.524987 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.530297 1 service.go:382] Removing service port 
\"svc-latency-980/latency-svc-xnsp9:\"\nI0111 17:51:17.530316 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xnx5b:\"\nI0111 17:51:17.530326 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xq52d:\"\nI0111 17:51:17.559245 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.564712 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xtbm5:\"\nI0111 17:51:17.564732 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xv9mt:\"\nI0111 17:51:17.564758 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-z84p8:\"\nI0111 17:51:17.564793 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-z965h:\"\nI0111 17:51:17.564807 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zdbqj:\"\nI0111 17:51:17.592412 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.597463 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zqv5x:\"\nI0111 17:51:17.597485 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zqw96:\"\nI0111 17:51:17.597494 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zqzcg:\"\nI0111 17:51:17.597501 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zm7xz:\"\nI0111 17:51:17.624919 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.629319 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zwf66:\"\nI0111 17:51:17.629338 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zxcgj:\"\nI0111 17:51:17.629346 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zxd2z:\"\nI0111 17:51:17.655459 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:47.686718 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:52:17.718913 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:52:47.768916 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:53:17.801484 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:53:47.832511 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:54:17.865040 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:54:47.918704 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:17.952854 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:47.985184 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 17:55:57.106342 1 service.go:357] Adding new service port \"services-6103/clusterip-service:\" at 100.110.17.70:80/TCP\nI0111 17:55:57.134412 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:57.165411 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:57.200024 1 service.go:357] Adding new service port \"services-6103/externalsvc:\" at 100.104.253.21:80/TCP\nI0111 17:55:57.226229 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:57.254660 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:59.002182 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:59.036250 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:00.665115 1 service.go:382] Removing service port \"services-6103/clusterip-service:\"\nI0111 17:56:00.691170 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:04.957984 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:04.986778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:13.908447 1 service.go:382] Removing service port \"services-6103/externalsvc:\"\nI0111 17:56:13.942207 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:13.980131 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:14.034922 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:44.065319 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:57:14.096019 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:57:44.126620 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:58:14.163389 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:58:44.194396 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:59:04.547455 1 service.go:357] Adding new service port \"webhook-2629/e2e-test-webhook:\" at 100.105.159.218:8443/TCP\nI0111 17:59:04.575600 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:59:04.605668 1 proxier.go:793] Not 
using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:59:12.037155 1 service.go:382] Removing service port \"webhook-2629/e2e-test-webhook:\"\nI0111 17:59:12.065158 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:59:12.095507 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:59:42.127839 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:00:12.158084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:00:42.189293 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:01:12.226195 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:01:42.256912 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:02:12.287952 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:02:42.318814 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:03:12.350166 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:03:42.381348 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:04:12.413745 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:04:42.443684 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:05:12.474131 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:05:42.504054 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:12.543890 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:13.796513 1 service.go:357] Adding new service port \"webhook-1365/e2e-test-webhook:\" at 100.106.250.157:8443/TCP\nI0111 18:06:13.823757 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:13.853410 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:23.573792 1 service.go:382] Removing service port \"webhook-1365/e2e-test-webhook:\"\nI0111 18:06:23.600730 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:23.630130 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:53.664656 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:23.695917 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:32.121507 1 service.go:357] Adding new service port \"kubectl-5864/rm2:\" at 100.107.125.220:1234/TCP\nI0111 18:07:32.149403 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:32.178910 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:34.911001 1 service.go:357] Adding new service port \"kubectl-5864/rm3:\" at 100.105.162.61:2345/TCP\nI0111 18:07:34.938212 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:34.968134 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:42.465724 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:42.513017 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:42.649406 1 service.go:382] Removing service port \"kubectl-5864/rm2:\"\nI0111 18:07:42.684049 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:42.736502 1 service.go:382] Removing service port \"kubectl-5864/rm3:\"\nI0111 18:07:42.766246 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:08:12.797972 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:08:42.827805 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:12.865739 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:42.903152 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:44.705357 1 service.go:357] Adding new service port \"nsdeletetest-1023/test-service:\" at 100.106.14.217:80/TCP\nI0111 18:09:44.731531 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:44.761665 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:49.863867 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:49.868406 1 service.go:382] Removing service port \"nsdeletetest-1023/test-service:\"\nI0111 18:09:49.905286 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:10:19.936856 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:10:49.968157 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:11:20.017375 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:11:50.050929 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:12:20.081894 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:12:50.113284 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:13:20.143218 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:13:50.173821 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:02.371363 1 service.go:357] Adding new service port \"services-610/externalname-service:http\" at 100.110.14.234:80/TCP\nI0111 18:14:02.396964 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:02.425480 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:04.143073 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:04.172400 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:11.475255 1 service.go:382] Removing service port \"services-610/externalname-service:http\"\nI0111 18:14:11.502999 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:11.532050 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:41.563776 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:15:11.595660 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:15:41.626657 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:16:11.666078 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:16:41.697490 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:01.586068 1 service.go:357] Adding new service port \"webhook-1614/e2e-test-webhook:\" 
at 100.104.110.190:8443/TCP\nI0111 18:17:01.618817 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:01.647803 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:09.771177 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:10.042271 1 service.go:382] Removing service port \"webhook-1614/e2e-test-webhook:\"\nI0111 18:17:10.068383 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:25.512010 1 service.go:357] Adding new service port \"kubectl-3929/redis-slave:\" at 100.110.38.60:6379/TCP\nI0111 18:17:25.538654 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:25.567646 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:26.109886 1 service.go:357] Adding new service port \"kubectl-3929/redis-master:\" at 100.108.160.227:6379/TCP\nI0111 18:17:26.136315 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:26.165744 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:27.058662 1 service.go:357] Adding new service port \"kubectl-3929/frontend:\" at 100.104.86.163:80/TCP\nI0111 18:17:27.090799 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:27.120444 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:31.099360 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:31.487720 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:31.525437 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:41.582255 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:41.613387 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:18:06.569954 1 service.go:382] Removing service port \"kubectl-3929/redis-slave:\"\nI0111 18:18:06.609779 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:18:06.655239 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:18:07.088978 1 service.go:382] Removing service port \"kubectl-3929/redis-master:\"\nI0111 18:18:07.116880 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 18:18:07.147894 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:18:07.606084 1 service.go:382] Removing service port \"kubectl-3929/frontend:\"\nI0111 18:18:07.639761 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:18:07.669762 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:18:37.701426 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:19:07.732914 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:19:37.765669 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:20:07.795974 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:20:37.826050 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:21:07.856998 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:21:37.892509 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:22:07.928716 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:22:37.959435 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:23:08.000209 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:23:38.035865 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:24:08.074201 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:24:38.106256 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:24:48.857431 1 service.go:357] Adding new service port \"kubectl-855/redis-master:\" at 100.104.203.63:6379/TCP\nI0111 18:24:48.882701 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:24:48.910952 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:24:59.757166 1 service.go:382] Removing service port \"kubectl-855/redis-master:\"\nI0111 18:24:59.797284 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:24:59.833489 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version 
of iptables does not support it\nI0111 18:25:29.865407 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:25:59.895363 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:26:29.932607 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:26:59.963445 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:27:30.007197 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:28:00.050787 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:28:30.084401 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:29:00.116690 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:29:30.146471 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:30:00.180002 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:30:30.210067 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:31:00.240252 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:31:30.274454 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:32:00.305209 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:32:30.335877 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:33:00.368105 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:33:30.399814 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:34:00.430856 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:34:30.461949 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:35:00.492673 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:35:30.524116 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:00.555176 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the 
local version of iptables does not support it\nI0111 18:36:05.003344 1 service.go:357] Adding new service port \"services-9188/nodeport-service:\" at 100.104.18.175:80/TCP\nI0111 18:36:05.031863 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:05.032170 1 proxier.go:1519] Opened local port \"nodePort for services-9188/nodeport-service:\" (:31726/tcp)\nI0111 18:36:05.061884 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:05.096568 1 service.go:357] Adding new service port \"services-9188/externalsvc:\" at 100.107.51.74:80/TCP\nI0111 18:36:05.121862 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:05.151400 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:06.894065 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:06.931774 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:08.563834 1 service.go:382] Removing service port \"services-9188/nodeport-service:\"\nI0111 18:36:08.591116 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:13.002568 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:13.033244 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:23.860679 1 service.go:382] Removing service port \"services-9188/externalsvc:\"\nI0111 18:36:23.896265 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:23.926408 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:23.982702 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:36:54.028765 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:37:24.065340 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:37:54.095995 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:38:24.128095 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:38:54.159951 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:39:24.190933 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:39:54.222156 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:40:24.254081 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:40:54.285717 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:41:24.315358 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:41:54.349710 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:42:24.379684 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:42:54.410208 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:43:24.440597 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:43:54.471575 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:44:24.502583 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:44:54.533768 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:45:24.565714 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:45:54.595939 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:46:24.626007 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:46:54.656508 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:47:24.686121 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:47:54.716272 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:48:24.748099 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:48:54.780336 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:49:24.881361 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:49:54.988751 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:50:25.062025 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
18:50:55.093608 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:51:25.122875 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:51:55.155527 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:52:25.190151 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:52:55.231769 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:53:25.278438 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:53:55.308719 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:54:25.339834 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:54:55.372187 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:55:25.402216 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:55:44.961815 1 service.go:357] Adding new service port \"emptydir-wrapper-2029/git-server-svc:http-portal\" at 100.104.113.249:2345/TCP\nI0111 18:55:44.988548 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:55:45.017205 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:56:15.049544 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:56:45.082726 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:57:15.113562 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:57:45.144393 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:58:03.980661 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:58:04.041139 1 service.go:382] Removing service port \"emptydir-wrapper-2029/git-server-svc:http-portal\"\nI0111 18:58:04.068259 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:58:04.097618 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:58:34.129159 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:59:04.160048 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:59:34.189077 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:00:04.218468 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:00:34.248455 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:01:04.278339 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:01:34.314024 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:01:52.551137 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 19:01:52.551604 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0111 19:01:58.343792 1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 36201 (36628)\nI0111 19:02:04.346608 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:02:34.376743 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:03:04.407261 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:03:34.438693 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:04:04.469619 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:04:34.499033 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:05:04.528601 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:05:34.558794 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:06:04.589157 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:06:34.619468 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:07:04.651007 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:07:34.680318 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:08:04.709840 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:08:34.739429 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 19:09:04.770721 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:09:34.801610 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:10:04.832351 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:10:34.870705 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:11:04.902085 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:11:34.931907 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:12:04.962223 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:12:34.992833 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:13:05.023530 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:13:35.054906 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:14:05.086750 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:14:35.116351 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:15:05.146601 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:15:35.178190 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:16:05.223384 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:16:35.271231 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:17:05.302581 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:17:35.332724 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:18:05.363867 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:18:35.394613 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:19:05.423580 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:19:35.453200 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 19:20:05.483471 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:20:35.513747 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:21:05.543868 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:21:35.582155 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:22:05.614397 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:22:35.654525 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:23:05.685825 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:23:35.715099 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:24:05.744417 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:24:35.775086 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:25:05.805669 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:25:35.835708 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:26:05.866728 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:26:35.897369 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:27:05.928269 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:27:35.958224 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:28:05.987473 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:28:36.032233 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:29:06.063494 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:29:25.244909 1 service.go:357] Adding new service port \"nsdeletetest-1069/test-service:\" at 100.110.201.181:80/TCP\nI0111 19:29:25.277314 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:29:25.309482 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
it\nI0111 19:29:30.485805 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:29:30.539183 1 service.go:382] Removing service port \"nsdeletetest-1069/test-service:\"\nI0111 19:29:30.564968 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:30:00.595734 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:30:30.626312 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:31:00.657226 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:31:30.689073 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:32:00.718638 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:32:30.753549 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:33:00.788386 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:33:30.821190 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:00.852415 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:30.881818 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:34.021042 1 service.go:357] Adding new service port \"services-8498/externalname-service:http\" at 100.111.232.102:80/TCP\nI0111 19:34:34.094699 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:34.148397 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:35.306699 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:35.931708 1 service.go:357] Adding new service port \"provisioning-888/csi-hostpath-attacher:dummy\" at 100.107.151.111:12345/TCP\nI0111 19:34:35.968006 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:36.021284 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:36.204554 1 service.go:357] Adding new service port \"provisioning-888/csi-hostpathplugin:dummy\" at 100.110.48.29:12345/TCP\nI0111 19:34:36.242063 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:36.350491 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the 
local version of iptables does not support it\nI0111 19:34:36.387384 1 service.go:357] Adding new service port \"provisioning-888/csi-hostpath-provisioner:dummy\" at 100.111.110.38:12345/TCP\nI0111 19:34:36.431453 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:36.474241 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:36.569640 1 service.go:357] Adding new service port \"provisioning-888/csi-hostpath-resizer:dummy\" at 100.108.56.238:12345/TCP\nI0111 19:34:36.629920 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:36.686750 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:36.753161 1 service.go:357] Adding new service port \"provisioning-888/csi-snapshotter:dummy\" at 100.106.116.100:12345/TCP\nI0111 19:34:36.801795 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:36.857143 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:37.486403 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:41.149684 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:43.195313 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:45.286168 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:46.321688 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:46.339722 1 service.go:382] Removing service port \"services-8498/externalname-service:http\"\nI0111 19:34:46.380355 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:46.427698 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:48.370475 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:57.441759 1 service.go:357] Adding new service port \"services-6365/hairpin-test:\" at 100.104.2.229:8080/TCP\nI0111 19:34:57.500370 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:57.557537 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:59.418464 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:08.975567 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 19:35:09.069959 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:09.643958 1 service.go:382] Removing service port \"services-6365/hairpin-test:\"\nI0111 19:35:09.675418 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:10.774939 1 service.go:382] Removing service port \"provisioning-888/csi-hostpath-attacher:dummy\"\nI0111 19:35:10.814406 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:10.848831 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:11.051260 1 service.go:382] Removing service port \"provisioning-888/csi-hostpathplugin:dummy\"\nI0111 19:35:11.093452 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:11.136374 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:11.235944 1 service.go:382] Removing service port \"provisioning-888/csi-hostpath-provisioner:dummy\"\nI0111 19:35:11.316402 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:11.392437 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:11.420621 1 service.go:382] Removing service port \"provisioning-888/csi-hostpath-resizer:dummy\"\nI0111 19:35:11.462365 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:11.512523 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:11.607079 1 service.go:382] Removing service port \"provisioning-888/csi-snapshotter:dummy\"\nI0111 19:35:11.662057 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:11.719804 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:40.497010 1 service.go:357] Adding new service port \"webhook-9730/e2e-test-webhook:\" at 100.104.55.61:8443/TCP\nI0111 19:35:40.524417 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:40.555179 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:59.272930 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:59.637724 1 service.go:382] Removing service port \"webhook-9730/e2e-test-webhook:\"\nI0111 19:35:59.666958 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:35:59.705750 
1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:29.737410 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:54.261471 1 service.go:357] Adding new service port \"provisioning-9667/csi-hostpath-attacher:dummy\" at 100.108.71.0:12345/TCP\nI0111 19:36:54.288266 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:54.321958 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:54.533823 1 service.go:357] Adding new service port \"provisioning-9667/csi-hostpathplugin:dummy\" at 100.106.76.172:12345/TCP\nI0111 19:36:54.566499 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:54.598983 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:54.719314 1 service.go:357] Adding new service port \"provisioning-9667/csi-hostpath-provisioner:dummy\" at 100.107.196.36:12345/TCP\nI0111 19:36:54.763573 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:54.808933 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:54.904378 1 service.go:357] Adding new service port \"provisioning-9667/csi-hostpath-resizer:dummy\" at 100.110.45.244:12345/TCP\nI0111 19:36:54.939166 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:55.093191 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:55.102984 1 service.go:357] Adding new service port \"provisioning-9667/csi-snapshotter:dummy\" at 100.109.116.128:12345/TCP\nI0111 19:36:55.154269 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:55.301841 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:56.267744 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:57.320787 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:57.362526 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:36:57.393168 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:27.428244 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:29.805643 1 service.go:357] Adding new service port \"provisioning-3332/csi-hostpath-attacher:dummy\" 
at 100.108.30.170:12345/TCP\nI0111 19:37:29.835101 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:29.869073 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:30.080374 1 service.go:357] Adding new service port \"provisioning-3332/csi-hostpathplugin:dummy\" at 100.108.92.174:12345/TCP\nI0111 19:37:30.108955 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:30.168197 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:30.263921 1 service.go:357] Adding new service port \"provisioning-3332/csi-hostpath-provisioner:dummy\" at 100.104.144.123:12345/TCP\nI0111 19:37:30.302433 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:30.378042 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:30.447773 1 service.go:357] Adding new service port \"provisioning-3332/csi-hostpath-resizer:dummy\" at 100.110.205.43:12345/TCP\nI0111 19:37:30.495805 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:30.584615 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:30.632951 1 service.go:357] Adding new service port \"provisioning-3332/csi-snapshotter:dummy\" at 100.110.240.69:12345/TCP\nI0111 19:37:30.669992 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:30.717607 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:32.300492 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:32.465329 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:32.500548 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:33.570573 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:41.291926 1 service.go:357] Adding new service port \"aggregator-7230/sample-api:\" at 100.108.11.104:7443/TCP\nI0111 19:37:41.322574 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:41.357118 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:43.087099 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:46.146764 1 proxier.go:793] Not using `--random-fully` in the 
MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:46.287846 1 service.go:382] Removing service port \"aggregator-7230/sample-api:\"\nI0111 19:37:46.317962 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:37:46.352233 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:16.390988 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:46.430704 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:53.499094 1 service.go:357] Adding new service port \"ephemeral-1641/csi-hostpath-attacher:dummy\" at 100.111.99.89:12345/TCP\nI0111 19:38:53.537523 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:53.578409 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:53.775056 1 service.go:357] Adding new service port \"ephemeral-1641/csi-hostpathplugin:dummy\" at 100.106.163.81:12345/TCP\nI0111 19:38:53.813277 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:53.864401 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:53.958174 1 service.go:357] Adding new service port \"ephemeral-1641/csi-hostpath-provisioner:dummy\" at 100.110.184.75:12345/TCP\nI0111 19:38:54.006245 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:54.058090 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:54.145354 1 service.go:357] Adding new service port \"ephemeral-1641/csi-hostpath-resizer:dummy\" at 100.107.122.84:12345/TCP\nI0111 19:38:54.182553 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:54.272462 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:54.329299 1 service.go:357] Adding new service port \"ephemeral-1641/csi-snapshotter:dummy\" at 100.105.43.140:12345/TCP\nI0111 19:38:54.368665 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:54.402559 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:56.931926 1 service.go:382] Removing service port \"provisioning-9667/csi-hostpath-attacher:dummy\"\nI0111 19:38:56.980119 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:57.017862 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 19:38:57.207220 1 service.go:382] Removing service port \"provisioning-9667/csi-hostpathplugin:dummy\"\nI0111 19:38:57.240121 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:57.286826 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:57.394051 1 service.go:382] Removing service port \"provisioning-9667/csi-hostpath-provisioner:dummy\"\nI0111 19:38:57.424545 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:57.466453 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:57.577979 1 service.go:382] Removing service port \"provisioning-9667/csi-hostpath-resizer:dummy\"\nI0111 19:38:57.608461 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:57.647706 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:57.765232 1 service.go:382] Removing service port \"provisioning-9667/csi-snapshotter:dummy\"\nI0111 19:38:57.805571 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:38:57.845477 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:00.004863 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:00.624874 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:01.425336 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:03.496246 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:04.485320 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:06.527905 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:36.396520 1 service.go:382] Removing service port \"provisioning-3332/csi-hostpath-attacher:dummy\"\nI0111 19:39:36.441823 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:36.483340 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:36.675773 1 service.go:382] Removing service port \"provisioning-3332/csi-hostpathplugin:dummy\"\nI0111 19:39:36.752071 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:36.808143 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables 
because the local version of iptables does not support it\nI0111 19:39:36.863852 1 service.go:382] Removing service port \"provisioning-3332/csi-hostpath-provisioner:dummy\"\nI0111 19:39:36.946080 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:37.016430 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:37.052698 1 service.go:382] Removing service port \"provisioning-3332/csi-hostpath-resizer:dummy\"\nI0111 19:39:37.096362 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:37.150580 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:37.239814 1 service.go:382] Removing service port \"provisioning-3332/csi-snapshotter:dummy\"\nI0111 19:39:37.289343 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 19:39:37.296519 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 163 failed\n)\nI0111 19:39:37.296596 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 19:39:37.337483 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:50.462478 1 service.go:357] Adding new service port \"crd-webhook-4150/e2e-test-crd-conversion-webhook:\" at 100.108.53.198:9443/TCP\nI0111 19:39:50.512455 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:50.595751 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:58.360883 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:39:58.365701 1 service.go:382] Removing service port \"crd-webhook-4150/e2e-test-crd-conversion-webhook:\"\nI0111 19:39:58.391458 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:18.001790 1 service.go:357] Adding new service port \"volumeio-3164/csi-hostpath-attacher:dummy\" at 100.106.26.141:12345/TCP\nI0111 19:40:18.034809 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:18.065622 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:18.275775 1 service.go:357] Adding new service port \"volumeio-3164/csi-hostpathplugin:dummy\" at 100.105.6.97:12345/TCP\nI0111 19:40:18.318891 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:18.348982 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:18.463330 1 service.go:357] Adding new service port \"volumeio-3164/csi-hostpath-provisioner:dummy\" at 
100.111.81.6:12345/TCP\nI0111 19:40:18.509530 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:18.676082 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:18.683745 1 service.go:357] Adding new service port \"volumeio-3164/csi-hostpath-resizer:dummy\" at 100.104.246.169:12345/TCP\nI0111 19:40:18.728405 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:18.777385 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:18.830998 1 service.go:357] Adding new service port \"volumeio-3164/csi-snapshotter:dummy\" at 100.107.227.112:12345/TCP\nI0111 19:40:18.905162 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:19.089325 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:19.788208 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:20.976053 1 service.go:357] Adding new service port \"kubectl-16/redis-master:\" at 100.111.154.89:6379/TCP\nI0111 19:40:21.025563 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:21.073970 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:21.246324 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:21.318748 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:22.448727 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:24.972685 1 service.go:382] Removing service port \"ephemeral-1641/csi-hostpath-attacher:dummy\"\nI0111 19:40:25.002893 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:25.037057 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:25.252089 1 service.go:382] Removing service port \"ephemeral-1641/csi-hostpathplugin:dummy\"\nI0111 19:40:25.286347 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:25.318794 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:25.441454 1 service.go:382] Removing service port \"ephemeral-1641/csi-hostpath-provisioner:dummy\"\nI0111 19:40:25.479551 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:25.550202 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:25.629088 1 service.go:382] Removing service port \"ephemeral-1641/csi-hostpath-resizer:dummy\"\nI0111 19:40:25.678441 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:25.732771 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:25.814817 1 service.go:382] Removing service port \"ephemeral-1641/csi-snapshotter:dummy\"\nI0111 19:40:25.854236 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:25.894419 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:28.348908 1 service.go:382] Removing service port \"kubectl-16/redis-master:\"\nI0111 19:40:28.379131 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:28.412430 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:40:58.444670 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:10.389304 1 service.go:382] Removing service port \"volumeio-3164/csi-hostpath-attacher:dummy\"\nI0111 19:41:10.419186 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:10.450112 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:10.667223 1 service.go:382] Removing service port \"volumeio-3164/csi-hostpathplugin:dummy\"\nI0111 19:41:10.719070 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:10.762431 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:10.854785 1 service.go:382] Removing service port \"volumeio-3164/csi-hostpath-provisioner:dummy\"\nI0111 19:41:10.893365 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:10.942383 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:11.046057 1 service.go:382] Removing service port \"volumeio-3164/csi-hostpath-resizer:dummy\"\nI0111 19:41:11.098255 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:11.153163 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:11.229282 1 service.go:382] Removing service port \"volumeio-3164/csi-snapshotter:dummy\"\nI0111 19:41:11.273713 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
19:41:11.318931 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:27.915250 1 service.go:357] Adding new service port \"services-7435/nodeport-update-service:\" at 100.105.182.168:80/TCP\nI0111 19:41:27.941844 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:27.971716 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:28.099196 1 service.go:357] Adding new service port \"services-7435/nodeport-update-service:tcp-port\" at 100.105.182.168:80/TCP\nI0111 19:41:28.099221 1 service.go:357] Adding new service port \"services-7435/nodeport-update-service:udp-port\" at 100.105.182.168:80/UDP\nI0111 19:41:28.099231 1 service.go:382] Removing service port \"services-7435/nodeport-update-service:\"\nI0111 19:41:28.126073 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:28.126300 1 proxier.go:1519] Opened local port \"nodePort for services-7435/nodeport-update-service:tcp-port\" (:30723/tcp)\nI0111 19:41:28.126434 1 proxier.go:1519] Opened local port \"nodePort for services-7435/nodeport-update-service:udp-port\" (:30691/udp)\nI0111 19:41:28.190195 1 service.go:382] Removing service port \"services-7435/nodeport-update-service:tcp-port\"\nI0111 19:41:28.190351 1 service.go:382] Removing service port \"services-7435/nodeport-update-service:udp-port\"\nI0111 19:41:28.215736 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:28.260925 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:50.340909 1 service.go:357] Adding new service port \"dns-5564/dns-test-service-3:http\" at 100.107.56.47:80/TCP\nI0111 19:41:50.389426 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:55.253712 1 service.go:382] Removing service port \"dns-5564/dns-test-service-3:http\"\nI0111 19:41:55.294473 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:25.328677 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:36.917196 1 service.go:357] Adding new service port \"volume-expand-8983/csi-hostpath-attacher:dummy\" at 100.106.199.203:12345/TCP\nI0111 19:42:36.948921 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:36.992644 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.189965 1 service.go:357] Adding new service port \"volume-expand-8983/csi-hostpathplugin:dummy\" at 100.111.155.218:12345/TCP\nI0111 19:42:37.219895 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.258796 1 proxier.go:793] Not using `--random-fully` in the 
MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.375389 1 service.go:357] Adding new service port \"volume-expand-8983/csi-hostpath-provisioner:dummy\" at 100.106.228.17:12345/TCP\nI0111 19:42:37.414606 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.459841 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.557828 1 service.go:357] Adding new service port \"volume-expand-8983/csi-hostpath-resizer:dummy\" at 100.107.187.84:12345/TCP\nI0111 19:42:37.597312 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.645269 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.740498 1 service.go:357] Adding new service port \"volume-expand-8983/csi-snapshotter:dummy\" at 100.111.117.210:12345/TCP\nI0111 19:42:37.802906 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.873679 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:38.530722 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:39.621785 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:39.667222 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:39.713838 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:09.748698 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.531453 1 service.go:357] Adding new service port \"provisioning-6240/csi-hostpath-attacher:dummy\" at 100.106.228.19:12345/TCP\nI0111 19:43:35.569844 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.618537 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.864209 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.872009 1 service.go:357] Adding new service port \"provisioning-6240/csi-hostpathplugin:dummy\" at 100.111.245.108:12345/TCP\nI0111 19:43:35.909538 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.990710 1 service.go:357] Adding new service port \"provisioning-6240/csi-hostpath-provisioner:dummy\" at 100.105.123.238:12345/TCP\nI0111 19:43:36.031923 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
19:43:36.073484 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:36.174790 1 service.go:357] Adding new service port \"provisioning-6240/csi-hostpath-resizer:dummy\" at 100.109.16.216:12345/TCP\nI0111 19:43:36.203962 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:36.263884 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:36.358491 1 service.go:357] Adding new service port \"provisioning-6240/csi-snapshotter:dummy\" at 100.111.11.210:12345/TCP\nI0111 19:43:36.403716 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:36.466089 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:37.902945 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:37.943351 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:37.976137 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:39.065431 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:39.165061 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:54.542523 1 service.go:382] Removing service port \"provisioning-6240/csi-hostpath-attacher:dummy\"\nI0111 19:43:54.572399 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:54.606307 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:54.822921 1 service.go:382] Removing service port \"provisioning-6240/csi-hostpathplugin:dummy\"\nI0111 19:43:54.854199 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:54.897570 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.010655 1 service.go:382] Removing service port \"provisioning-6240/csi-hostpath-provisioner:dummy\"\nI0111 19:43:55.040577 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.075080 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.197008 1 service.go:382] Removing service port \"provisioning-6240/csi-hostpath-resizer:dummy\"\nI0111 19:43:55.239428 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.296045 1 proxier.go:793] Not using `--random-fully` in the 
MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.384198 1 service.go:382] Removing service port \"provisioning-6240/csi-snapshotter:dummy\"\nI0111 19:43:55.422835 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.466753 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:44:25.503170 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:44:55.536501 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:45:25.569334 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:45:55.602486 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:10.901326 1 service.go:357] Adding new service port \"proxy-5995/proxy-service-hnjh4:portname1\" at 100.109.130.35:80/TCP\nI0111 19:46:10.901352 1 service.go:357] Adding new service port \"proxy-5995/proxy-service-hnjh4:portname2\" at 100.109.130.35:81/TCP\nI0111 19:46:10.901365 1 service.go:357] Adding new service port \"proxy-5995/proxy-service-hnjh4:tlsportname1\" at 100.109.130.35:443/TCP\nI0111 19:46:10.901376 1 service.go:357] Adding new service port \"proxy-5995/proxy-service-hnjh4:tlsportname2\" at 100.109.130.35:444/TCP\nI0111 19:46:10.930350 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:10.963122 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:20.138076 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:23.681495 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:34.204458 1 service.go:357] Adding new service port \"services-1603/affinity-clusterip:\" at 100.105.158.181:80/TCP\nI0111 19:46:34.234414 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:34.270776 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:36.564839 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:36.668665 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:36.812927 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:39.136918 1 service.go:382] Removing service port \"proxy-5995/proxy-service-hnjh4:tlsportname1\"\nI0111 19:46:39.136947 1 service.go:382] Removing service port \"proxy-5995/proxy-service-hnjh4:tlsportname2\"\nI0111 19:46:39.136955 1 
service.go:382] Removing service port \"proxy-5995/proxy-service-hnjh4:portname1\"\nI0111 19:46:39.137009 1 service.go:382] Removing service port \"proxy-5995/proxy-service-hnjh4:portname2\"\nI0111 19:46:39.165874 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:39.197494 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:52.751536 1 service.go:357] Adding new service port \"services-9361/clusterip-service:\" at 100.107.168.179:80/TCP\nI0111 19:46:52.780393 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:52.820564 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:52.845541 1 service.go:357] Adding new service port \"services-9361/externalsvc:\" at 100.109.178.43:80/TCP\nI0111 19:46:52.888368 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:52.933018 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:54.770146 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:55.457203 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:56.310410 1 service.go:382] Removing service port \"services-9361/clusterip-service:\"\nI0111 19:46:56.339706 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:00.363589 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:13.911549 1 service.go:382] Removing service port \"services-9361/externalsvc:\"\nI0111 19:47:13.939957 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:13.972676 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:14.041335 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:14.873197 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:14.905967 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:23.925156 1 service.go:382] Removing service port \"services-1603/affinity-clusterip:\"\nI0111 19:47:23.954407 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:23.986705 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:31.696429 1 service.go:357] Adding new service port 
\"resourcequota-9564/test-service:\" at 100.104.148.118:80/TCP\nI0111 19:47:31.725107 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:33.877855 1 service.go:382] Removing service port \"resourcequota-9564/test-service:\"\nI0111 19:47:33.920511 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:36.670766 1 service.go:357] Adding new service port \"services-9378/nodeport-service:\" at 100.108.70.19:80/TCP\nI0111 19:47:36.700395 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:36.700588 1 proxier.go:1519] Opened local port \"nodePort for services-9378/nodeport-service:\" (:31216/tcp)\nI0111 19:47:36.733567 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:36.764758 1 service.go:357] Adding new service port \"services-9378/externalsvc:\" at 100.104.132.219:80/TCP\nI0111 19:47:36.793881 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:36.868128 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:38.163681 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:38.198127 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:40.230172 1 service.go:382] Removing service port \"services-9378/nodeport-service:\"\nI0111 19:47:40.260608 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:44.557538 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:44.591817 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.115258 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.166669 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.224727 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.594749 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 19:47:46.601442 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 134 failed\n)\nI0111 19:47:46.601521 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 19:47:46.938509 1 service.go:382] Removing service port \"volume-expand-8983/csi-hostpath-attacher:dummy\"\nI0111 19:47:46.979176 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not 
support it\nI0111 19:47:47.028278 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.037251 1 service.go:382] Removing service port \"volume-expand-8983/csi-hostpath-provisioner:dummy\"\nI0111 19:47:47.076434 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.087751 1 service.go:382] Removing service port \"volume-expand-8983/csi-hostpath-resizer:dummy\"\nI0111 19:47:47.137532 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.144687 1 service.go:382] Removing service port \"volume-expand-8983/csi-hostpathplugin:dummy\"\nI0111 19:47:47.183680 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.236853 1 service.go:382] Removing service port \"volume-expand-8983/csi-snapshotter:dummy\"\nI0111 19:47:47.292722 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.359097 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:53.852214 1 service.go:382] Removing service port \"services-9378/externalsvc:\"\nI0111 19:47:53.881071 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:53.912685 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:53.978799 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:07.391937 1 service.go:357] Adding new service port \"kubectl-8526/redis-slave:\" at 100.105.252.234:6379/TCP\nI0111 19:48:07.418757 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:07.462003 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:08.353479 1 service.go:357] Adding new service port \"kubectl-8526/redis-master:\" at 100.107.139.169:6379/TCP\nI0111 19:48:08.380474 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:08.411347 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:08.964841 1 service.go:357] Adding new service port \"kubectl-8526/frontend:\" at 100.105.185.251:80/TCP\nI0111 19:48:08.999267 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:09.028988 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:12.618927 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:12.716593 1 proxier.go:793] 
Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:12.773173 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:12.826155 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:12.961861 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:13.910867 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:15.864073 1 service.go:357] Adding new service port \"provisioning-638/csi-hostpath-attacher:dummy\" at 100.107.106.222:12345/TCP\nI0111 19:48:15.892014 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:15.923415 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.136407 1 service.go:357] Adding new service port \"provisioning-638/csi-hostpathplugin:dummy\" at 100.106.73.214:12345/TCP\nI0111 19:48:16.165713 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.197372 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.319833 1 service.go:357] Adding new service port \"provisioning-638/csi-hostpath-provisioner:dummy\" at 100.111.163.16:12345/TCP\nI0111 19:48:16.361385 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.414706 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.503145 1 service.go:357] Adding new service port \"provisioning-638/csi-hostpath-resizer:dummy\" at 100.110.44.220:12345/TCP\nI0111 19:48:16.556225 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.685745 1 service.go:357] Adding new service port \"provisioning-638/csi-snapshotter:dummy\" at 100.105.65.0:12345/TCP\nI0111 19:48:16.729524 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.878778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.941839 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.383557 1 service.go:357] Adding new service port \"volume-expand-7991/csi-hostpath-attacher:dummy\" at 100.111.160.134:12345/TCP\nI0111 19:48:17.424061 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.473188 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not 
support it\nI0111 19:48:17.587653 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.656678 1 service.go:357] Adding new service port \"volume-expand-7991/csi-hostpathplugin:dummy\" at 100.110.232.122:12345/TCP\nI0111 19:48:17.714894 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.801803 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.839825 1 service.go:357] Adding new service port \"volume-expand-7991/csi-hostpath-provisioner:dummy\" at 100.105.131.50:12345/TCP\nI0111 19:48:17.929182 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.996874 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.024040 1 service.go:357] Adding new service port \"volume-expand-7991/csi-hostpath-resizer:dummy\" at 100.105.90.149:12345/TCP\nI0111 19:48:18.068897 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.143788 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.205785 1 service.go:357] Adding new service port \"volume-expand-7991/csi-snapshotter:dummy\" at 100.109.232.139:12345/TCP\nI0111 19:48:18.251714 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.311534 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.368477 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.507070 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:19.484606 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:19.529263 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:19.573071 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:20.503085 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:20.541954 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:20.575474 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:28.204198 1 service.go:382] Removing service port \"kubectl-8526/redis-slave:\"\nI0111 19:48:28.234011 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 19:48:28.267490 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:28.722917 1 service.go:382] Removing service port \"kubectl-8526/redis-master:\"\nI0111 19:48:28.792232 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:28.842908 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:29.240254 1 service.go:382] Removing service port \"kubectl-8526/frontend:\"\nI0111 19:48:29.270082 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:29.304063 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:32.755821 1 service.go:382] Removing service port \"provisioning-638/csi-hostpath-attacher:dummy\"\nI0111 19:48:32.813717 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:32.845048 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.033511 1 service.go:382] Removing service port \"provisioning-638/csi-hostpathplugin:dummy\"\nI0111 19:48:33.080555 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.128883 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.219955 1 service.go:382] Removing service port \"provisioning-638/csi-hostpath-provisioner:dummy\"\nI0111 19:48:33.266976 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.341699 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.405845 1 service.go:382] Removing service port \"provisioning-638/csi-hostpath-resizer:dummy\"\nI0111 19:48:33.449483 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.517544 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.594012 1 service.go:382] Removing service port \"provisioning-638/csi-snapshotter:dummy\"\nI0111 19:48:33.702976 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.759055 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:53.635846 1 service.go:382] Removing service port \"volume-expand-7991/csi-hostpath-attacher:dummy\"\nI0111 19:48:53.667250 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:53.699093 1 proxier.go:793] Not using `--random-fully` in 
the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:53.912534 1 service.go:382] Removing service port \"volume-expand-7991/csi-hostpathplugin:dummy\"\nI0111 19:48:53.943976 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:53.978439 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.097134 1 service.go:382] Removing service port \"volume-expand-7991/csi-hostpath-provisioner:dummy\"\nI0111 19:48:54.131276 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.162834 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.282184 1 service.go:382] Removing service port \"volume-expand-7991/csi-hostpath-resizer:dummy\"\nI0111 19:48:54.308153 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.337301 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.466794 1 service.go:382] Removing service port \"volume-expand-7991/csi-snapshotter:dummy\"\nI0111 19:48:54.494117 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.522971 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:22.532548 1 service.go:357] Adding new service port \"provisioning-4625/csi-hostpath-attacher:dummy\" at 100.107.20.95:12345/TCP\nI0111 19:49:22.597975 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:22.659344 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:22.802108 1 service.go:357] Adding new service port \"provisioning-4625/csi-hostpathplugin:dummy\" at 100.110.254.231:12345/TCP\nI0111 19:49:22.843953 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:22.896538 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:23.050294 1 service.go:357] Adding new service port \"provisioning-4625/csi-hostpath-provisioner:dummy\" at 100.106.61.221:12345/TCP\nI0111 19:49:23.091404 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:23.150892 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:23.167665 1 service.go:357] Adding new service port \"provisioning-4625/csi-hostpath-resizer:dummy\" at 100.111.197.158:12345/TCP\nI0111 19:49:23.209979 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
19:49:23.280870 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:23.353381 1 service.go:357] Adding new service port \"provisioning-4625/csi-snapshotter:dummy\" at 100.111.200.248:12345/TCP\nI0111 19:49:23.423763 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:24.278890 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:24.841437 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:26.195053 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:26.231329 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:26.276358 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:44.817800 1 service.go:357] Adding new service port \"webhook-5741/e2e-test-webhook:\" at 100.109.54.96:8443/TCP\nI0111 19:49:44.888578 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:44.945028 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:47.692247 1 service.go:382] Removing service port \"provisioning-4625/csi-hostpath-attacher:dummy\"\nI0111 19:49:47.725448 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:47.775907 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:47.972190 1 service.go:382] Removing service port \"provisioning-4625/csi-hostpathplugin:dummy\"\nI0111 19:49:48.102873 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.162034 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.175560 1 service.go:382] Removing service port \"provisioning-4625/csi-hostpath-provisioner:dummy\"\nI0111 19:49:48.212879 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.345475 1 service.go:382] Removing service port \"provisioning-4625/csi-hostpath-resizer:dummy\"\nI0111 19:49:48.395613 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.453551 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.532729 1 service.go:382] Removing service port \"provisioning-4625/csi-snapshotter:dummy\"\nI0111 19:49:48.573844 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does 
not support it\nI0111 19:49:48.623850 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:52.441671 1 service.go:382] Removing service port \"webhook-5741/e2e-test-webhook:\"\nI0111 19:49:52.470706 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:52.501064 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.202726 1 service.go:357] Adding new service port \"volume-2441/csi-hostpath-attacher:dummy\" at 100.109.220.55:12345/TCP\nI0111 19:50:02.238553 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.284885 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.475060 1 service.go:357] Adding new service port \"volume-2441/csi-hostpathplugin:dummy\" at 100.107.75.156:12345/TCP\nI0111 19:50:02.509283 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.562286 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.658610 1 service.go:357] Adding new service port \"volume-2441/csi-hostpath-provisioner:dummy\" at 100.107.43.184:12345/TCP\nI0111 19:50:02.698430 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.797969 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.842054 1 service.go:357] Adding new service port \"volume-2441/csi-hostpath-resizer:dummy\" at 100.104.164.149:12345/TCP\nI0111 19:50:02.897718 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.944112 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:03.024441 1 service.go:357] Adding new service port \"volume-2441/csi-snapshotter:dummy\" at 100.106.125.148:12345/TCP\nI0111 19:50:03.074601 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:03.136095 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:04.543264 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:04.752849 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:05.754742 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:05.786750 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
19:50:30.831256 1 service.go:357] Adding new service port \"volumemode-2792/csi-hostpath-attacher:dummy\" at 100.110.54.141:12345/TCP\nI0111 19:50:30.872600 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:30.912262 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.105393 1 service.go:357] Adding new service port \"volumemode-2792/csi-hostpathplugin:dummy\" at 100.110.67.54:12345/TCP\nI0111 19:50:31.145040 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.200526 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.289342 1 service.go:357] Adding new service port \"volumemode-2792/csi-hostpath-provisioner:dummy\" at 100.109.204.185:12345/TCP\nI0111 19:50:31.345034 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.409424 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.473685 1 service.go:357] Adding new service port \"volumemode-2792/csi-hostpath-resizer:dummy\" at 100.111.197.170:12345/TCP\nI0111 19:50:31.511689 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.557961 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.658397 1 service.go:357] Adding new service port \"volumemode-2792/csi-snapshotter:dummy\" at 100.111.104.208:12345/TCP\nI0111 19:50:31.707485 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.787864 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:34.388045 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:34.430061 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.188513 1 service.go:382] Removing service port \"volume-2441/csi-hostpath-attacher:dummy\"\nI0111 19:51:02.221256 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.258790 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.467299 1 service.go:382] Removing service port \"volume-2441/csi-hostpathplugin:dummy\"\nI0111 19:51:02.501379 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.537552 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.652887 1 service.go:382] Removing service 
port \"volume-2441/csi-hostpath-provisioner:dummy\"\nI0111 19:51:02.710516 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.748757 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.848941 1 service.go:382] Removing service port \"volume-2441/csi-hostpath-resizer:dummy\"\nI0111 19:51:02.888704 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.925852 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:03.038859 1 service.go:382] Removing service port \"volume-2441/csi-snapshotter:dummy\"\nI0111 19:51:03.101237 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:03.167654 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:05.944315 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.188506 1 service.go:357] Adding new service port \"ephemeral-9708/csi-hostpath-attacher:dummy\" at 100.108.86.106:12345/TCP\nI0111 19:51:09.219701 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.257410 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.462351 1 service.go:357] Adding new service port \"ephemeral-9708/csi-hostpathplugin:dummy\" at 100.106.68.123:12345/TCP\nI0111 19:51:09.491593 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.524041 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.645701 1 service.go:357] Adding new service port \"ephemeral-9708/csi-hostpath-provisioner:dummy\" at 100.104.179.221:12345/TCP\nI0111 19:51:09.684290 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.743973 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.829301 1 service.go:357] Adding new service port \"ephemeral-9708/csi-hostpath-resizer:dummy\" at 100.110.139.128:12345/TCP\nI0111 19:51:09.879303 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.933197 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:10.013696 1 service.go:357] Adding new service port \"ephemeral-9708/csi-snapshotter:dummy\" at 100.108.245.40:12345/TCP\nI0111 19:51:10.056421 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
19:51:10.108020 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:11.318084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:11.497580 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:12.470991 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:12.520924 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:21.733083 1 service.go:382] Removing service port \"volumemode-2792/csi-hostpath-attacher:dummy\"\nI0111 19:51:21.776555 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:21.812486 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.014671 1 service.go:382] Removing service port \"volumemode-2792/csi-hostpathplugin:dummy\"\nI0111 19:51:22.058609 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.097483 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.200869 1 service.go:382] Removing service port \"volumemode-2792/csi-hostpath-provisioner:dummy\"\nI0111 19:51:22.233873 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.282835 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.387504 1 service.go:382] Removing service port \"volumemode-2792/csi-hostpath-resizer:dummy\"\nI0111 19:51:22.490395 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.569404 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.579922 1 service.go:382] Removing service port \"volumemode-2792/csi-snapshotter:dummy\"\nI0111 19:51:22.629006 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:24.981977 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:55.030718 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.033999 1 service.go:382] Removing service port \"ephemeral-9708/csi-hostpath-attacher:dummy\"\nI0111 19:52:03.062972 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.096822 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does 
not support it\nI0111 19:52:03.313194 1 service.go:382] Removing service port \"ephemeral-9708/csi-hostpathplugin:dummy\"\nI0111 19:52:03.389229 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.454905 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.500689 1 service.go:382] Removing service port \"ephemeral-9708/csi-hostpath-provisioner:dummy\"\nI0111 19:52:03.574851 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.694411 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.701876 1 service.go:382] Removing service port \"ephemeral-9708/csi-hostpath-resizer:dummy\"\nI0111 19:52:03.747489 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.875454 1 service.go:382] Removing service port \"ephemeral-9708/csi-snapshotter:dummy\"\nI0111 19:52:03.920867 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.990456 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:23.304961 1 service.go:357] Adding new service port \"services-4413/affinity-clusterip-transition:\" at 100.110.87.16:80/TCP\nI0111 19:52:23.332375 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:23.363124 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:24.576703 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:24.666585 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:24.703969 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:32.685445 1 service.go:359] Updating existing service port \"services-4413/affinity-clusterip-transition:\" at 100.110.87.16:80/TCP\nI0111 19:52:32.711960 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:33.346265 1 service.go:357] Adding new service port \"webhook-4534/e2e-test-webhook:\" at 100.105.99.124:8443/TCP\nI0111 19:52:33.381692 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:33.423076 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:37.571622 1 service.go:359] Updating existing service port \"services-4413/affinity-clusterip-transition:\" at 100.110.87.16:80/TCP\nI0111 19:52:37.598667 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 19:52:40.462980 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:40.563620 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:41.536574 1 service.go:382] Removing service port \"webhook-4534/e2e-test-webhook:\"\nI0111 19:52:41.563409 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:08.587028 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:08.617122 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:20.478292 1 service.go:357] Adding new service port \"webhook-9767/e2e-test-webhook:\" at 100.111.238.71:8443/TCP\nI0111 19:53:20.512361 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:20.548791 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:24.038654 1 service.go:382] Removing service port \"services-4413/affinity-clusterip-transition:\"\nI0111 19:53:24.106377 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:24.150241 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 19:53:24.156714 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 131 failed\n)\nI0111 19:53:24.156797 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 19:53:28.916826 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:29.155107 1 service.go:382] Removing service port \"webhook-9767/e2e-test-webhook:\"\nI0111 19:53:29.196190 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:45.245365 1 service.go:357] Adding new service port \"services-3432/externalname-service:http\" at 100.109.111.210:80/TCP\nI0111 19:53:45.286611 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:45.287026 1 proxier.go:1519] Opened local port \"nodePort for services-3432/externalname-service:http\" (:31921/tcp)\nI0111 19:53:45.319026 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:46.311535 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:46.964713 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:57.382566 1 service.go:382] Removing service port \"services-3432/externalname-service:http\"\nI0111 19:53:57.409813 1 proxier.go:793] Not using 
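
The "Failed to execute iptables-restore: exit status 1 (iptables-restore: line 131 failed)" entry above, followed by "Closing local ports after iptables-restore failure", shows the proxier applying its whole ruleset as a single iptables-restore transaction: if any input line is rejected, the transaction is refused and the node ports reserved for that sync are released again. Below is a minimal, self-contained sketch of that pattern; it is illustrative only, not kube-proxy's code, and it needs root plus the iptables-restore binary.

```go
// Illustrative sketch: push a NAT ruleset through one iptables-restore call,
// surfacing the same kind of "line N failed" error seen in the log above.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// applyRules feeds a ruleset to iptables-restore on stdin. --noflush keeps
// chains that are not mentioned in the input untouched.
func applyRules(rules []byte) error {
	cmd := exec.Command("iptables-restore", "--noflush")
	cmd.Stdin = bytes.NewReader(rules)
	if out, err := cmd.CombinedOutput(); err != nil {
		// iptables-restore names the first offending input line, which is
		// what shows up as "line 131 failed" in the log.
		return fmt.Errorf("iptables-restore failed: %v: %s", err, bytes.TrimSpace(out))
	}
	return nil
}

func main() {
	// A tiny nat-table transaction: declare a custom chain (name made up)
	// and append one rule to it, then commit.
	ruleset := []byte("*nat\n:EXAMPLE-CHAIN - [0:0]\n-A EXAMPLE-CHAIN -j RETURN\nCOMMIT\n")
	if err := applyRules(ruleset); err != nil {
		fmt.Println(err)
	}
}
```

A failed restore leaves the previously installed rules in place, so the proxier simply retries on its next sync.
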
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:57.439731 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:58.263472 1 service.go:357] Adding new service port \"nettest-5543/node-port-service:udp\" at 100.106.24.223:90/UDP\nI0111 19:53:58.263699 1 service.go:357] Adding new service port \"nettest-5543/node-port-service:http\" at 100.106.24.223:80/TCP\nI0111 19:53:58.290525 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:58.290812 1 proxier.go:1519] Opened local port \"nodePort for nettest-5543/node-port-service:udp\" (:30133/udp)\nI0111 19:53:58.295444 1 proxier.go:1519] Opened local port \"nodePort for nettest-5543/node-port-service:http\" (:31082/tcp)\nI0111 19:53:58.299947 1 proxier.go:700] Stale udp service nettest-5543/node-port-service:udp -> 100.106.24.223\nI0111 19:53:58.325591 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:58.539520 1 service.go:357] Adding new service port \"nettest-5543/session-affinity-service:udp\" at 100.105.121.222:90/UDP\nI0111 19:53:58.539548 1 service.go:357] Adding new service port \"nettest-5543/session-affinity-service:http\" at 100.105.121.222:80/TCP\nI0111 19:53:58.566906 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:58.567101 1 proxier.go:1519] Opened local port \"nodePort for nettest-5543/session-affinity-service:udp\" (:32119/udp)\nI0111 19:53:58.572928 1 proxier.go:1519] Opened local port \"nodePort for nettest-5543/session-affinity-service:http\" (:32285/tcp)\nI0111 19:53:58.580056 1 proxier.go:700] Stale udp service nettest-5543/session-affinity-service:udp -> 100.105.121.222\nI0111 19:53:58.614860 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:05.752715 1 service.go:382] Removing service port \"nettest-5543/node-port-service:http\"\nI0111 19:54:05.752743 1 service.go:382] Removing service port \"nettest-5543/node-port-service:udp\"\nI0111 19:54:05.784289 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:05.820913 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:15.820673 1 service.go:357] Adding new service port \"webhook-6368/e2e-test-webhook:\" at 100.104.42.180:8443/TCP\nI0111 19:54:15.848080 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:15.879004 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:23.443332 1 service.go:382] Removing service port \"webhook-6368/e2e-test-webhook:\"\nI0111 19:54:23.489546 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:23.535962 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because 
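
The "Opened local port \"nodePort for ...\"" entries in this stretch (for example :31082/tcp and :30133/udp for nettest-5543) reflect the proxier holding a plain socket on every NodePort so no other process can bind it while iptables actually redirects the traffic; the "Stale udp service" lines mark UDP services whose old conntrack entries have to be cleared once the endpoints change. The following is a small sketch of the port-holding side only, with the port numbers taken from the log and everything else invented.

```go
// Minimal sketch of the "Opened local port" bookkeeping: hold an ordinary
// TCP listener / UDP socket on a NodePort purely to reserve it. Traffic is
// delivered by the iptables rules, not by these sockets. Not kube-proxy's code.
package main

import (
	"fmt"
	"io"
	"net"
)

// reserveNodePort opens (and keeps open) a socket on the given port so the
// kernel will not hand the port to another process.
func reserveNodePort(proto string, port int) (io.Closer, error) {
	addr := fmt.Sprintf(":%d", port)
	switch proto {
	case "tcp":
		return net.Listen("tcp", addr)
	case "udp":
		return net.ListenPacket("udp", addr)
	default:
		return nil, fmt.Errorf("unsupported protocol %q", proto)
	}
}

func main() {
	// 31082/tcp and 30133/udp are the NodePorts from the log entries above.
	ports := []struct {
		proto string
		port  int
	}{{"tcp", 31082}, {"udp", 30133}}
	for _, p := range ports {
		c, err := reserveNodePort(p.proto, p.port)
		if err != nil {
			fmt.Printf("could not reserve %d/%s: %v\n", p.port, p.proto, err)
			continue
		}
		defer c.Close()
		fmt.Printf("holding %d/%s until the rules are torn down\n", p.port, p.proto)
	}
}
```
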
the local version of iptables does not support it\nI0111 19:54:53.574151 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:55:23.606591 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:55:53.638567 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:05.013536 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:05.346704 1 service.go:382] Removing service port \"nettest-5543/session-affinity-service:http\"\nI0111 19:56:05.346727 1 service.go:382] Removing service port \"nettest-5543/session-affinity-service:udp\"\nI0111 19:56:05.386482 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:23.878167 1 service.go:357] Adding new service port \"webhook-7214/e2e-test-webhook:\" at 100.105.108.252:8443/TCP\nI0111 19:56:23.905709 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:23.935669 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:31.440158 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:31.755358 1 service.go:382] Removing service port \"webhook-7214/e2e-test-webhook:\"\nI0111 19:56:31.782393 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:01.813607 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:31.843331 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.534551 1 service.go:357] Adding new service port \"provisioning-2263/csi-hostpath-attacher:dummy\" at 100.109.147.107:12345/TCP\nI0111 19:57:35.560496 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.589215 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.807093 1 service.go:357] Adding new service port \"provisioning-2263/csi-hostpathplugin:dummy\" at 100.108.205.78:12345/TCP\nI0111 19:57:35.835334 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.864499 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.989935 1 service.go:357] Adding new service port \"provisioning-2263/csi-hostpath-provisioner:dummy\" at 100.105.56.161:12345/TCP\nI0111 19:57:36.037964 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.171995 1 
service.go:357] Adding new service port \"provisioning-2263/csi-hostpath-resizer:dummy\" at 100.108.81.63:12345/TCP\nI0111 19:57:36.227490 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.358007 1 service.go:357] Adding new service port \"provisioning-2263/csi-snapshotter:dummy\" at 100.104.107.87:12345/TCP\nI0111 19:57:36.395889 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.502911 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.565160 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.694029 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:38.098294 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:39.130690 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:39.172410 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:45.361898 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:45.393884 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:01.895798 1 service.go:382] Removing service port \"provisioning-2263/csi-hostpath-attacher:dummy\"\nI0111 19:58:01.925410 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:01.958214 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.172184 1 service.go:382] Removing service port \"provisioning-2263/csi-hostpathplugin:dummy\"\nI0111 19:58:02.202147 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.234671 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.357350 1 service.go:382] Removing service port \"provisioning-2263/csi-hostpath-provisioner:dummy\"\nI0111 19:58:02.386235 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.419059 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.542030 1 service.go:382] Removing service port \"provisioning-2263/csi-hostpath-resizer:dummy\"\nI0111 19:58:02.587470 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.644094 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 19:58:02.726402 1 service.go:382] Removing service port \"provisioning-2263/csi-snapshotter:dummy\"\nI0111 19:58:02.770339 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.821926 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:32.856573 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:59:02.898468 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:59:32.932645 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:01.599843 1 service.go:357] Adding new service port \"dns-8433/test-service-2:http\" at 100.110.223.87:80/TCP\nI0111 20:00:01.629601 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:01.667191 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:03.374139 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:33.410434 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:39.078303 1 service.go:357] Adding new service port \"webhook-1074/e2e-test-webhook:\" at 100.105.33.138:8443/TCP\nI0111 20:00:39.105416 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:39.135315 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:39.676348 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:39.713269 1 service.go:382] Removing service port \"dns-8433/test-service-2:http\"\nI0111 20:00:39.749848 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 20:00:39.756132 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 131 failed\n)\nI0111 20:00:39.756293 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 20:00:39.792786 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:46.963600 1 service.go:382] Removing service port \"webhook-1074/e2e-test-webhook:\"\nI0111 20:00:47.003932 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:47.033728 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:17.066498 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 20:01:40.522015 1 service.go:357] Adding new service port \"services-3570/affinity-nodeport-transition:\" at 100.106.99.75:80/TCP\nI0111 20:01:40.551171 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:40.551331 1 proxier.go:1519] Opened local port \"nodePort for services-3570/affinity-nodeport-transition:\" (:31636/tcp)\nI0111 20:01:40.590091 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:42.463367 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:42.495745 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:52.703911 1 service.go:359] Updating existing service port \"services-3570/affinity-nodeport-transition:\" at 100.106.99.75:80/TCP\nI0111 20:01:52.729452 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:57.528595 1 service.go:359] Updating existing service port \"services-3570/affinity-nodeport-transition:\" at 100.106.99.75:80/TCP\nI0111 20:01:57.566124 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:00.169281 1 service.go:357] Adding new service port \"pods-6711/fooservice:\" at 100.110.244.63:8765/TCP\nI0111 20:02:00.208003 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:00.238099 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:08.973276 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:09.739400 1 service.go:382] Removing service port \"pods-6711/fooservice:\"\nI0111 20:02:09.768411 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:25.002057 1 service.go:357] Adding new service port \"kubectl-5845/rm2:\" at 100.106.94.54:1234/TCP\nI0111 20:02:25.030011 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:25.064005 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:27.836868 1 service.go:357] Adding new service port \"kubectl-5845/rm3:\" at 100.107.40.0:2345/TCP\nI0111 20:02:27.865136 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:27.897323 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:28.576424 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:28.622129 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule 
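
Almost every sync in this dump also prints "Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it": before writing its SNAT/MASQUERADE rule the proxier probes the locally installed iptables binary and only appends --random-fully when it is new enough (assumed here to be 1.6.2 or later). The sketch below is a standalone illustration of that capability gate, not the real kube-proxy check.

```go
// Sketch of gating the --random-fully flag on the local iptables version.
package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"strconv"
	"strings"
)

// minRandomFully is the assumed first iptables release that understands
// --random-fully for MASQUERADE/SNAT.
var minRandomFully = [3]int{1, 6, 2}

// iptablesVersion parses `iptables --version` output such as "iptables v1.6.1".
func iptablesVersion() ([3]int, error) {
	out, err := exec.Command("iptables", "--version").Output()
	if err != nil {
		return [3]int{}, err
	}
	m := regexp.MustCompile(`v(\d+)\.(\d+)\.(\d+)`).FindStringSubmatch(string(out))
	if m == nil {
		return [3]int{}, fmt.Errorf("unrecognized version output %q", out)
	}
	var v [3]int
	for i := range v {
		v[i], _ = strconv.Atoi(m[i+1])
	}
	return v, nil
}

func atLeast(v, min [3]int) bool {
	for i := range v {
		if v[i] != min[i] {
			return v[i] > min[i]
		}
	}
	return true
}

func main() {
	v, err := iptablesVersion()
	if err != nil {
		fmt.Println("cannot detect iptables version:", err)
		return
	}
	// KUBE-POSTROUTING is the chain kube-proxy uses for its masquerade rule;
	// the rest of the rule is simplified for illustration.
	rule := []string{"-t", "nat", "-A", "KUBE-POSTROUTING", "-j", "MASQUERADE"}
	if atLeast(v, minRandomFully) {
		rule = append(rule, "--random-fully")
	} else {
		fmt.Println("Not using --random-fully; local iptables is too old")
	}
	fmt.Println("would run: iptables", strings.Join(rule, " "))
}
```

The flag randomizes SNAT source-port selection fully, which is why the proxier prefers it whenever the local iptables can express it.
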
for iptables because the local version of iptables does not support it\nI0111 20:02:35.335483 1 service.go:382] Removing service port \"kubectl-5845/rm2:\"\nI0111 20:02:35.363404 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:35.367797 1 service.go:382] Removing service port \"kubectl-5845/rm3:\"\nI0111 20:02:35.393928 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.132223 1 service.go:357] Adding new service port \"provisioning-5877/csi-hostpath-attacher:dummy\" at 100.106.177.230:12345/TCP\nI0111 20:02:39.159991 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.190434 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.406260 1 service.go:357] Adding new service port \"provisioning-5877/csi-hostpathplugin:dummy\" at 100.104.110.160:12345/TCP\nI0111 20:02:39.433648 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.476386 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.595954 1 service.go:357] Adding new service port \"provisioning-5877/csi-hostpath-provisioner:dummy\" at 100.107.109.180:12345/TCP\nI0111 20:02:39.635578 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.672855 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.779448 1 service.go:357] Adding new service port \"provisioning-5877/csi-hostpath-resizer:dummy\" at 100.106.98.2:12345/TCP\nI0111 20:02:39.819305 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.872387 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.963365 1 service.go:357] Adding new service port \"provisioning-5877/csi-snapshotter:dummy\" at 100.108.180.144:12345/TCP\nI0111 20:02:40.012807 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:40.084601 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:41.269393 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:41.335037 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:42.303462 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:42.334115 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
20:02:43.935281 1 service.go:382] Removing service port \"services-3570/affinity-nodeport-transition:\"\nI0111 20:02:43.963152 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:44.007928 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.343778 1 service.go:357] Adding new service port \"volume-expand-1929/csi-hostpath-attacher:dummy\" at 100.108.247.151:12345/TCP\nI0111 20:02:56.371917 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.409264 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.618154 1 service.go:357] Adding new service port \"volume-expand-1929/csi-hostpathplugin:dummy\" at 100.109.220.123:12345/TCP\nI0111 20:02:56.646150 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.678719 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.802264 1 service.go:357] Adding new service port \"volume-expand-1929/csi-hostpath-provisioner:dummy\" at 100.108.230.252:12345/TCP\nI0111 20:02:56.843329 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.893224 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.985500 1 service.go:357] Adding new service port \"volume-expand-1929/csi-hostpath-resizer:dummy\" at 100.107.85.243:12345/TCP\nI0111 20:02:57.022170 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:57.066723 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:57.169096 1 service.go:357] Adding new service port \"volume-expand-1929/csi-snapshotter:dummy\" at 100.111.105.38:12345/TCP\nI0111 20:02:57.204169 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:57.290287 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:57.893233 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:58.881486 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:58.915514 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:58.997846 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.515446 1 service.go:382] Removing service port \"provisioning-5877/csi-hostpath-attacher:dummy\"\nI0111 20:03:01.552352 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.586485 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.794071 1 service.go:382] Removing service port \"provisioning-5877/csi-hostpathplugin:dummy\"\nI0111 20:03:01.836106 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.878471 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.981022 1 service.go:382] Removing service port \"provisioning-5877/csi-hostpath-provisioner:dummy\"\nI0111 20:03:02.121060 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:02.178525 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:02.187025 1 service.go:382] Removing service port \"provisioning-5877/csi-hostpath-resizer:dummy\"\nI0111 20:03:02.249405 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:02.355331 1 service.go:382] Removing service port \"provisioning-5877/csi-snapshotter:dummy\"\nI0111 20:03:02.402045 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:02.458995 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:04.637215 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:19.311780 1 service.go:357] Adding new service port \"webhook-4016/e2e-test-webhook:\" at 100.109.213.198:8443/TCP\nI0111 20:03:19.344021 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:19.377200 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:40.665961 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:40.679024 1 service.go:382] Removing service port \"webhook-4016/e2e-test-webhook:\"\nI0111 20:03:40.705526 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:05.030192 1 service.go:357] Adding new service port \"services-2057/sourceip-test:\" at 100.106.18.136:8080/TCP\nI0111 20:04:05.058278 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:05.091722 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:06.436270 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:12.880750 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:12.928196 1 service.go:382] Removing service port \"services-2057/sourceip-test:\"\nI0111 20:04:12.964765 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:13.017679 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:40.603840 1 service.go:357] Adding new service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:04:40.641001 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:40.641434 1 proxier.go:1519] Opened local port \"nodePort for services-6943/lb-finalizer:\" (:32029/tcp)\nI0111 20:04:40.865709 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:42.067668 1 service.go:359] Updating existing service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:04:42.112917 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:43.323107 1 service.go:359] Updating existing service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:04:43.351541 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:43.990944 1 service.go:359] Updating existing service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:04:44.033564 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.515786 1 service.go:382] Removing service port \"volume-expand-1929/csi-hostpath-attacher:dummy\"\nI0111 20:04:47.544659 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.584136 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.793527 1 service.go:382] Removing service port \"volume-expand-1929/csi-hostpathplugin:dummy\"\nI0111 20:04:47.829211 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.878269 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.981769 1 service.go:382] Removing service port \"volume-expand-1929/csi-hostpath-provisioner:dummy\"\nI0111 20:04:48.035895 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:48.106781 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:48.170761 1 service.go:382] Removing service port \"volume-expand-1929/csi-hostpath-resizer:dummy\"\nI0111 20:04:48.214976 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 20:04:48.262990 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:48.359621 1 service.go:382] Removing service port \"volume-expand-1929/csi-snapshotter:dummy\"\nI0111 20:04:48.412108 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:48.486860 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:13.869705 1 service.go:359] Updating existing service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:05:13.898759 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:13.899047 1 proxier.go:1519] Opened local port \"nodePort for services-6943/lb-finalizer:\" (:32125/tcp)\nI0111 20:05:14.168676 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:15.310673 1 service.go:359] Updating existing service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:05:15.336007 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:15.364518 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:15.698665 1 service.go:382] Removing service port \"services-6943/lb-finalizer:\"\nI0111 20:05:15.724085 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:15.752294 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:45.783168 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.040279 1 service.go:357] Adding new service port \"provisioning-5738/csi-hostpath-attacher:dummy\" at 100.104.206.73:12345/TCP\nI0111 20:06:12.070715 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.104242 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.314912 1 service.go:357] Adding new service port \"provisioning-5738/csi-hostpathplugin:dummy\" at 100.105.45.215:12345/TCP\nI0111 20:06:12.341493 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.372038 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.497997 1 service.go:357] Adding new service port \"provisioning-5738/csi-hostpath-provisioner:dummy\" at 100.108.206.199:12345/TCP\nI0111 20:06:12.525002 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.563346 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE 
rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.681163 1 service.go:357] Adding new service port \"provisioning-5738/csi-hostpath-resizer:dummy\" at 100.110.205.209:12345/TCP\nI0111 20:06:12.708366 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.739083 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.864352 1 service.go:357] Adding new service port \"provisioning-5738/csi-snapshotter:dummy\" at 100.109.37.67:12345/TCP\nI0111 20:06:12.891519 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.925169 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:15.065172 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:15.102689 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:15.160888 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:15.198872 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.506671 1 service.go:357] Adding new service port \"provisioning-1947/csi-hostpath-attacher:dummy\" at 100.110.98.47:12345/TCP\nI0111 20:06:26.549340 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.594867 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.782000 1 service.go:357] Adding new service port \"provisioning-1947/csi-hostpathplugin:dummy\" at 100.105.32.126:12345/TCP\nI0111 20:06:26.818842 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.875714 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.968415 1 service.go:357] Adding new service port \"provisioning-1947/csi-hostpath-provisioner:dummy\" at 100.107.75.248:12345/TCP\nI0111 20:06:27.010893 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.060552 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.152507 1 service.go:357] Adding new service port \"provisioning-1947/csi-hostpath-resizer:dummy\" at 100.104.32.32:12345/TCP\nI0111 20:06:27.211585 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.266594 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.336371 1 
service.go:357] Adding new service port \"provisioning-1947/csi-snapshotter:dummy\" at 100.107.242.130:12345/TCP\nI0111 20:06:27.391565 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.447861 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:30.691758 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:30.796806 1 service.go:357] Adding new service port \"volume-expand-8205/csi-hostpath-attacher:dummy\" at 100.110.87.213:12345/TCP\nI0111 20:06:30.833850 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:30.901802 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:30.982553 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.072123 1 service.go:357] Adding new service port \"volume-expand-8205/csi-hostpathplugin:dummy\" at 100.109.55.191:12345/TCP\nI0111 20:06:31.130244 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.193596 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.257821 1 service.go:357] Adding new service port \"volume-expand-8205/csi-hostpath-provisioner:dummy\" at 100.104.240.231:12345/TCP\nI0111 20:06:31.309909 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.391889 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.442922 1 service.go:357] Adding new service port \"volume-expand-8205/csi-hostpath-resizer:dummy\" at 100.108.14.115:12345/TCP\nI0111 20:06:31.495534 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.593193 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.626239 1 service.go:357] Adding new service port \"volume-expand-8205/csi-snapshotter:dummy\" at 100.105.147.217:12345/TCP\nI0111 20:06:31.675688 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.729808 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:33.973063 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:34.071332 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:34.286333 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 20:06:34.340863 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:34.392165 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.478310 1 service.go:382] Removing service port \"provisioning-5738/csi-hostpath-attacher:dummy\"\nI0111 20:06:44.508287 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.544918 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.757482 1 service.go:382] Removing service port \"provisioning-5738/csi-hostpathplugin:dummy\"\nI0111 20:06:44.797272 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.831376 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.943795 1 service.go:382] Removing service port \"provisioning-5738/csi-hostpath-provisioner:dummy\"\nI0111 20:06:44.973910 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.007833 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.130970 1 service.go:382] Removing service port \"provisioning-5738/csi-hostpath-resizer:dummy\"\nI0111 20:06:45.163661 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.217459 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.317841 1 service.go:382] Removing service port \"provisioning-5738/csi-snapshotter:dummy\"\nI0111 20:06:45.349757 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.384494 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:48.667083 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.302699 1 service.go:382] Removing service port \"provisioning-1947/csi-hostpath-attacher:dummy\"\nI0111 20:07:01.342411 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.376360 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.582089 1 service.go:382] Removing service port \"provisioning-1947/csi-hostpathplugin:dummy\"\nI0111 20:07:01.622864 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.666140 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not 
support it\nI0111 20:07:01.768849 1 service.go:382] Removing service port \"provisioning-1947/csi-hostpath-provisioner:dummy\"\nI0111 20:07:01.827541 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.902020 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.957340 1 service.go:382] Removing service port \"provisioning-1947/csi-hostpath-resizer:dummy\"\nI0111 20:07:02.022594 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:02.073258 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:02.145196 1 service.go:382] Removing service port \"provisioning-1947/csi-snapshotter:dummy\"\nI0111 20:07:02.186647 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:02.243703 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:32.279145 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:36.880367 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:44.971482 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.240214 1 service.go:382] Removing service port \"volume-expand-8205/csi-hostpath-attacher:dummy\"\nI0111 20:07:47.268197 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.299215 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.522379 1 service.go:382] Removing service port \"volume-expand-8205/csi-hostpathplugin:dummy\"\nI0111 20:07:47.550001 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.581181 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.711540 1 service.go:382] Removing service port \"volume-expand-8205/csi-hostpath-provisioner:dummy\"\nI0111 20:07:47.739211 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.786460 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.900108 1 service.go:382] Removing service port \"volume-expand-8205/csi-hostpath-resizer:dummy\"\nI0111 20:07:47.927585 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.958161 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
it\nI0111 20:07:48.087688 1 service.go:382] Removing service port \"volume-expand-8205/csi-snapshotter:dummy\"\nI0111 20:07:48.115604 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:48.153127 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.225557 1 service.go:357] Adding new service port \"provisioning-8445/csi-hostpath-attacher:dummy\" at 100.110.236.175:12345/TCP\nI0111 20:08:06.252997 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.282935 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.499696 1 service.go:357] Adding new service port \"provisioning-8445/csi-hostpathplugin:dummy\" at 100.104.179.192:12345/TCP\nI0111 20:08:06.527192 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.558517 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.683877 1 service.go:357] Adding new service port \"provisioning-8445/csi-hostpath-provisioner:dummy\" at 100.110.104.0:12345/TCP\nI0111 20:08:06.720087 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.754332 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.870529 1 service.go:357] Adding new service port \"provisioning-8445/csi-hostpath-resizer:dummy\" at 100.107.38.225:12345/TCP\nI0111 20:08:06.897846 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.961517 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:07.053598 1 service.go:357] Adding new service port \"provisioning-8445/csi-snapshotter:dummy\" at 100.105.69.48:12345/TCP\nI0111 20:08:07.082130 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:07.162514 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:08.092550 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:09.562949 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:09.594193 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:09.782389 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.391182 1 service.go:357] Adding new service port \"volume-1340/csi-hostpath-attacher:dummy\" at 100.107.93.182:12345/TCP\nI0111 
20:08:17.436460 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.494546 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.665138 1 service.go:357] Adding new service port \"volume-1340/csi-hostpathplugin:dummy\" at 100.108.84.118:12345/TCP\nI0111 20:08:17.692660 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.722516 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.848925 1 service.go:357] Adding new service port \"volume-1340/csi-hostpath-provisioner:dummy\" at 100.110.186.113:12345/TCP\nI0111 20:08:17.889118 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:18.032730 1 service.go:357] Adding new service port \"volume-1340/csi-hostpath-resizer:dummy\" at 100.111.255.119:12345/TCP\nI0111 20:08:18.074795 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:18.139724 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:18.216162 1 service.go:357] Adding new service port \"volume-1340/csi-snapshotter:dummy\" at 100.107.22.67:12345/TCP\nI0111 20:08:18.259546 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:18.404810 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:19.369951 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:20.721136 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:20.774701 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:20.823168 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.214670 1 service.go:382] Removing service port \"provisioning-8445/csi-hostpath-attacher:dummy\"\nI0111 20:08:23.244206 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.276995 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.494187 1 service.go:382] Removing service port \"provisioning-8445/csi-hostpathplugin:dummy\"\nI0111 20:08:23.525241 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.566089 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.683781 1 service.go:382] 
Removing service port \"provisioning-8445/csi-hostpath-provisioner:dummy\"\nI0111 20:08:23.712714 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.749493 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.871131 1 service.go:382] Removing service port \"provisioning-8445/csi-hostpath-resizer:dummy\"\nI0111 20:08:23.905809 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.949904 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:24.058041 1 service.go:382] Removing service port \"provisioning-8445/csi-snapshotter:dummy\"\nI0111 20:08:24.098417 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:24.148461 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:28.590269 1 service.go:357] Adding new service port \"webhook-9223/e2e-test-webhook:\" at 100.107.34.101:8443/TCP\nI0111 20:08:28.619845 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:28.652682 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:37.351724 1 service.go:382] Removing service port \"webhook-9223/e2e-test-webhook:\"\nI0111 20:08:37.382423 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:37.423130 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:07.456874 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:15.501869 1 service.go:357] Adding new service port \"webhook-1039/e2e-test-webhook:\" at 100.104.233.181:8443/TCP\nI0111 20:09:15.550680 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:15.599284 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:18.771580 1 service.go:357] Adding new service port \"network-7217/boom-server:\" at 100.104.44.53:9000/TCP\nI0111 20:09:18.800238 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:18.832663 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:23.336715 1 service.go:382] Removing service port \"webhook-1039/e2e-test-webhook:\"\nI0111 20:09:23.376681 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:23.425232 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 20:09:26.955283 1 service.go:382] Removing service port \"volume-1340/csi-hostpath-attacher:dummy\"\nI0111 20:09:26.985087 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.018385 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.234959 1 service.go:382] Removing service port \"volume-1340/csi-hostpathplugin:dummy\"\nI0111 20:09:27.273397 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.320342 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.424663 1 service.go:382] Removing service port \"volume-1340/csi-hostpath-provisioner:dummy\"\nI0111 20:09:27.466240 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.497451 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.613012 1 service.go:382] Removing service port \"volume-1340/csi-hostpath-resizer:dummy\"\nI0111 20:09:27.662056 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.712181 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.802955 1 service.go:382] Removing service port \"volume-1340/csi-snapshotter:dummy\"\nI0111 20:09:27.848959 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.906343 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:34.709719 1 service.go:357] Adding new service port \"services-8930/nodeport-test:http\" at 100.111.48.61:80/TCP\nI0111 20:09:34.738778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:34.739057 1 proxier.go:1519] Opened local port \"nodePort for services-8930/nodeport-test:http\" (:31523/tcp)\nI0111 20:09:34.771119 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:35.884047 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:36.890110 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:47.812124 1 service.go:357] Adding new service port \"webhook-9977/e2e-test-webhook:\" at 100.107.155.214:8443/TCP\nI0111 20:09:47.841284 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:47.873761 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not 
support it\nI0111 20:09:51.840851 1 service.go:382] Removing service port \"services-8930/nodeport-test:http\"\nI0111 20:09:51.873092 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:51.902694 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:55.687356 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:56.048270 1 service.go:382] Removing service port \"webhook-9977/e2e-test-webhook:\"\nI0111 20:09:56.084533 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:56.124584 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:56.972197 1 service.go:357] Adding new service port \"services-4609/nodeports:udp-port\" at 100.105.31.240:53/UDP\nI0111 20:09:56.972224 1 service.go:357] Adding new service port \"services-4609/nodeports:tcp-port\" at 100.105.31.240:53/TCP\nI0111 20:09:57.009762 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:57.010044 1 proxier.go:1519] Opened local port \"nodePort for services-4609/nodeports:tcp-port\" (:31219/tcp)\nI0111 20:09:57.013110 1 proxier.go:1519] Opened local port \"nodePort for services-4609/nodeports:udp-port\" (:31219/udp)\nI0111 20:09:57.053099 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:57.063337 1 service.go:382] Removing service port \"services-4609/nodeports:udp-port\"\nI0111 20:09:57.063568 1 service.go:382] Removing service port \"services-4609/nodeports:tcp-port\"\nI0111 20:09:57.090891 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:57.125445 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:10:26.171397 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:10:26.268333 1 service.go:382] Removing service port \"network-7217/boom-server:\"\nI0111 20:10:26.297329 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:10:26.340822 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:10:56.386385 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:11:26.429463 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:11:56.460610 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.080603 1 service.go:357] Adding new service port \"volume-expand-1240/csi-hostpath-attacher:dummy\" at 
100.109.110.58:12345/TCP\nI0111 20:12:17.108104 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.139385 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.353233 1 service.go:357] Adding new service port \"volume-expand-1240/csi-hostpathplugin:dummy\" at 100.107.205.142:12345/TCP\nI0111 20:12:17.394419 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.442859 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.536595 1 service.go:357] Adding new service port \"volume-expand-1240/csi-hostpath-provisioner:dummy\" at 100.110.254.121:12345/TCP\nI0111 20:12:17.574158 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.627870 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.719981 1 service.go:357] Adding new service port \"volume-expand-1240/csi-hostpath-resizer:dummy\" at 100.108.23.63:12345/TCP\nI0111 20:12:17.779122 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.829128 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.903235 1 service.go:357] Adding new service port \"volume-expand-1240/csi-snapshotter:dummy\" at 100.107.242.127:12345/TCP\nI0111 20:12:17.942380 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:18.006962 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:20.119998 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:20.155085 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:20.187136 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:50.219347 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:58.427073 1 service.go:382] Removing service port \"volume-expand-1240/csi-hostpath-attacher:dummy\"\nI0111 20:12:58.454667 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:58.486840 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:58.704166 1 service.go:382] Removing service port \"volume-expand-1240/csi-hostpathplugin:dummy\"\nI0111 20:12:58.739767 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 20:12:58.855952 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:58.896406 1 service.go:382] Removing service port \"volume-expand-1240/csi-hostpath-provisioner:dummy\"\nI0111 20:12:58.950948 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:59.000830 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:59.081397 1 service.go:382] Removing service port \"volume-expand-1240/csi-hostpath-resizer:dummy\"\nI0111 20:12:59.153106 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:59.209970 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:59.267082 1 service.go:382] Removing service port \"volume-expand-1240/csi-snapshotter:dummy\"\nI0111 20:12:59.305917 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:59.358837 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:18.204995 1 service.go:357] Adding new service port \"services-4281/endpoint-test2:\" at 100.107.132.97:80/TCP\nI0111 20:13:18.237914 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:18.282441 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:20.407510 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:22.431450 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:23.973221 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:24.244469 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:24.373991 1 service.go:382] Removing service port \"services-4281/endpoint-test2:\"\nI0111 20:13:24.419769 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:24.467599 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.264193 1 service.go:357] Adding new service port \"provisioning-5271/csi-hostpath-attacher:dummy\" at 100.109.111.235:12345/TCP\nI0111 20:13:34.308913 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.381795 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.538571 1 service.go:357] Adding new service port 
\"provisioning-5271/csi-hostpathplugin:dummy\" at 100.108.41.132:12345/TCP\nI0111 20:13:34.589876 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.663915 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.726083 1 service.go:357] Adding new service port \"provisioning-5271/csi-hostpath-provisioner:dummy\" at 100.105.231.186:12345/TCP\nI0111 20:13:34.779982 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.861879 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.913678 1 service.go:357] Adding new service port \"provisioning-5271/csi-hostpath-resizer:dummy\" at 100.104.109.64:12345/TCP\nI0111 20:13:34.962231 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:35.047300 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:35.097524 1 service.go:357] Adding new service port \"provisioning-5271/csi-snapshotter:dummy\" at 100.109.28.157:12345/TCP\nI0111 20:13:35.148750 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:35.474768 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:36.097282 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:36.971435 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:37.006024 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:38.019446 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:04.689884 1 service.go:382] Removing service port \"provisioning-5271/csi-hostpath-attacher:dummy\"\nI0111 20:14:04.717667 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:04.755097 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:04.972167 1 service.go:382] Removing service port \"provisioning-5271/csi-hostpathplugin:dummy\"\nI0111 20:14:04.999919 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.030669 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.162400 1 service.go:382] Removing service port \"provisioning-5271/csi-hostpath-provisioner:dummy\"\nI0111 20:14:05.200699 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 20:14:05.247785 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.350991 1 service.go:382] Removing service port \"provisioning-5271/csi-hostpath-resizer:dummy\"\nI0111 20:14:05.379604 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.410883 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.538754 1 service.go:382] Removing service port \"provisioning-5271/csi-snapshotter:dummy\"\nI0111 20:14:05.567194 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.598278 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:16.760264 1 service.go:357] Adding new service port \"webhook-9848/e2e-test-webhook:\" at 100.110.42.241:8443/TCP\nI0111 20:14:16.789190 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:16.821636 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:23.497452 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:23.736123 1 service.go:382] Removing service port \"webhook-9848/e2e-test-webhook:\"\nI0111 20:14:23.762032 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:23.790842 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:45.729096 1 service.go:357] Adding new service port \"webhook-1622/e2e-test-webhook:\" at 100.104.101.200:8443/TCP\nI0111 20:14:45.756311 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:45.802257 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:52.551025 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:52.558318 1 service.go:382] Removing service port \"webhook-1622/e2e-test-webhook:\"\nI0111 20:14:52.607945 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:52.648847 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:11.388015 1 service.go:357] Adding new service port \"webhook-1400/e2e-test-webhook:\" at 100.111.254.59:8443/TCP\nI0111 20:15:11.414315 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:11.445011 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 20:15:13.487610 1 service.go:357] Adding new service port \"provisioning-1550/csi-hostpath-attacher:dummy\" at 100.106.144.153:12345/TCP\nI0111 20:15:13.515289 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:13.560047 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:13.762839 1 service.go:357] Adding new service port \"provisioning-1550/csi-hostpathplugin:dummy\" at 100.111.192.91:12345/TCP\nI0111 20:15:13.791403 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:13.858220 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:13.946706 1 service.go:357] Adding new service port \"provisioning-1550/csi-hostpath-provisioner:dummy\" at 100.106.130.64:12345/TCP\nI0111 20:15:13.973811 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:14.020950 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:14.130102 1 service.go:357] Adding new service port \"provisioning-1550/csi-hostpath-resizer:dummy\" at 100.109.67.138:12345/TCP\nI0111 20:15:14.157859 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:14.188354 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:14.313166 1 service.go:357] Adding new service port \"provisioning-1550/csi-snapshotter:dummy\" at 100.110.64.214:12345/TCP\nI0111 20:15:14.346847 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:14.378237 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:16.114848 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:16.162131 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:17.062783 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:17.095392 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:18.984604 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:19.045729 1 service.go:382] Removing service port \"webhook-1400/e2e-test-webhook:\"\nI0111 20:15:19.083931 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:19.138073 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the 
local version of iptables does not support it\nI0111 20:15:36.070898 1 service.go:357] Adding new service port \"volumemode-2239/csi-hostpath-attacher:dummy\" at 100.105.190.109:12345/TCP\nI0111 20:15:36.112952 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.152266 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.342788 1 service.go:357] Adding new service port \"volumemode-2239/csi-hostpathplugin:dummy\" at 100.107.177.223:12345/TCP\nI0111 20:15:36.379224 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.438454 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.526252 1 service.go:357] Adding new service port \"volumemode-2239/csi-hostpath-provisioner:dummy\" at 100.106.117.161:12345/TCP\nI0111 20:15:36.567962 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.616872 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.709495 1 service.go:357] Adding new service port \"volumemode-2239/csi-hostpath-resizer:dummy\" at 100.105.246.16:12345/TCP\nI0111 20:15:36.759037 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.801730 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.892860 1 service.go:357] Adding new service port \"volumemode-2239/csi-snapshotter:dummy\" at 100.106.238.61:12345/TCP\nI0111 20:15:36.935837 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.981820 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:38.163049 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:39.196653 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:39.277888 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:39.330468 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.179347 1 service.go:382] Removing service port \"provisioning-1550/csi-hostpath-attacher:dummy\"\nI0111 20:15:58.211551 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.244255 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.456894 1 service.go:382] Removing service port 
\"provisioning-1550/csi-hostpathplugin:dummy\"\nI0111 20:15:58.486111 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.518719 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.642670 1 service.go:382] Removing service port \"provisioning-1550/csi-hostpath-provisioner:dummy\"\nI0111 20:15:58.672705 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.705451 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.828114 1 service.go:382] Removing service port \"provisioning-1550/csi-hostpath-resizer:dummy\"\nI0111 20:15:58.867772 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.900674 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:59.013455 1 service.go:382] Removing service port \"provisioning-1550/csi-snapshotter:dummy\"\nI0111 20:15:59.043479 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:59.075826 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:02.830942 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:04.449990 1 service.go:357] Adding new service port \"webhook-4029/e2e-test-webhook:\" at 100.107.227.33:8443/TCP\nI0111 20:16:04.476661 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:04.561374 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:12.864317 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:13.038247 1 service.go:382] Removing service port \"webhook-4029/e2e-test-webhook:\"\nI0111 20:16:13.065598 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:21.509342 1 service.go:382] Removing service port \"volumemode-2239/csi-hostpath-attacher:dummy\"\nI0111 20:16:21.552862 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:21.606924 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:21.786094 1 service.go:382] Removing service port \"volumemode-2239/csi-hostpathplugin:dummy\"\nI0111 20:16:21.822563 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:21.866704 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 20:16:21.973531 1 service.go:382] Removing service port \"volumemode-2239/csi-hostpath-provisioner:dummy\"\nI0111 20:16:22.062910 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.097814 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.167196 1 service.go:382] Removing service port \"volumemode-2239/csi-hostpath-resizer:dummy\"\nI0111 20:16:22.195247 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.226127 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.352519 1 service.go:382] Removing service port \"volumemode-2239/csi-snapshotter:dummy\"\nI0111 20:16:22.383524 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.420546 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:52.452898 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:08.301786 1 service.go:357] Adding new service port \"services-9670/nodeport-reuse:\" at 100.106.55.71:80/TCP\nI0111 20:17:08.328897 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:08.329150 1 proxier.go:1519] Opened local port \"nodePort for services-9670/nodeport-reuse:\" (:30304/tcp)\nI0111 20:17:08.359953 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:08.393245 1 service.go:382] Removing service port \"services-9670/nodeport-reuse:\"\nI0111 20:17:08.420227 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:08.448610 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:12.161520 1 service.go:357] Adding new service port \"services-9670/nodeport-reuse:\" at 100.105.245.146:80/TCP\nI0111 20:17:12.187543 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:12.187806 1 proxier.go:1519] Opened local port \"nodePort for services-9670/nodeport-reuse:\" (:30304/tcp)\nI0111 20:17:12.221336 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:12.252963 1 service.go:382] Removing service port \"services-9670/nodeport-reuse:\"\nI0111 20:17:12.278422 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:12.306669 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:13.610371 1 service.go:357] Adding new service port 
\"ephemeral-1155/csi-hostpath-attacher:dummy\" at 100.108.187.111:12345/TCP\nI0111 20:17:13.636118 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:13.664462 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:13.884068 1 service.go:357] Adding new service port \"ephemeral-1155/csi-hostpathplugin:dummy\" at 100.108.1.251:12345/TCP\nI0111 20:17:13.915973 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:13.960535 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.066181 1 service.go:357] Adding new service port \"ephemeral-1155/csi-hostpath-provisioner:dummy\" at 100.106.54.117:12345/TCP\nI0111 20:17:14.091706 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.120024 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.247733 1 service.go:357] Adding new service port \"ephemeral-1155/csi-hostpath-resizer:dummy\" at 100.106.193.173:12345/TCP\nI0111 20:17:14.276085 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.319278 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.430547 1 service.go:357] Adding new service port \"ephemeral-1155/csi-snapshotter:dummy\" at 100.108.55.173:12345/TCP\nI0111 20:17:14.472194 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.521038 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:15.997371 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:17.026153 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:17.060311 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:17.091469 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:47.125458 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:56.691845 1 service.go:382] Removing service port \"ephemeral-1155/csi-hostpath-attacher:dummy\"\nI0111 20:17:56.730585 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:56.773234 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:56.973370 1 service.go:382] Removing service 
port \"ephemeral-1155/csi-hostpathplugin:dummy\"\nI0111 20:17:57.009403 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.041745 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.159833 1 service.go:382] Removing service port \"ephemeral-1155/csi-hostpath-provisioner:dummy\"\nI0111 20:17:57.188346 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.220574 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.345596 1 service.go:382] Removing service port \"ephemeral-1155/csi-hostpath-resizer:dummy\"\nI0111 20:17:57.374299 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.407453 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.531868 1 service.go:382] Removing service port \"ephemeral-1155/csi-snapshotter:dummy\"\nI0111 20:17:57.583563 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.632295 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:18:27.666758 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:18:57.705775 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:20.481137 1 service.go:357] Adding new service port \"ephemeral-3918/csi-hostpath-attacher:dummy\" at 100.104.122.168:12345/TCP\nI0111 20:19:20.508160 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:20.561541 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:20.756585 1 service.go:357] Adding new service port \"ephemeral-3918/csi-hostpathplugin:dummy\" at 100.104.61.109:12345/TCP\nI0111 20:19:20.784388 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:20.975334 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:20.982356 1 service.go:357] Adding new service port \"ephemeral-3918/csi-hostpath-provisioner:dummy\" at 100.107.120.175:12345/TCP\nI0111 20:19:21.008596 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:21.125420 1 service.go:357] Adding new service port \"ephemeral-3918/csi-hostpath-resizer:dummy\" at 100.109.209.103:12345/TCP\nI0111 20:19:21.165726 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:21.211857 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:21.308399 1 service.go:357] Adding new service port \"ephemeral-3918/csi-snapshotter:dummy\" at 100.111.111.111:12345/TCP\nI0111 20:19:21.349427 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:21.469178 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:22.893613 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:23.897408 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:24.057322 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:24.089508 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:54.121664 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:01.562826 1 service.go:382] Removing service port \"ephemeral-3918/csi-hostpath-attacher:dummy\"\nI0111 20:20:01.601599 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:01.641587 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:01.854847 1 service.go:382] Removing service port \"ephemeral-3918/csi-hostpathplugin:dummy\"\nI0111 20:20:01.899067 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:01.939403 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.043000 1 service.go:382] Removing service port \"ephemeral-3918/csi-hostpath-provisioner:dummy\"\nI0111 20:20:02.078120 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.110421 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.282011 1 service.go:382] Removing service port \"ephemeral-3918/csi-hostpath-resizer:dummy\"\nI0111 20:20:02.310981 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.341764 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.468226 1 service.go:382] Removing service port \"ephemeral-3918/csi-snapshotter:dummy\"\nI0111 20:20:02.500281 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.530874 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does 
not support it\nI0111 20:20:32.561996 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:02.593674 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:24.907120 1 service.go:357] Adding new service port \"kubectl-2777/redis-master:\" at 100.107.0.220:6379/TCP\nI0111 20:21:24.934217 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:24.964019 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:25.719016 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:35.080225 1 service.go:382] Removing service port \"kubectl-2777/redis-master:\"\nI0111 20:21:35.106092 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:35.133876 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:22:05.169395 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:22:35.210917 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:05.262146 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:13.445913 1 service.go:357] Adding new service port \"services-7215/tolerate-unready:http\" at 100.104.48.229:80/TCP\nI0111 20:23:13.486606 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:13.535108 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:14.276192 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:23.663859 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:27.223621 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:28.829170 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:29.132815 1 service.go:382] Removing service port \"services-7215/tolerate-unready:http\"\nI0111 20:23:29.171323 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:29.217326 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:59.254893 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
20:24:18.387617 1 service.go:357] Adding new service port \"dns-8876/test-service-2:http\" at 100.108.55.126:80/TCP\nI0111 20:24:18.415512 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:18.446651 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:19.880783 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:49.912154 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:54.787715 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:54.834506 1 service.go:382] Removing service port \"dns-8876/test-service-2:http\"\nI0111 20:24:54.861732 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:54.891387 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:25:24.929239 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:25:54.963144 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:24.994920 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.618471 1 service.go:357] Adding new service port \"services-6600/nodeport-collision-1:\" at 100.111.80.57:80/TCP\nI0111 20:26:25.647171 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.647402 1 proxier.go:1519] Opened local port \"nodePort for services-6600/nodeport-collision-1:\" (:30488/tcp)\nI0111 20:26:25.677571 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.805456 1 service.go:382] Removing service port \"services-6600/nodeport-collision-1:\"\nI0111 20:26:25.833577 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.864688 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.906667 1 service.go:357] Adding new service port \"services-6600/nodeport-collision-2:\" at 100.111.95.62:80/TCP\nI0111 20:26:25.943804 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.943984 1 proxier.go:1519] Opened local port \"nodePort for services-6600/nodeport-collision-2:\" (:30488/tcp)\nI0111 20:26:25.983938 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.997812 1 service.go:382] Removing service port \"services-6600/nodeport-collision-2:\"\nI0111 20:26:26.041488 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:26.088721 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:56.121698 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:27:26.154351 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:27:54.436837 1 service.go:357] Adding new service port \"webhook-7189/e2e-test-webhook:\" at 100.106.240.100:8443/TCP\nI0111 20:27:54.471007 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:27:54.501455 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:02.821958 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:02.828503 1 service.go:382] Removing service port \"webhook-7189/e2e-test-webhook:\"\nI0111 20:28:02.868697 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:32.903507 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:59.605047 1 service.go:357] Adding new service port \"services-1480/affinity-nodeport:\" at 100.108.78.161:80/TCP\nI0111 20:28:59.632553 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:59.633106 1 proxier.go:1519] Opened local port \"nodePort for services-1480/affinity-nodeport:\" (:31995/tcp)\nI0111 20:28:59.664018 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:01.169378 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:02.152042 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:02.181961 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:16.890340 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-s6rdv:\" at 100.110.50.165:80/TCP\nI0111 20:29:16.916452 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:16.945901 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:16.989527 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-q7krl:\" at 100.104.210.104:80/TCP\nI0111 20:29:17.021575 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.026108 1 service.go:357] Adding new service port 
\"svc-latency-7431/latency-svc-s2b7g:\" at 100.109.34.243:80/TCP\nI0111 20:29:17.026131 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-cqmxc:\" at 100.108.159.113:80/TCP\nI0111 20:29:17.050335 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.074301 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wblts:\" at 100.106.49.242:80/TCP\nI0111 20:29:17.103143 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.108447 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-mxv6b:\" at 100.105.37.245:80/TCP\nI0111 20:29:17.108467 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-7487x:\" at 100.109.158.98:80/TCP\nI0111 20:29:17.108479 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5ps2n:\" at 100.108.97.89:80/TCP\nI0111 20:29:17.108491 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-zlkgw:\" at 100.111.6.117:80/TCP\nI0111 20:29:17.108501 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-52zhd:\" at 100.111.77.195:80/TCP\nI0111 20:29:17.108512 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-ncb5r:\" at 100.109.10.194:80/TCP\nI0111 20:29:17.108522 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-2jj8w:\" at 100.108.132.249:80/TCP\nI0111 20:29:17.137115 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.142665 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-m2zvw:\" at 100.111.195.89:80/TCP\nI0111 20:29:17.142685 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bz8mw:\" at 100.110.69.72:80/TCP\nI0111 20:29:17.142697 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-mthv6:\" at 100.109.45.97:80/TCP\nI0111 20:29:17.142707 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-gl9mt:\" at 100.104.212.161:80/TCP\nI0111 20:29:17.142717 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8wwgc:\" at 100.111.85.170:80/TCP\nI0111 20:29:17.142729 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sz7fx:\" at 100.106.29.46:80/TCP\nI0111 20:29:17.142740 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fjjc4:\" at 100.110.151.105:80/TCP\nI0111 20:29:17.168908 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.175226 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-x7bxp:\" at 100.106.205.191:80/TCP\nI0111 20:29:17.204601 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.212611 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8dgcz:\" at 100.104.245.255:80/TCP\nI0111 20:29:17.212659 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-nqjfp:\" at 100.105.216.233:80/TCP\nI0111 20:29:17.212672 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9xzhq:\" at 100.105.45.12:80/TCP\nI0111 20:29:17.212683 1 service.go:357] Adding new service port 
\"svc-latency-7431/latency-svc-ts8zr:\" at 100.111.248.209:80/TCP\nI0111 20:29:17.212693 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xb97n:\" at 100.105.255.28:80/TCP\nI0111 20:29:17.212703 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-p5qn4:\" at 100.107.189.31:80/TCP\nI0111 20:29:17.212714 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c7c64:\" at 100.104.26.166:80/TCP\nI0111 20:29:17.212724 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6z7qf:\" at 100.107.183.82:80/TCP\nI0111 20:29:17.239730 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.246188 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wbrp9:\" at 100.106.100.240:80/TCP\nI0111 20:29:17.275267 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.282330 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-g5jvm:\" at 100.111.115.57:80/TCP\nI0111 20:29:17.282350 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bq2lc:\" at 100.106.106.97:80/TCP\nI0111 20:29:17.282362 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xdhk5:\" at 100.106.180.87:80/TCP\nI0111 20:29:17.282374 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-h9dnx:\" at 100.110.247.203:80/TCP\nI0111 20:29:17.282387 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wgjmq:\" at 100.107.16.213:80/TCP\nI0111 20:29:17.282403 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sqjqx:\" at 100.106.141.177:80/TCP\nI0111 20:29:17.282413 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wc6hr:\" at 100.109.222.15:80/TCP\nI0111 20:29:17.311223 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.318332 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jnplv:\" at 100.111.194.181:80/TCP\nI0111 20:29:17.318351 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c7khz:\" at 100.105.204.238:80/TCP\nI0111 20:29:17.318362 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-w859m:\" at 100.104.215.70:80/TCP\nI0111 20:29:17.318375 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-58ct8:\" at 100.105.90.93:80/TCP\nI0111 20:29:17.318388 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-992f9:\" at 100.109.10.48:80/TCP\nI0111 20:29:17.318401 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9kzqd:\" at 100.107.201.138:80/TCP\nI0111 20:29:17.318412 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-lw287:\" at 100.111.143.26:80/TCP\nI0111 20:29:17.346242 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.353285 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jkck5:\" at 100.107.219.32:80/TCP\nI0111 20:29:17.381916 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.389781 1 service.go:357] Adding new service port 
\"svc-latency-7431/latency-svc-fbg4b:\" at 100.108.9.77:80/TCP\nI0111 20:29:17.389800 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-kxh8b:\" at 100.104.132.231:80/TCP\nI0111 20:29:17.389812 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-cbkqt:\" at 100.109.228.244:80/TCP\nI0111 20:29:17.389824 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-94qtf:\" at 100.106.145.155:80/TCP\nI0111 20:29:17.389837 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-rs69c:\" at 100.105.1.118:80/TCP\nI0111 20:29:17.389851 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-7zmw2:\" at 100.105.132.238:80/TCP\nI0111 20:29:17.418223 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.425576 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-pg55q:\" at 100.111.133.104:80/TCP\nI0111 20:29:17.425596 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-p4257:\" at 100.105.13.247:80/TCP\nI0111 20:29:17.453978 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.461326 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9ljvn:\" at 100.110.183.93:80/TCP\nI0111 20:29:17.489542 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.497288 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-npx27:\" at 100.107.8.132:80/TCP\nI0111 20:29:17.525073 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.535074 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hk25w:\" at 100.111.70.131:80/TCP\nI0111 20:29:17.562398 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.604057 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.611990 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-pczsx:\" at 100.104.206.235:80/TCP\nI0111 20:29:17.640685 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.649136 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fc7z5:\" at 100.109.1.158:80/TCP\nI0111 20:29:17.678030 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.686809 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6dhzz:\" at 100.106.85.123:80/TCP\nI0111 20:29:17.718420 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.759368 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.773761 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vlpcx:\" at 100.110.56.135:80/TCP\nI0111 20:29:17.803109 1 proxier.go:793] Not 
using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.811491 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jbr4n:\" at 100.108.107.10:80/TCP\nI0111 20:29:17.840771 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.849489 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-r4d9z:\" at 100.108.124.213:80/TCP\nI0111 20:29:17.879271 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.891872 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vwvc6:\" at 100.108.170.188:80/TCP\nI0111 20:29:17.920606 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.958688 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.967164 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-4s2lt:\" at 100.110.234.154:80/TCP\nI0111 20:29:18.011991 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.023915 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-7dg5w:\" at 100.107.85.81:80/TCP\nI0111 20:29:18.054778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.063857 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fdplr:\" at 100.106.82.184:80/TCP\nI0111 20:29:18.094413 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.103106 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-m2r4k:\" at 100.105.34.36:80/TCP\nI0111 20:29:18.132219 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.141668 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fhrxk:\" at 100.104.91.118:80/TCP\nI0111 20:29:18.171565 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.210544 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.219217 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-kvtbx:\" at 100.110.180.178:80/TCP\nI0111 20:29:18.260007 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.268804 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6kl97:\" at 100.106.230.235:80/TCP\nI0111 20:29:18.298874 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.307756 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-kndd7:\" at 100.108.250.72:80/TCP\nI0111 20:29:18.337775 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.347061 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-k529k:\" at 100.109.188.200:80/TCP\nI0111 20:29:18.376956 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.386329 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6xztd:\" at 100.106.15.1:80/TCP\nI0111 20:29:18.416486 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.456131 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.465345 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hx4c9:\" at 100.108.18.30:80/TCP\nI0111 20:29:18.497446 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.506592 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xp7z4:\" at 100.106.239.169:80/TCP\nI0111 20:29:18.537781 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.547866 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-pvn95:\" at 100.111.238.203:80/TCP\nI0111 20:29:18.578143 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.588399 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xjpbr:\" at 100.104.141.186:80/TCP\nI0111 20:29:18.619469 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.666026 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.675467 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-kbd7z:\" at 100.110.227.85:80/TCP\nI0111 20:29:18.706598 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.716286 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5qnq6:\" at 100.104.135.167:80/TCP\nI0111 20:29:18.747290 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.756840 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jwks9:\" at 100.109.201.181:80/TCP\nI0111 20:29:18.808869 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.820253 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5r6k6:\" at 100.104.54.3:80/TCP\nI0111 20:29:18.851771 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.861377 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sz22t:\" at 100.111.189.31:80/TCP\nI0111 20:29:18.892607 1 proxier.go:793] Not using `--random-fully` 
in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.902301 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hzpx5:\" at 100.107.14.246:80/TCP\nI0111 20:29:18.939786 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.950024 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-txx5w:\" at 100.109.37.31:80/TCP\nI0111 20:29:18.989971 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.001416 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-rlg5v:\" at 100.109.119.235:80/TCP\nI0111 20:29:19.032436 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.043042 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-989p5:\" at 100.111.73.68:80/TCP\nI0111 20:29:19.074529 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.085610 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5qfmz:\" at 100.109.157.3:80/TCP\nI0111 20:29:19.117336 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.159911 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.170620 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6wt25:\" at 100.107.20.220:80/TCP\nI0111 20:29:19.200481 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.210567 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-r9vdn:\" at 100.104.245.110:80/TCP\nI0111 20:29:19.249876 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.264537 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-7565s:\" at 100.111.12.117:80/TCP\nI0111 20:29:19.304553 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.320619 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-prmvm:\" at 100.106.81.1:80/TCP\nI0111 20:29:19.359830 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.379899 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-dz8hd:\" at 100.107.78.41:80/TCP\nI0111 20:29:19.425091 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.445169 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bzdlz:\" at 100.109.126.85:80/TCP\nI0111 20:29:19.445190 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hkxhd:\" at 100.108.205.83:80/TCP\nI0111 20:29:19.483913 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 20:29:19.500770 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fxd6l:\" at 100.108.38.225:80/TCP\nI0111 20:29:19.544747 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.555734 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-s65vs:\" at 100.109.41.46:80/TCP\nI0111 20:29:19.586662 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.598068 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6mc9h:\" at 100.108.60.188:80/TCP\nI0111 20:29:19.628811 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.641412 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-b6448:\" at 100.110.134.144:80/TCP\nI0111 20:29:19.672026 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.720494 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.731450 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-55gtf:\" at 100.107.132.2:80/TCP\nI0111 20:29:19.762103 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.773424 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wrknt:\" at 100.107.232.191:80/TCP\nI0111 20:29:19.813182 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.824448 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8md9n:\" at 100.108.157.224:80/TCP\nI0111 20:29:19.855664 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.866960 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6tkm4:\" at 100.109.47.117:80/TCP\nI0111 20:29:19.897932 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.909230 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5pmk8:\" at 100.104.139.0:80/TCP\nI0111 20:29:19.939926 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.951576 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sxsrj:\" at 100.106.241.95:80/TCP\nI0111 20:29:19.982175 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.998544 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-q2xhr:\" at 100.109.33.166:80/TCP\nI0111 20:29:20.039765 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.052604 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8hxbr:\" at 100.108.14.244:80/TCP\nI0111 
20:29:20.083503 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.095928 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-tqpks:\" at 100.111.101.9:80/TCP\nI0111 20:29:20.126873 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.141940 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hksc7:\" at 100.107.253.69:80/TCP\nI0111 20:29:20.184609 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.197809 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wvx2k:\" at 100.108.121.10:80/TCP\nI0111 20:29:20.229129 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.241320 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-rq5x4:\" at 100.104.140.34:80/TCP\nI0111 20:29:20.272773 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.331440 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.344194 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-l2w82:\" at 100.106.180.55:80/TCP\nI0111 20:29:20.344214 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vxsbd:\" at 100.108.232.3:80/TCP\nI0111 20:29:20.375872 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.388208 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bw56h:\" at 100.108.110.225:80/TCP\nI0111 20:29:20.426357 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.439316 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-cqvqq:\" at 100.104.92.79:80/TCP\nI0111 20:29:20.484575 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.515208 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-t45wj:\" at 100.108.132.229:80/TCP\nI0111 20:29:20.597025 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.631760 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-zq9n9:\" at 100.105.131.189:80/TCP\nI0111 20:29:20.631781 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6f2l9:\" at 100.108.48.179:80/TCP\nI0111 20:29:20.722488 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.767591 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jp7dq:\" at 100.105.120.67:80/TCP\nI0111 20:29:20.767702 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-mgcrt:\" at 100.111.9.64:80/TCP\nI0111 20:29:20.767731 1 service.go:357] Adding new service port 
\"svc-latency-7431/latency-svc-7fm9r:\" at 100.111.0.3:80/TCP\nI0111 20:29:20.822452 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.844744 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-nzg9v:\" at 100.108.215.174:80/TCP\nI0111 20:29:20.844767 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c6t4p:\" at 100.109.205.206:80/TCP\nI0111 20:29:20.905589 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.932572 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wkxr8:\" at 100.111.22.54:80/TCP\nI0111 20:29:21.001802 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.023762 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-ptj6z:\" at 100.105.16.240:80/TCP\nI0111 20:29:21.023786 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8llmx:\" at 100.106.150.163:80/TCP\nI0111 20:29:21.073167 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.093326 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c7vtv:\" at 100.109.120.118:80/TCP\nI0111 20:29:21.093350 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-dhr5n:\" at 100.108.20.221:80/TCP\nI0111 20:29:21.142620 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.157579 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-shrdl:\" at 100.108.44.155:80/TCP\nI0111 20:29:21.199205 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.226807 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-rzpqh:\" at 100.106.186.206:80/TCP\nI0111 20:29:21.289042 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 20:29:21.304666 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 22 failed\n)\nI0111 20:29:21.304888 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 20:29:21.304931 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-dqmd8:\" at 100.110.13.169:80/TCP\nI0111 20:29:21.304951 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-f72wt:\" at 100.108.171.237:80/TCP\nI0111 20:29:21.352943 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.377153 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-j99bh:\" at 100.110.60.255:80/TCP\nI0111 20:29:21.447146 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.472183 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-pssrp:\" at 100.109.198.1:80/TCP\nI0111 20:29:21.472215 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-z5cpl:\" at 
100.109.2.217:80/TCP\nI0111 20:29:21.517765 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.542429 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-2rnrs:\" at 100.106.44.212:80/TCP\nI0111 20:29:21.542453 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9zslb:\" at 100.107.118.24:80/TCP\nI0111 20:29:21.588973 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.611701 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9m945:\" at 100.108.238.138:80/TCP\nI0111 20:29:21.646826 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.661125 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sr8ck:\" at 100.110.247.217:80/TCP\nI0111 20:29:21.694031 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.709054 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vvf98:\" at 100.110.252.124:80/TCP\nI0111 20:29:21.760427 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.774834 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-2rxnn:\" at 100.107.60.33:80/TCP\nI0111 20:29:21.808172 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.822329 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5q62h:\" at 100.108.246.174:80/TCP\nI0111 20:29:21.860686 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.875392 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c9vww:\" at 100.109.238.2:80/TCP\nI0111 20:29:21.910135 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.925566 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6rzg7:\" at 100.109.162.214:80/TCP\nI0111 20:29:21.960401 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.977254 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bnbzz:\" at 100.105.82.22:80/TCP\nI0111 20:29:22.024341 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.040404 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xk5p9:\" at 100.108.249.86:80/TCP\nI0111 20:29:22.040426 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xc6xg:\" at 100.106.244.154:80/TCP\nI0111 20:29:22.077841 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.093413 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-mqmb7:\" at 100.104.140.116:80/TCP\nI0111 20:29:22.140522 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.157442 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-42ntw:\" at 100.108.223.123:80/TCP\nI0111 20:29:22.194508 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.209429 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-45vjl:\" at 100.104.82.145:80/TCP\nI0111 20:29:22.245463 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.262959 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9vf74:\" at 100.106.216.184:80/TCP\nI0111 20:29:22.322832 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.348230 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-r6bp4:\" at 100.111.63.57:80/TCP\nI0111 20:29:22.348406 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-47l2j:\" at 100.104.41.40:80/TCP\nI0111 20:29:22.384458 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.400768 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jz2fm:\" at 100.109.118.224:80/TCP\nI0111 20:29:22.437779 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.461049 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5vdf6:\" at 100.108.59.15:80/TCP\nI0111 20:29:22.524141 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.550462 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wf46g:\" at 100.110.53.47:80/TCP\nI0111 20:29:22.550525 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-h7vk6:\" at 100.109.148.180:80/TCP\nI0111 20:29:22.586840 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.602944 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vd6nh:\" at 100.104.8.153:80/TCP\nI0111 20:29:22.640240 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.675808 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wfmp8:\" at 100.110.81.234:80/TCP\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-nn5px ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-rq4kf ====\nI0111 15:56:05.753437 1 flags.go:33] FLAG: --add-dir-header=\"false\"\nI0111 15:56:05.753492 1 flags.go:33] FLAG: --alsologtostderr=\"false\"\nI0111 15:56:05.753498 1 flags.go:33] FLAG: --application-metrics-count-limit=\"100\"\nI0111 15:56:05.753502 1 flags.go:33] FLAG: --azure-container-registry-config=\"\"\nI0111 15:56:05.753511 1 flags.go:33] FLAG: --bind-address=\"0.0.0.0\"\nI0111 15:56:05.753518 1 flags.go:33] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0111 15:56:05.753540 1 flags.go:33] FLAG: --cleanup=\"false\"\nI0111 
15:56:05.753546 1 flags.go:33] FLAG: --cleanup-ipvs=\"true\"\nI0111 15:56:05.753551 1 flags.go:33] FLAG: --cloud-provider-gce-lb-src-cidrs=\"130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16\"\nI0111 15:56:05.753558 1 flags.go:33] FLAG: --cluster-cidr=\"\"\nI0111 15:56:05.753562 1 flags.go:33] FLAG: --config=\"/var/lib/kube-proxy-config/config.yaml\"\nI0111 15:56:05.753568 1 flags.go:33] FLAG: --config-sync-period=\"15m0s\"\nI0111 15:56:05.753574 1 flags.go:33] FLAG: --conntrack-max-per-core=\"32768\"\nI0111 15:56:05.753581 1 flags.go:33] FLAG: --conntrack-min=\"131072\"\nI0111 15:56:05.753589 1 flags.go:33] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0111 15:56:05.753594 1 flags.go:33] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0111 15:56:05.753598 1 flags.go:33] FLAG: --container-hints=\"/etc/cadvisor/container_hints.json\"\nI0111 15:56:05.753603 1 flags.go:33] FLAG: --containerd=\"/run/containerd/containerd.sock\"\nI0111 15:56:05.753608 1 flags.go:33] FLAG: --containerd-namespace=\"k8s.io\"\nI0111 15:56:05.753613 1 flags.go:33] FLAG: --default-not-ready-toleration-seconds=\"300\"\nI0111 15:56:05.753618 1 flags.go:33] FLAG: --default-unreachable-toleration-seconds=\"300\"\nI0111 15:56:05.753623 1 flags.go:33] FLAG: --disable-root-cgroup-stats=\"false\"\nI0111 15:56:05.753628 1 flags.go:33] FLAG: --docker=\"unix:///var/run/docker.sock\"\nI0111 15:56:05.753633 1 flags.go:33] FLAG: --docker-env-metadata-whitelist=\"\"\nI0111 15:56:05.753638 1 flags.go:33] FLAG: --docker-only=\"false\"\nI0111 15:56:05.753641 1 flags.go:33] FLAG: --docker-root=\"/var/lib/docker\"\nI0111 15:56:05.753646 1 flags.go:33] FLAG: --docker-tls=\"false\"\nI0111 15:56:05.753650 1 flags.go:33] FLAG: --docker-tls-ca=\"ca.pem\"\nI0111 15:56:05.753654 1 flags.go:33] FLAG: --docker-tls-cert=\"cert.pem\"\nI0111 15:56:05.753660 1 flags.go:33] FLAG: --docker-tls-key=\"key.pem\"\nI0111 15:56:05.753664 1 flags.go:33] FLAG: --enable-load-reader=\"false\"\nI0111 15:56:05.753673 1 flags.go:33] FLAG: --event-storage-age-limit=\"default=0\"\nI0111 15:56:05.753678 1 flags.go:33] FLAG: --event-storage-event-limit=\"default=0\"\nI0111 15:56:05.753682 1 flags.go:33] FLAG: --feature-gates=\"\"\nI0111 15:56:05.753689 1 flags.go:33] FLAG: --global-housekeeping-interval=\"1m0s\"\nI0111 15:56:05.753693 1 flags.go:33] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0111 15:56:05.753698 1 flags.go:33] FLAG: --healthz-port=\"10256\"\nI0111 15:56:05.753703 1 flags.go:33] FLAG: --help=\"false\"\nI0111 15:56:05.753709 1 flags.go:33] FLAG: --hostname-override=\"\"\nI0111 15:56:05.753713 1 flags.go:33] FLAG: --housekeeping-interval=\"10s\"\nI0111 15:56:05.753719 1 flags.go:33] FLAG: --iptables-masquerade-bit=\"14\"\nI0111 15:56:05.753725 1 flags.go:33] FLAG: --iptables-min-sync-period=\"0s\"\nI0111 15:56:05.753730 1 flags.go:33] FLAG: --iptables-sync-period=\"30s\"\nI0111 15:56:05.753735 1 flags.go:33] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0111 15:56:05.753745 1 flags.go:33] FLAG: --ipvs-min-sync-period=\"0s\"\nI0111 15:56:05.753750 1 flags.go:33] FLAG: --ipvs-scheduler=\"\"\nI0111 15:56:05.753754 1 flags.go:33] FLAG: --ipvs-strict-arp=\"false\"\nI0111 15:56:05.753758 1 flags.go:33] FLAG: --ipvs-sync-period=\"30s\"\nI0111 15:56:05.753763 1 flags.go:33] FLAG: --kube-api-burst=\"10\"\nI0111 15:56:05.753767 1 flags.go:33] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0111 15:56:05.753773 1 flags.go:33] FLAG: --kube-api-qps=\"5\"\nI0111 15:56:05.753780 1 flags.go:33] FLAG: 
--kubeconfig=\"\"\nI0111 15:56:05.753784 1 flags.go:33] FLAG: --log-backtrace-at=\":0\"\nI0111 15:56:05.753791 1 flags.go:33] FLAG: --log-cadvisor-usage=\"false\"\nI0111 15:56:05.753797 1 flags.go:33] FLAG: --log-dir=\"\"\nI0111 15:56:05.753802 1 flags.go:33] FLAG: --log-file=\"\"\nI0111 15:56:05.753807 1 flags.go:33] FLAG: --log-file-max-size=\"1800\"\nI0111 15:56:05.753812 1 flags.go:33] FLAG: --log-flush-frequency=\"5s\"\nI0111 15:56:05.753818 1 flags.go:33] FLAG: --logtostderr=\"true\"\nI0111 15:56:05.753823 1 flags.go:33] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0111 15:56:05.753830 1 flags.go:33] FLAG: --masquerade-all=\"false\"\nI0111 15:56:05.753835 1 flags.go:33] FLAG: --master=\"\"\nI0111 15:56:05.753840 1 flags.go:33] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0111 15:56:05.753845 1 flags.go:33] FLAG: --metrics-port=\"10249\"\nI0111 15:56:05.753851 1 flags.go:33] FLAG: --nodeport-addresses=\"[]\"\nI0111 15:56:05.753857 1 flags.go:33] FLAG: --oom-score-adj=\"-999\"\nI0111 15:56:05.753862 1 flags.go:33] FLAG: --profiling=\"false\"\nI0111 15:56:05.753867 1 flags.go:33] FLAG: --proxy-mode=\"\"\nI0111 15:56:05.753877 1 flags.go:33] FLAG: --proxy-port-range=\"\"\nI0111 15:56:05.753883 1 flags.go:33] FLAG: --skip-headers=\"false\"\nI0111 15:56:05.753888 1 flags.go:33] FLAG: --skip-log-headers=\"false\"\nI0111 15:56:05.753895 1 flags.go:33] FLAG: --stderrthreshold=\"2\"\nI0111 15:56:05.753900 1 flags.go:33] FLAG: --storage-driver-buffer-duration=\"1m0s\"\nI0111 15:56:05.753905 1 flags.go:33] FLAG: --storage-driver-db=\"cadvisor\"\nI0111 15:56:05.753910 1 flags.go:33] FLAG: --storage-driver-host=\"localhost:8086\"\nI0111 15:56:05.753914 1 flags.go:33] FLAG: --storage-driver-password=\"root\"\nI0111 15:56:05.753919 1 flags.go:33] FLAG: --storage-driver-secure=\"false\"\nI0111 15:56:05.753925 1 flags.go:33] FLAG: --storage-driver-table=\"stats\"\nI0111 15:56:05.753929 1 flags.go:33] FLAG: --storage-driver-user=\"root\"\nI0111 15:56:05.753934 1 flags.go:33] FLAG: --udp-timeout=\"250ms\"\nI0111 15:56:05.753940 1 flags.go:33] FLAG: --update-machine-info-interval=\"5m0s\"\nI0111 15:56:05.753945 1 flags.go:33] FLAG: --v=\"2\"\nI0111 15:56:05.753950 1 flags.go:33] FLAG: --version=\"false\"\nI0111 15:56:05.753959 1 flags.go:33] FLAG: --vmodule=\"\"\nI0111 15:56:05.753964 1 flags.go:33] FLAG: --write-config-to=\"\"\nI0111 15:56:05.754689 1 feature_gate.go:216] feature gates: &{map[]}\nI0111 15:56:05.815438 1 node.go:135] Successfully retrieved node IP: 10.250.27.25\nI0111 15:56:05.815467 1 server_others.go:150] Using iptables Proxier.\nI0111 15:56:05.817074 1 server.go:529] Version: v1.16.4\nI0111 15:56:05.817478 1 conntrack.go:52] Setting nf_conntrack_max to 1048576\nI0111 15:56:05.817636 1 mount_linux.go:153] Detected OS without systemd\nI0111 15:56:05.817854 1 conntrack.go:83] Setting conntrack hashsize to 262144\nI0111 15:56:05.822595 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0111 15:56:05.822650 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0111 15:56:05.822814 1 config.go:131] Starting endpoints config controller\nI0111 15:56:05.822922 1 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0111 15:56:05.823073 1 config.go:313] Starting service config controller\nI0111 15:56:05.823189 1 shared_informer.go:197] Waiting for caches to sync for service config\nI0111 15:56:05.923475 1 shared_informer.go:204] Caches are synced for service 
config \nI0111 15:56:05.923673 1 shared_informer.go:204] Caches are synced for endpoints config \nI0111 15:56:05.923726 1 proxier.go:678] Not syncing iptables until Services and Endpoints have been received from master\nI0111 15:56:05.923876 1 service.go:357] Adding new service port \"kube-system/addons-nginx-ingress-controller:https\" at 100.107.194.218:443/TCP\nI0111 15:56:05.923901 1 service.go:357] Adding new service port \"kube-system/addons-nginx-ingress-controller:http\" at 100.107.194.218:80/TCP\nI0111 15:56:05.923933 1 service.go:357] Adding new service port \"kube-system/addons-nginx-ingress-nginx-ingress-k8s-backend:\" at 100.104.186.216:80/TCP\nI0111 15:56:05.923950 1 service.go:357] Adding new service port \"kube-system/calico-typha:calico-typha\" at 100.106.19.47:5473/TCP\nI0111 15:56:05.923967 1 service.go:357] Adding new service port \"kube-system/metrics-server:\" at 100.108.63.140:443/TCP\nI0111 15:56:05.923998 1 service.go:357] Adding new service port \"default/kubernetes:https\" at 100.104.0.1:443/TCP\nI0111 15:56:05.924055 1 service.go:357] Adding new service port \"kube-system/kubernetes-dashboard:\" at 100.106.164.167:443/TCP\nI0111 15:56:05.924126 1 service.go:357] Adding new service port \"kube-system/vpn-shoot:openvpn\" at 100.108.198.84:4314/TCP\nI0111 15:56:05.924142 1 service.go:357] Adding new service port \"kube-system/kube-dns:dns\" at 100.104.0.10:53/UDP\nI0111 15:56:05.924201 1 service.go:357] Adding new service port \"kube-system/kube-dns:dns-tcp\" at 100.104.0.10:53/TCP\nI0111 15:56:05.924216 1 service.go:357] Adding new service port \"kube-system/kube-dns:metrics\" at 100.104.0.10:9153/TCP\nI0111 15:56:05.924247 1 service.go:357] Adding new service port \"kube-system/blackbox-exporter:probe\" at 100.107.248.105:9115/TCP\nI0111 15:56:05.948034 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:05.948207 1 proxier.go:1519] Opened local port \"nodePort for kube-system/addons-nginx-ingress-controller:http\" (:32046/tcp)\nI0111 15:56:05.948308 1 proxier.go:1519] Opened local port \"nodePort for kube-system/vpn-shoot:openvpn\" (:32265/tcp)\nI0111 15:56:05.948389 1 proxier.go:1519] Opened local port \"nodePort for kube-system/addons-nginx-ingress-controller:https\" (:32298/tcp)\nI0111 15:56:23.130369 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:24.951118 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:32.140709 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:35.387397 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:38.556481 1 proxier.go:700] Stale udp service kube-system/kube-dns:dns -> 100.104.0.10\nI0111 15:56:38.576648 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:40.720984 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:56:41.921344 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 15:56:48.135492 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:57:02.954359 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:57:32.979241 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:58:03.004038 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:58:33.035159 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:59:03.060672 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:59:33.084610 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:59:49.226451 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 15:59:58.191897 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:00:28.216799 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:00:57.677063 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:01:13.063374 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:01:43.088577 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:02:13.112505 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:02:27.595679 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 16:02:27.595703 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0111 16:02:27.621682 1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 478 (1516)\nI0111 16:02:43.136511 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:03:13.167982 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:03:43.194509 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:04:13.219084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:04:43.243675 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:05:13.268416 1 proxier.go:793] Not using 
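The recurring "Not using `--random-fully` in the MASQUERADE rule" message, which dominates the rest of this log, means the node's iptables binary is too old for that flag (it arrived around iptables 1.6.2), so this is a capability check rather than an error. A rough sketch of such a check by parsing `iptables --version`; the exact minimum version is an assumption here, and the real detection lives in Kubernetes' iptables utility wrapper:

    package main

    import (
        "fmt"
        "os/exec"
        "regexp"
        "strconv"
    )

    // hasRandomFully reports whether the local iptables looks new enough
    // (assumed: >= 1.6.2) to support `--random-fully` on MASQUERADE rules.
    func hasRandomFully() (bool, error) {
        out, err := exec.Command("iptables", "--version").Output() // e.g. "iptables v1.6.1"
        if err != nil {
            return false, err
        }
        m := regexp.MustCompile(`v(\d+)\.(\d+)\.(\d+)`).FindStringSubmatch(string(out))
        if m == nil {
            return false, fmt.Errorf("cannot parse version from %q", out)
        }
        major, _ := strconv.Atoi(m[1])
        minor, _ := strconv.Atoi(m[2])
        patch, _ := strconv.Atoi(m[3])
        return major*10000+minor*100+patch >= 10602, nil // 1.6.2
    }

    func main() {
        ok, err := hasRandomFully()
        fmt.Println("supports --random-fully:", ok, "err:", err)
    }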
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:05:43.293219 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:06:13.318314 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:06:43.343414 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:13.369265 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:15.956805 1 service.go:357] Adding new service port \"webhook-2204/e2e-test-webhook:\" at 100.104.248.71:8443/TCP\nI0111 16:07:15.979222 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:16.003777 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:23.308877 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:23.338562 1 service.go:382] Removing service port \"webhook-2204/e2e-test-webhook:\"\nI0111 16:07:23.386891 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:07:53.412084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:08:23.442146 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:08:53.467984 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:08:59.740555 1 service.go:357] Adding new service port \"pods-886/fooservice:\" at 100.110.37.249:8765/TCP\nI0111 16:08:59.762344 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:08:59.786879 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:09:07.683806 1 service.go:382] Removing service port \"pods-886/fooservice:\"\nI0111 16:09:07.706048 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:09:07.739246 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:09:37.764379 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:10:07.790493 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:10:37.816164 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:11:07.841414 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 16:11:37.866665 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:12:07.892117 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:12:37.917965 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:13:07.943538 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:13:32.943613 1 service.go:357] Adding new service port \"webhook-667/e2e-test-webhook:\" at 100.104.214.167:8443/TCP\nI0111 16:13:32.964651 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:13:32.988140 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:13:53.393635 1 service.go:382] Removing service port \"webhook-667/e2e-test-webhook:\"\nI0111 16:13:53.415229 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:13:53.443836 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:23.468300 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:26.056174 1 service.go:357] Adding new service port \"crd-webhook-5744/e2e-test-crd-conversion-webhook:\" at 100.110.200.70:9443/TCP\nI0111 16:14:26.077767 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:26.102045 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:33.567382 1 service.go:382] Removing service port \"crd-webhook-5744/e2e-test-crd-conversion-webhook:\"\nI0111 16:14:33.589433 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:14:33.659234 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:15:03.686951 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:15:33.713749 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:16:02.047758 1 service.go:357] Adding new service port \"dns-1144/dns-test-service-3:http\" at 100.109.136.3:80/TCP\nI0111 16:16:02.080898 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:16:04.957638 1 service.go:382] Removing service port \"dns-1144/dns-test-service-3:http\"\nI0111 16:16:04.981081 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
16:16:35.006478 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\n[... the same proxier.go:793 message repeats, mostly at ~30s intervals, from 16:17:05 through 16:25:45 ...]\nI0111 16:26:15.661571 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support
it\nI0111 16:26:45.686047 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:27:15.710504 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:27:45.741417 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:28:15.765933 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:28:45.791102 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:29:15.816347 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:29:45.842265 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:30:15.872021 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:30:45.899342 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:31:15.923580 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:31:45.948616 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:15.973511 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:26.277602 1 service.go:357] Adding new service port \"services-8170/endpoint-test2:\" at 100.110.2.36:80/TCP\nI0111 16:32:26.299480 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:26.323628 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:28.283745 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:30.296949 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:30.761715 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:31.029990 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:31.175194 1 service.go:382] Removing service port \"services-8170/endpoint-test2:\"\nI0111 16:32:31.197989 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:32:31.223565 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:33:01.249292 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule 
for iptables because the local version of iptables does not support it\nI0111 16:33:31.273860 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:01.299090 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:17.435969 1 service.go:357] Adding new service port \"webhook-3494/e2e-test-webhook:\" at 100.107.103.21:8443/TCP\nI0111 16:34:17.458080 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:17.482667 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:24.402136 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:24.452639 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:24.456846 1 service.go:382] Removing service port \"webhook-3494/e2e-test-webhook:\"\nI0111 16:34:24.484322 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:34:54.510159 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:35:24.534854 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:35:54.559340 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:36:24.584248 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:36:54.609354 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:37:24.634870 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:37:54.660267 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:38:24.686933 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:38:54.711455 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:24.738793 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:42.095783 1 service.go:357] Adding new service port \"aggregator-2165/sample-api:\" at 100.111.244.145:7443/TCP\nI0111 16:39:42.128549 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:42.167197 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:54.269862 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:56.989488 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:57.095694 1 service.go:382] Removing service port \"aggregator-2165/sample-api:\"\nI0111 16:39:57.126265 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:39:57.154778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:40:27.180986 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:40:50.727829 1 service.go:357] Adding new service port \"services-706/externalname-service:http\" at 100.107.75.177:80/TCP\nI0111 16:40:50.749412 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:40:50.749574 1 proxier.go:1519] Opened local port \"nodePort for services-706/externalname-service:http\" (:31646/tcp)\nI0111 16:40:50.776567 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:40:52.964026 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:40:52.989439 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:02.515725 1 service.go:382] Removing service port \"services-706/externalname-service:http\"\nI0111 16:41:02.538947 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:02.564809 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:32.589780 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:51.900095 1 service.go:357] Adding new service port \"proxy-5821/proxy-service-7k47l:tlsportname1\" at 100.107.195.169:443/TCP\nI0111 16:41:51.900120 1 service.go:357] Adding new service port \"proxy-5821/proxy-service-7k47l:tlsportname2\" at 100.107.195.169:444/TCP\nI0111 16:41:51.900138 1 service.go:357] Adding new service port \"proxy-5821/proxy-service-7k47l:portname1\" at 100.107.195.169:80/TCP\nI0111 16:41:51.900149 1 service.go:357] Adding new service port \"proxy-5821/proxy-service-7k47l:portname2\" at 100.107.195.169:81/TCP\nI0111 16:41:51.929122 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:51.962478 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:41:57.779588 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:42:01.057133 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
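Each "Adding new service port" / "Removing service port" pair in this log (the e2e webhook services, externalname-service, proxy-service-7k47l with its four named ports, and so on) corresponds to kube-proxy tracking one named port of one Service, keyed by namespace/name:port and resolved to a cluster IP, port and protocol before the iptables rules are rewritten. A simplified, illustrative data structure; the type names are mine, not kube-proxy's:

    package main

    import "fmt"

    // servicePortName identifies one named port of one Service,
    // e.g. "proxy-5821/proxy-service-7k47l:tlsportname1".
    type servicePortName struct {
        Namespace, Name, Port string
    }

    // servicePortInfo is what the proxier needs in order to program a rule.
    type servicePortInfo struct {
        ClusterIP string
        Port      int
        Protocol  string
    }

    func main() {
        services := map[servicePortName]servicePortInfo{}

        // Mirrors "Adding new service port ... at 100.107.195.169:443/TCP" above.
        key := servicePortName{"proxy-5821", "proxy-service-7k47l", "tlsportname1"}
        services[key] = servicePortInfo{ClusterIP: "100.107.195.169", Port: 443, Protocol: "TCP"}

        // Mirrors "Removing service port ..." once the test deletes the Service.
        delete(services, key)

        fmt.Println("tracked service ports:", len(services))
    }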
it\nI0111 16:42:19.054223 1 service.go:382] Removing service port \"proxy-5821/proxy-service-7k47l:portname1\"\nI0111 16:42:19.054250 1 service.go:382] Removing service port \"proxy-5821/proxy-service-7k47l:portname2\"\nI0111 16:42:19.054257 1 service.go:382] Removing service port \"proxy-5821/proxy-service-7k47l:tlsportname1\"\nI0111 16:42:19.054264 1 service.go:382] Removing service port \"proxy-5821/proxy-service-7k47l:tlsportname2\"\nI0111 16:42:19.076685 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:42:19.104012 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:42:49.130592 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:43:19.156644 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:43:49.183117 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:44:19.209573 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:44:49.234109 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:19.258823 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:23.245069 1 service.go:357] Adding new service port \"webhook-3118/e2e-test-webhook:\" at 100.108.100.65:8443/TCP\nI0111 16:45:23.266718 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:23.290475 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:30.827657 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:30.850662 1 service.go:382] Removing service port \"webhook-3118/e2e-test-webhook:\"\nI0111 16:45:30.878796 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:30.909494 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:49.328720 1 service.go:357] Adding new service port \"webhook-1264/e2e-test-webhook:\" at 100.110.182.82:8443/TCP\nI0111 16:45:49.350853 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:49.375494 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:56.250075 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:45:56.256647 1 service.go:382] Removing service port \"webhook-1264/e2e-test-webhook:\"\nI0111 16:45:56.309080 1 proxier.go:793] Not using `--random-fully` in 
the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:26.336475 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:34.014191 1 service.go:357] Adding new service port \"webhook-3181/e2e-test-webhook:\" at 100.109.150.151:8443/TCP\nI0111 16:46:34.036628 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:34.061484 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:52.233498 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:52.271189 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:46:52.276415 1 service.go:382] Removing service port \"webhook-3181/e2e-test-webhook:\"\nI0111 16:46:52.303775 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:47:22.328111 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:47:52.353204 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:48:22.378186 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:48:52.403162 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:49:22.428193 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:49:52.453986 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:50:22.485725 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:50:52.511653 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:51:22.551568 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:51:52.580648 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:52:22.606143 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:52:52.631376 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:53:22.657172 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:53:52.683626 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
it\nI0111 16:54:22.707711 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:54:48.419265 1 service.go:357] Adding new service port \"webhook-2228/e2e-test-webhook:\" at 100.109.182.153:8443/TCP\nI0111 16:54:48.440600 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:54:48.464323 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:54:55.951735 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:54:55.956467 1 service.go:382] Removing service port \"webhook-2228/e2e-test-webhook:\"\nI0111 16:54:55.987436 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:55:26.013440 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:55:56.039352 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:26.065446 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:27.538793 1 service.go:357] Adding new service port \"webhook-9087/e2e-test-webhook:\" at 100.108.52.131:8443/TCP\nI0111 16:56:27.560984 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:27.585612 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:35.100643 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:56:35.104250 1 service.go:382] Removing service port \"webhook-9087/e2e-test-webhook:\"\nI0111 16:56:35.125234 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:57:05.162413 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:57:35.186741 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:05.210943 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:35.241544 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:38.043937 1 service.go:357] Adding new service port \"crd-webhook-5777/e2e-test-crd-conversion-webhook:\" at 100.110.255.25:9443/TCP\nI0111 16:58:38.070716 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:38.094727 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:45.974841 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:58:46.056887 1 service.go:382] Removing service port \"crd-webhook-5777/e2e-test-crd-conversion-webhook:\"\nI0111 16:58:46.086770 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:07.885769 1 service.go:357] Adding new service port \"services-8385/nodeport-test:http\" at 100.108.130.218:80/TCP\nI0111 16:59:07.907497 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:07.907701 1 proxier.go:1519] Opened local port \"nodePort for services-8385/nodeport-test:http\" (:30629/tcp)\nI0111 16:59:07.931365 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:09.679000 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:09.704033 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:25.780057 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:25.783994 1 service.go:382] Removing service port \"services-8385/nodeport-test:http\"\nI0111 16:59:25.809021 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 16:59:55.834772 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:00:25.859478 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:00:55.889433 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:01:25.915106 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:01:55.941871 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:02:25.966383 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:02:52.889412 1 service.go:357] Adding new service port \"resourcequota-4665/test-service:\" at 100.111.227.87:80/TCP\nI0111 17:02:52.910556 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:02:55.070860 1 service.go:382] Removing service port \"resourcequota-4665/test-service:\"\nI0111 17:02:55.091772 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:03:25.116381 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:03:55.141883 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version 
of iptables does not support it\nI0111 17:04:25.167872 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:04:55.194339 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:05:25.220310 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:05:55.246469 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:06:25.272425 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:06:55.296777 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:07:25.321386 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:07:55.346048 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:08:25.370977 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:08:55.398358 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:09:25.423501 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:09:55.448449 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:10:25.474447 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:10:55.500711 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:11:25.527076 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 17:11:32.836589 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=14276&timeout=8m58s&timeoutSeconds=538&watch=true: net/http: TLS handshake timeout\nE0111 17:11:32.836645 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=14365&timeout=8m42s&timeoutSeconds=522&watch=true: net/http: TLS handshake timeout\nI0111 17:11:43.839176 1 trace.go:116] Trace[392149778]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-01-11 17:11:33.836716031 +0000 UTC m=+4528.191273182) (total time: 10.00241272s):\nTrace[392149778]: 
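The watch failures just above ("TLS handshake timeout" on the Services and Endpoints watches, the earlier "too old resource version", and the "Reflector ListAndWatch" trace) show client-go's reflector dropping its watch and re-listing from the API server; components built on shared informers get that retry behaviour automatically. A small, self-contained sketch of watching Services the same way; the kubeconfig location is read from the KUBECONFIG environment variable purely as an assumption for this example:

    package main

    import (
        "fmt"
        "os"
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: KUBECONFIG points at a usable kubeconfig file.
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // The informer's reflector lists, then watches, and re-lists on its own
        // when a watch ends with an error such as a TLS handshake timeout or a
        // "too old resource version" response.
        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
        serviceInformer := factory.Core().V1().Services().Informer()
        serviceInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                if svc, ok := obj.(*v1.Service); ok {
                    fmt.Println("service added:", svc.Namespace+"/"+svc.Name)
                }
            },
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        cache.WaitForCacheSync(stop, serviceInformer.HasSynced)
        select {} // keep watching; the reflector handles reconnects
    }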
[10.002380102s] [10.002380102s] Objects listed\nI0111 17:11:43.841015 1 trace.go:116] Trace[1102261016]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-01-11 17:11:33.837766482 +0000 UTC m=+4528.192323623) (total time: 10.003231738s):\nTrace[1102261016]: [10.003231738s] [10.003231738s] END\nE0111 17:11:43.841030 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Endpoints: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:55.559606 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:12:25.586332 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:12:55.611766 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:13:25.637510 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:13:55.669621 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:14:25.696795 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:14:47.694670 1 service.go:357] Adding new service port \"webhook-3412/e2e-test-webhook:\" at 100.107.124.220:8443/TCP\nI0111 17:14:47.722063 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:14:47.754646 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:14:54.289728 1 service.go:382] Removing service port \"webhook-3412/e2e-test-webhook:\"\nI0111 17:14:54.329883 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:14:54.366539 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:15:24.395041 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:15:54.420202 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:16:24.446084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:16:54.471497 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:17:24.497135 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:17:54.522116 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:18:24.547629 1 proxier.go:793] Not 
using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:18:54.572781 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:19:24.604345 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:19:54.628381 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:20:24.653648 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:20:54.678744 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:14.004791 1 service.go:357] Adding new service port \"webhook-1291/e2e-test-webhook:\" at 100.109.201.250:8443/TCP\nI0111 17:21:14.027933 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:14.052750 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:20.926428 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:20.950933 1 service.go:382] Removing service port \"webhook-1291/e2e-test-webhook:\"\nI0111 17:21:20.972470 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:21:50.998194 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:00.022731 1 service.go:357] Adding new service port \"services-1502/multi-endpoint-test:portname1\" at 100.109.36.91:80/TCP\nI0111 17:22:00.022917 1 service.go:357] Adding new service port \"services-1502/multi-endpoint-test:portname2\" at 100.109.36.91:81/TCP\nI0111 17:22:00.045377 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:00.070565 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:01.822629 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:03.835461 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:04.501427 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:04.768333 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:22:04.916596 1 service.go:382] Removing service port \"services-1502/multi-endpoint-test:portname1\"\nI0111 17:22:04.916621 1 service.go:382] Removing service port \"services-1502/multi-endpoint-test:portname2\"\nI0111 17:22:04.943799 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 17:22:04.967808 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\n[... the same proxier.go:793 message repeats roughly every 30 seconds from 17:22:34 through 17:32:05 ...]\nI0111 17:32:35.544173 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because
the local version of iptables does not support it\nI0111 17:33:05.569062 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:33:33.150131 1 service.go:357] Adding new service port \"dns-5967/test-service-2:http\" at 100.106.60.114:80/TCP\nI0111 17:33:33.171621 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:33:33.195462 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:33:34.660180 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:04.693337 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:09.530490 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:09.551815 1 service.go:382] Removing service port \"dns-5967/test-service-2:http\"\nI0111 17:34:09.581872 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:09.616790 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:34:39.644469 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:09.669386 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:39.695756 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:47.998831 1 service.go:357] Adding new service port \"webhook-9616/e2e-test-webhook:\" at 100.109.158.139:8443/TCP\nI0111 17:35:48.022082 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:48.046878 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:56.251718 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:56.392880 1 service.go:382] Removing service port \"webhook-9616/e2e-test-webhook:\"\nI0111 17:35:56.429149 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:35:56.462172 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:36:26.497601 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:36:56.524697 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:37:26.550248 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 17:37:56.586963 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:38:26.614876 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:38:56.639427 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:39:26.670974 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:39:56.696841 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:40:26.722734 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:40:56.748578 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:41:26.776829 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:41:56.803035 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:42:26.827891 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:42:56.852816 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:43:26.877612 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:43:56.902807 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:16.241163 1 service.go:357] Adding new service port \"webhook-9359/e2e-test-webhook:\" at 100.106.95.64:8443/TCP\nI0111 17:44:16.263660 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:16.288101 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:24.339298 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:24.383149 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:24.564981 1 service.go:382] Removing service port \"webhook-9359/e2e-test-webhook:\"\nI0111 17:44:24.608340 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:44:54.636188 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:45:24.060857 1 service.go:357] Adding new service port \"webhook-2924/e2e-test-webhook:\" at 100.106.21.231:8443/TCP\nI0111 17:45:24.081952 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables 
because the local version of iptables does not support it\nI0111 17:45:24.105442 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:45:32.767166 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:45:32.805029 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:45:32.939480 1 service.go:382] Removing service port \"webhook-2924/e2e-test-webhook:\"\nI0111 17:45:32.963762 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:46:02.999304 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:46:33.026685 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:47:03.069766 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:47:33.106972 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:48:03.131726 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:48:33.156920 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:03.182456 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:33.230060 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:51.059416 1 service.go:357] Adding new service port \"webhook-3373/e2e-test-webhook:\" at 100.106.185.29:8443/TCP\nI0111 17:49:51.080457 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:51.104069 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:58.966304 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:49:59.241689 1 service.go:382] Removing service port \"webhook-3373/e2e-test-webhook:\"\nI0111 17:49:59.263461 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:29.288435 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.323357 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.796350 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fghsq:\" at 100.105.187.137:80/TCP\nI0111 17:50:59.818628 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 17:50:59.847606 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.892891 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lfwkn:\" at 100.108.2.133:80/TCP\nI0111 17:50:59.920925 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.924686 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zm7xz:\" at 100.108.179.223:80/TCP\nI0111 17:50:59.945726 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:50:59.978815 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8r2gm:\" at 100.110.229.48:80/TCP\nI0111 17:51:00.001231 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.005072 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-2vcvb:\" at 100.108.229.93:80/TCP\nI0111 17:51:00.005094 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vf4x2:\" at 100.109.53.238:80/TCP\nI0111 17:51:00.005141 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-j8b4c:\" at 100.108.86.36:80/TCP\nI0111 17:51:00.005156 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-p2tzk:\" at 100.105.48.215:80/TCP\nI0111 17:51:00.005172 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xv9mt:\" at 100.109.171.138:80/TCP\nI0111 17:51:00.027343 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.031992 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7tzh4:\" at 100.107.210.180:80/TCP\nI0111 17:51:00.032012 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-pnntp:\" at 100.109.194.7:80/TCP\nI0111 17:51:00.032023 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zdbqj:\" at 100.107.62.57:80/TCP\nI0111 17:51:00.032034 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-p4jfx:\" at 100.107.106.156:80/TCP\nI0111 17:51:00.032043 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mrqf5:\" at 100.105.169.205:80/TCP\nI0111 17:51:00.032053 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9g794:\" at 100.107.191.168:80/TCP\nI0111 17:51:00.032073 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gr2kb:\" at 100.109.54.14:80/TCP\nI0111 17:51:00.032086 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lntnk:\" at 100.111.229.145:80/TCP\nI0111 17:51:00.032095 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bc9zc:\" at 100.106.176.119:80/TCP\nI0111 17:51:00.053084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.081278 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.086622 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-snbx2:\" at 100.105.168.58:80/TCP\nI0111 17:51:00.086643 1 service.go:357] Adding new 
service port \"svc-latency-980/latency-svc-l726n:\" at 100.111.133.140:80/TCP\nI0111 17:51:00.086675 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7f2s5:\" at 100.109.205.139:80/TCP\nI0111 17:51:00.109642 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.115134 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mrvgv:\" at 100.104.241.196:80/TCP\nI0111 17:51:00.115159 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zwf66:\" at 100.106.123.103:80/TCP\nI0111 17:51:00.115189 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lmv2c:\" at 100.104.67.175:80/TCP\nI0111 17:51:00.139257 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.163665 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vhhhn:\" at 100.110.218.117:80/TCP\nI0111 17:51:00.186957 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.192614 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kvdgg:\" at 100.108.106.176:80/TCP\nI0111 17:51:00.192637 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-r6d2d:\" at 100.105.33.186:80/TCP\nI0111 17:51:00.192664 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-swtmz:\" at 100.111.70.92:80/TCP\nI0111 17:51:00.192682 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zxcgj:\" at 100.106.50.111:80/TCP\nI0111 17:51:00.192698 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6gs6p:\" at 100.104.204.171:80/TCP\nI0111 17:51:00.192712 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8zknf:\" at 100.107.126.187:80/TCP\nI0111 17:51:00.192743 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rjhwv:\" at 100.106.226.91:80/TCP\nI0111 17:51:00.216181 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.222500 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rlw8d:\" at 100.104.165.122:80/TCP\nI0111 17:51:00.222564 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9brn2:\" at 100.111.8.132:80/TCP\nI0111 17:51:00.222585 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hsq6b:\" at 100.105.62.188:80/TCP\nI0111 17:51:00.222602 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vcsdq:\" at 100.107.139.215:80/TCP\nI0111 17:51:00.222616 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lnzms:\" at 100.108.15.116:80/TCP\nI0111 17:51:00.246325 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.259366 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-tjclt:\" at 100.111.198.65:80/TCP\nI0111 17:51:00.259389 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8frd7:\" at 100.106.231.195:80/TCP\nI0111 17:51:00.283766 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.290436 1 service.go:357] 
Adding new service port \"svc-latency-980/latency-svc-x2lzk:\" at 100.104.97.159:80/TCP\nI0111 17:51:00.290455 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vx7cx:\" at 100.106.37.99:80/TCP\nI0111 17:51:00.290487 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-2xf6w:\" at 100.104.119.187:80/TCP\nI0111 17:51:00.290504 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-98ml7:\" at 100.107.208.79:80/TCP\nI0111 17:51:00.290546 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-79x6g:\" at 100.107.192.82:80/TCP\nI0111 17:51:00.290580 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-4xzhg:\" at 100.107.90.120:80/TCP\nI0111 17:51:00.290597 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-pkbc2:\" at 100.111.8.84:80/TCP\nI0111 17:51:00.290611 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9svxz:\" at 100.109.47.169:80/TCP\nI0111 17:51:00.315216 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.322042 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hvl5r:\" at 100.109.167.153:80/TCP\nI0111 17:51:00.322062 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rdn6f:\" at 100.111.68.202:80/TCP\nI0111 17:51:00.322098 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-46qpw:\" at 100.107.166.208:80/TCP\nI0111 17:51:00.322114 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-v8l5m:\" at 100.111.169.103:80/TCP\nI0111 17:51:00.322127 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-j4chh:\" at 100.106.36.124:80/TCP\nI0111 17:51:00.346222 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.352992 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-z965h:\" at 100.111.129.30:80/TCP\nI0111 17:51:00.386980 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.394001 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5lgpw:\" at 100.105.235.201:80/TCP\nI0111 17:51:00.418330 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.450544 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.457434 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-ld26b:\" at 100.104.143.93:80/TCP\nI0111 17:51:00.481725 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.490293 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rc2wb:\" at 100.111.67.5:80/TCP\nI0111 17:51:00.514626 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.546914 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.553967 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dcqcb:\" at 
100.106.228.88:80/TCP\nI0111 17:51:00.578434 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.589640 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6wg65:\" at 100.110.3.69:80/TCP\nI0111 17:51:00.612651 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.643141 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.650385 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-jgnzj:\" at 100.105.228.55:80/TCP\nI0111 17:51:00.675115 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.688393 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-4mxvd:\" at 100.107.195.139:80/TCP\nI0111 17:51:00.711707 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.744039 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.751266 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-x9g7w:\" at 100.105.241.166:80/TCP\nI0111 17:51:00.776181 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.789375 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-k4xlt:\" at 100.106.182.115:80/TCP\nI0111 17:51:00.813031 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.855941 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.863357 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hmnht:\" at 100.105.47.66:80/TCP\nI0111 17:51:00.887355 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.894919 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-2qt4b:\" at 100.109.60.9:80/TCP\nI0111 17:51:00.918442 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.951364 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.958867 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-qgk5l:\" at 100.105.57.4:80/TCP\nI0111 17:51:00.982029 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:00.989646 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xnx5b:\" at 100.109.70.130:80/TCP\nI0111 17:51:01.013069 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.044077 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables 
because the local version of iptables does not support it\nI0111 17:51:01.051770 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-288rg:\" at 100.111.19.237:80/TCP\nI0111 17:51:01.075197 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.088379 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-22rbd:\" at 100.105.68.39:80/TCP\nI0111 17:51:01.111931 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.143379 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.151246 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xq52d:\" at 100.110.133.128:80/TCP\nI0111 17:51:01.174957 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.189301 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-ptll6:\" at 100.107.136.143:80/TCP\nI0111 17:51:01.212871 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.244812 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.252856 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bc45j:\" at 100.110.1.82:80/TCP\nI0111 17:51:01.276849 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.292159 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7dtnl:\" at 100.107.106.231:80/TCP\nI0111 17:51:01.325004 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.356980 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.365161 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mw7zl:\" at 100.109.59.119:80/TCP\nI0111 17:51:01.389626 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.398019 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zxd2z:\" at 100.105.72.206:80/TCP\nI0111 17:51:01.422301 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.437958 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7clvx:\" at 100.111.82.165:80/TCP\nI0111 17:51:01.462661 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.495948 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.504484 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-stsdr:\" at 100.109.178.79:80/TCP\nI0111 17:51:01.529163 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 17:51:01.538283 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lmgcb:\" at 100.107.232.224:80/TCP\nI0111 17:51:01.563347 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.597149 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.605950 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-m54tw:\" at 100.108.60.252:80/TCP\nI0111 17:51:01.630604 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.639595 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-pcmnn:\" at 100.105.155.74:80/TCP\nI0111 17:51:01.664645 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.698607 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.707611 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-t6pk7:\" at 100.111.61.11:80/TCP\nI0111 17:51:01.742999 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.752093 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cm8m8:\" at 100.110.224.216:80/TCP\nI0111 17:51:01.778725 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.795038 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6bdz4:\" at 100.110.78.118:80/TCP\nI0111 17:51:01.821170 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.838894 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5kt9s:\" at 100.111.70.248:80/TCP\nI0111 17:51:01.873979 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.908509 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.917592 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kqklr:\" at 100.109.245.148:80/TCP\nI0111 17:51:01.943035 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.952420 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gdg78:\" at 100.108.97.160:80/TCP\nI0111 17:51:01.977977 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:01.989067 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-tscvw:\" at 100.111.243.99:80/TCP\nI0111 17:51:02.026959 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.042063 1 service.go:357] Adding new service port 
\"svc-latency-980/latency-svc-7b72t:\" at 100.110.171.33:80/TCP\nI0111 17:51:02.092548 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.107616 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lv8jh:\" at 100.107.204.188:80/TCP\nI0111 17:51:02.147721 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.157297 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5996l:\" at 100.111.250.171:80/TCP\nI0111 17:51:02.183034 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.192690 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gmd2g:\" at 100.104.111.159:80/TCP\nI0111 17:51:02.218504 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.254352 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.264206 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q4xm7:\" at 100.105.110.33:80/TCP\nI0111 17:51:02.291082 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.300821 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9sqw5:\" at 100.108.174.113:80/TCP\nI0111 17:51:02.338775 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.351754 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-jl9vl:\" at 100.108.183.118:80/TCP\nI0111 17:51:02.401163 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.416777 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6q7br:\" at 100.106.3.9:80/TCP\nI0111 17:51:02.456420 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.470254 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cp8qd:\" at 100.106.66.229:80/TCP\nI0111 17:51:02.508935 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.524639 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-r46hf:\" at 100.105.135.93:80/TCP\nI0111 17:51:02.567578 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.577945 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fp8kh:\" at 100.104.152.180:80/TCP\nI0111 17:51:02.604627 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.614843 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vqbf2:\" at 100.106.172.42:80/TCP\nI0111 17:51:02.641300 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 17:51:02.651689 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-jd985:\" at 100.110.75.104:80/TCP\nI0111 17:51:02.678049 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.690598 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fxthb:\" at 100.105.62.22:80/TCP\nI0111 17:51:02.716015 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.753483 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.763878 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6m7zb:\" at 100.107.76.88:80/TCP\nI0111 17:51:02.791466 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.802134 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8zt78:\" at 100.104.44.93:80/TCP\nI0111 17:51:02.829089 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.840069 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-ffpdw:\" at 100.110.77.156:80/TCP\nI0111 17:51:02.872188 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.909755 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.920572 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-f8j4x:\" at 100.109.242.24:80/TCP\nI0111 17:51:02.954956 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:02.965760 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q9snn:\" at 100.106.116.103:80/TCP\nI0111 17:51:02.993125 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.003944 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-pf59q:\" at 100.110.215.122:80/TCP\nI0111 17:51:03.031033 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.042347 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-54tw6:\" at 100.110.24.93:80/TCP\nI0111 17:51:03.070103 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.108347 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.119425 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6st9l:\" at 100.108.37.219:80/TCP\nI0111 17:51:03.147118 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.158170 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-4bzlp:\" at 
100.108.17.116:80/TCP\nI0111 17:51:03.185444 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.196902 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7s4p7:\" at 100.109.118.209:80/TCP\nI0111 17:51:03.224243 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.238678 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-d7cht:\" at 100.106.247.150:80/TCP\nI0111 17:51:03.265210 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.304672 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.316038 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-s672b:\" at 100.107.196.208:80/TCP\nI0111 17:51:03.343894 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.355540 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-g7p24:\" at 100.110.181.183:80/TCP\nI0111 17:51:03.384027 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.395850 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6zm2x:\" at 100.111.196.73:80/TCP\nI0111 17:51:03.424455 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.443066 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dhjzj:\" at 100.107.79.86:80/TCP\nI0111 17:51:03.471201 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.510781 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.522220 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5pqpk:\" at 100.104.171.54:80/TCP\nI0111 17:51:03.550302 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.561961 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bpxkp:\" at 100.108.19.109:80/TCP\nI0111 17:51:03.590646 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.602599 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hb49s:\" at 100.107.206.42:80/TCP\nI0111 17:51:03.630502 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.642449 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cnh47:\" at 100.108.201.76:80/TCP\nI0111 17:51:03.670775 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.710860 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 17:51:03.727418 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vsnnb:\" at 100.109.126.119:80/TCP\nI0111 17:51:03.759444 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.772837 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9ssgv:\" at 100.106.53.73:80/TCP\nI0111 17:51:03.801309 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.813890 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bv4ds:\" at 100.104.3.219:80/TCP\nI0111 17:51:03.863697 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.880846 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q8qd6:\" at 100.104.2.206:80/TCP\nI0111 17:51:03.909387 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.921621 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cv5ch:\" at 100.108.35.180:80/TCP\nI0111 17:51:03.950482 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:03.969926 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kwxmp:\" at 100.105.194.230:80/TCP\nI0111 17:51:04.020181 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.032646 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mhjfx:\" at 100.105.107.95:80/TCP\nI0111 17:51:04.061668 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.073972 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-b9jhs:\" at 100.109.228.0:80/TCP\nI0111 17:51:04.102889 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.115144 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-mxvqc:\" at 100.105.75.32:80/TCP\nI0111 17:51:04.144086 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.156552 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-srgbh:\" at 100.106.151.178:80/TCP\nI0111 17:51:04.184131 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.196865 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-g6fcm:\" at 100.107.44.222:80/TCP\nI0111 17:51:04.224001 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.239153 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-5d5cz:\" at 100.106.196.198:80/TCP\nI0111 17:51:04.266508 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.306976 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.319982 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-84lkj:\" at 100.109.143.18:80/TCP\nI0111 17:51:04.347768 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.360589 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zqv5x:\" at 100.107.36.53:80/TCP\nI0111 17:51:04.388417 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.401329 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-qqfg9:\" at 100.110.104.135:80/TCP\nI0111 17:51:04.429125 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.442408 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-2hgv4:\" at 100.106.127.50:80/TCP\nI0111 17:51:04.470376 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.519339 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.532503 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gpdhb:\" at 100.109.197.79:80/TCP\nI0111 17:51:04.560710 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.573965 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xdmk5:\" at 100.107.69.128:80/TCP\nI0111 17:51:04.602248 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.615591 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kwskk:\" at 100.111.218.17:80/TCP\nI0111 17:51:04.643715 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.657062 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kk2cr:\" at 100.111.233.144:80/TCP\nI0111 17:51:04.685414 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.699079 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7cd8s:\" at 100.107.86.167:80/TCP\nI0111 17:51:04.727325 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.741274 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xnsp9:\" at 100.108.45.224:80/TCP\nI0111 17:51:04.773439 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.816009 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.829960 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-drw45:\" at 100.109.174.142:80/TCP\nI0111 17:51:04.859118 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.873067 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q7znd:\" at 100.111.147.31:80/TCP\nI0111 17:51:04.906399 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.920170 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-k8k5t:\" at 100.110.56.30:80/TCP\nI0111 17:51:04.949052 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:04.963067 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-d76kq:\" at 100.106.125.35:80/TCP\nI0111 17:51:04.992247 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.006278 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rsp5z:\" at 100.107.92.67:80/TCP\nI0111 17:51:05.042127 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.056556 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-g9d9x:\" at 100.111.196.5:80/TCP\nI0111 17:51:05.085462 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.099760 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dsrhl:\" at 100.111.32.49:80/TCP\nI0111 17:51:05.129747 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.144112 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-qxbrh:\" at 100.109.22.132:80/TCP\nI0111 17:51:05.173692 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.218506 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.232843 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-jjw2m:\" at 100.105.162.136:80/TCP\nI0111 17:51:05.262584 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.276898 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6rsv9:\" at 100.107.15.18:80/TCP\nI0111 17:51:05.323400 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 17:51:05.346567 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 1103 failed\n)\nI0111 17:51:05.346843 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 17:51:05.346946 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-t84g2:\" at 100.105.106.47:80/TCP\nI0111 17:51:05.347075 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-269dd:\" at 100.108.230.157:80/TCP\nI0111 17:51:05.390384 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
17:51:05.412373 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lxs58:\" at 100.104.145.28:80/TCP\nI0111 17:51:05.451551 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.466462 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rrvc5:\" at 100.109.184.131:80/TCP\nI0111 17:51:05.496598 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.511266 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-tctgc:\" at 100.108.176.38:80/TCP\nI0111 17:51:05.541040 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.556200 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fsxrq:\" at 100.105.234.105:80/TCP\nI0111 17:51:05.586457 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.601767 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-r7ncf:\" at 100.105.124.130:80/TCP\nI0111 17:51:05.637225 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.652619 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-s6dcx:\" at 100.105.233.166:80/TCP\nI0111 17:51:05.682788 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.698152 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gxpfj:\" at 100.109.252.109:80/TCP\nI0111 17:51:05.733885 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.749420 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-4ld2b:\" at 100.104.113.50:80/TCP\nI0111 17:51:05.779924 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.795510 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zqw96:\" at 100.111.167.113:80/TCP\nI0111 17:51:05.826387 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.842014 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-lxr7q:\" at 100.107.252.142:80/TCP\nI0111 17:51:05.874009 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.893748 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vngrh:\" at 100.106.133.96:80/TCP\nI0111 17:51:05.924501 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:05.940605 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dgwdn:\" at 100.104.123.179:80/TCP\nI0111 17:51:05.971401 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.018069 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.033716 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cxs2s:\" at 100.111.18.23:80/TCP\nI0111 17:51:06.065022 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.080607 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-2nw4c:\" at 100.109.220.251:80/TCP\nI0111 17:51:06.111615 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.127507 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-vl2b4:\" at 100.106.208.213:80/TCP\nI0111 17:51:06.162761 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.179141 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-8zqlx:\" at 100.104.229.79:80/TCP\nI0111 17:51:06.215964 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.231889 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-phh7t:\" at 100.106.113.31:80/TCP\nI0111 17:51:06.262962 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.278586 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hhlk8:\" at 100.108.43.70:80/TCP\nI0111 17:51:06.309925 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.326198 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-djk2k:\" at 100.109.32.105:80/TCP\nI0111 17:51:06.357669 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.373576 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-67jwd:\" at 100.105.52.53:80/TCP\nI0111 17:51:06.405247 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.421379 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-xtbm5:\" at 100.107.21.131:80/TCP\nI0111 17:51:06.453468 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.469914 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-49tnq:\" at 100.107.166.64:80/TCP\nI0111 17:51:06.501679 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.518193 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9vf2x:\" at 100.107.117.209:80/TCP\nI0111 17:51:06.550148 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.566593 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-x5vmr:\" at 100.104.188.40:80/TCP\nI0111 17:51:06.598601 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because 
the local version of iptables does not support it\nI0111 17:51:06.615040 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-qjm42:\" at 100.106.13.122:80/TCP\nI0111 17:51:06.647189 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.663721 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cpkvc:\" at 100.109.59.98:80/TCP\nI0111 17:51:06.696113 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.712771 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-cdzgk:\" at 100.105.90.25:80/TCP\nI0111 17:51:06.745036 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.768695 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-9wg8b:\" at 100.106.29.14:80/TCP\nI0111 17:51:06.800246 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.817550 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-7lv9c:\" at 100.107.183.157:80/TCP\nI0111 17:51:06.850026 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.867065 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-zqzcg:\" at 100.110.3.191:80/TCP\nI0111 17:51:06.903649 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.920549 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-htk7c:\" at 100.107.52.233:80/TCP\nI0111 17:51:06.952727 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:06.969660 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-99qc6:\" at 100.108.240.52:80/TCP\nI0111 17:51:07.002135 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.019234 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-g98nc:\" at 100.108.141.219:80/TCP\nI0111 17:51:07.051733 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.068883 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-k2p7n:\" at 100.111.193.69:80/TCP\nI0111 17:51:07.104030 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.134032 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-dpzqk:\" at 100.106.65.22:80/TCP\nI0111 17:51:07.177728 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.204226 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-6mszj:\" at 100.110.7.202:80/TCP\nI0111 17:51:07.204250 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-kmndd:\" at 100.104.182.149:80/TCP\nI0111 17:51:07.252243 1 proxier.go:793] Not 
using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.278263 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-r5ddp:\" at 100.111.211.43:80/TCP\nI0111 17:51:07.324506 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.350763 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-bnx57:\" at 100.111.88.251:80/TCP\nI0111 17:51:07.350783 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-65r7h:\" at 100.111.32.143:80/TCP\nI0111 17:51:07.390386 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.408404 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-m5bpf:\" at 100.110.112.240:80/TCP\nI0111 17:51:07.442584 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.467651 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-z84p8:\" at 100.111.175.73:80/TCP\nI0111 17:51:07.500807 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.518787 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-fxq59:\" at 100.111.181.79:80/TCP\nI0111 17:51:07.552039 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.570398 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-ltfvt:\" at 100.104.255.216:80/TCP\nI0111 17:51:07.602286 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.620185 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-q9m8h:\" at 100.104.190.239:80/TCP\nI0111 17:51:07.652116 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.670396 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-hfhdr:\" at 100.109.204.86:80/TCP\nI0111 17:51:07.702452 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.723998 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-gs4w5:\" at 100.106.33.191:80/TCP\nI0111 17:51:07.761244 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.779836 1 service.go:357] Adding new service port \"svc-latency-980/latency-svc-rcfcz:\" at 100.106.10.87:80/TCP\nI0111 17:51:07.812100 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.863122 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.918230 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:07.969162 1 proxier.go:793] Not using `--random-fully` in 
the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.020512 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.078835 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.130088 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.182292 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.234608 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.286312 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.338515 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.390447 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:08.444157 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.644158 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-22rbd:\"\nI0111 17:51:13.678889 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.698028 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-269dd:\"\nI0111 17:51:13.698053 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-288rg:\"\nI0111 17:51:13.740343 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.759392 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2hgv4:\"\nI0111 17:51:13.799332 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.838825 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2nw4c:\"\nI0111 17:51:13.872852 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.891801 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2qt4b:\"\nI0111 17:51:13.924951 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:13.949105 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2vcvb:\"\nI0111 17:51:13.988643 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.040006 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-2xf6w:\"\nI0111 17:51:14.073558 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.124718 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.239065 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-46qpw:\"\nI0111 17:51:14.272649 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.324280 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.439943 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-49tnq:\"\nI0111 17:51:14.474347 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.525495 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.639500 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-4bzlp:\"\nI0111 17:51:14.673149 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.731024 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.749699 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-4ld2b:\"\nI0111 17:51:14.782115 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.839057 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-4mxvd:\"\nI0111 17:51:14.872494 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.890727 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-4xzhg:\"\nI0111 17:51:14.923404 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.941589 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-54tw6:\"\nI0111 17:51:14.977836 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:14.996887 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5996l:\"\nI0111 17:51:15.029744 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.048313 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5d5cz:\"\nI0111 17:51:15.048332 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5kt9s:\"\nI0111 17:51:15.081138 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.099102 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5lgpw:\"\nI0111 17:51:15.131881 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.149838 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-5pqpk:\"\nI0111 17:51:15.149857 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-65r7h:\"\nI0111 
17:51:15.182240 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.199573 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-67jwd:\"\nI0111 17:51:15.199592 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6bdz4:\"\nI0111 17:51:15.231565 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.249240 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6gs6p:\"\nI0111 17:51:15.249257 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6m7zb:\"\nI0111 17:51:15.281657 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.339674 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6mszj:\"\nI0111 17:51:15.372459 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.397070 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6st9l:\"\nI0111 17:51:15.397090 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6q7br:\"\nI0111 17:51:15.397098 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6rsv9:\"\nI0111 17:51:15.429342 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.446504 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6wg65:\"\nI0111 17:51:15.494477 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.519819 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-6zm2x:\"\nI0111 17:51:15.563098 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.586845 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-79x6g:\"\nI0111 17:51:15.586866 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7b72t:\"\nI0111 17:51:15.586874 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7cd8s:\"\nI0111 17:51:15.619129 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.639939 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7clvx:\"\nI0111 17:51:15.672186 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.689029 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7s4p7:\"\nI0111 17:51:15.689047 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7tzh4:\"\nI0111 17:51:15.689056 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-84lkj:\"\nI0111 17:51:15.689063 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7dtnl:\"\nI0111 17:51:15.689070 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7f2s5:\"\nI0111 17:51:15.689078 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-7lv9c:\"\nI0111 17:51:15.725038 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.744850 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8r2gm:\"\nI0111 17:51:15.744868 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8frd7:\"\nI0111 17:51:15.776063 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.839359 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8zknf:\"\nI0111 17:51:15.872815 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.888582 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8zt78:\"\nI0111 17:51:15.888600 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-8zqlx:\"\nI0111 17:51:15.919637 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.939791 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-98ml7:\"\nI0111 17:51:15.975501 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:15.991706 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-99qc6:\"\nI0111 17:51:15.991724 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9brn2:\"\nI0111 17:51:15.991732 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9g794:\"\nI0111 17:51:15.991739 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9sqw5:\"\nI0111 17:51:15.991822 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9ssgv:\"\nI0111 17:51:16.023100 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.038132 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9svxz:\"\nI0111 17:51:16.038149 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9vf2x:\"\nI0111 17:51:16.038158 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-9wg8b:\"\nI0111 17:51:16.070062 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.092452 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-bv4ds:\"\nI0111 17:51:16.092637 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cdzgk:\"\nI0111 17:51:16.092654 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cm8m8:\"\nI0111 17:51:16.092677 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-b9jhs:\"\nI0111 17:51:16.092714 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-bc45j:\"\nI0111 17:51:16.092784 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-bc9zc:\"\nI0111 17:51:16.092796 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-bnx57:\"\nI0111 17:51:16.092805 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-bpxkp:\"\nI0111 17:51:16.122437 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.136363 1 service.go:382] Removing service port 
\"svc-latency-980/latency-svc-cnh47:\"\nI0111 17:51:16.136381 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cp8qd:\"\nI0111 17:51:16.166575 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.180921 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cpkvc:\"\nI0111 17:51:16.180940 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cv5ch:\"\nI0111 17:51:16.180948 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-cxs2s:\"\nI0111 17:51:16.180955 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-d76kq:\"\nI0111 17:51:16.180962 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-d7cht:\"\nI0111 17:51:16.180971 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dcqcb:\"\nI0111 17:51:16.180978 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dgwdn:\"\nI0111 17:51:16.210713 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.224268 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dhjzj:\"\nI0111 17:51:16.253514 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.338954 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-djk2k:\"\nI0111 17:51:16.369552 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.383011 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dpzqk:\"\nI0111 17:51:16.383030 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-drw45:\"\nI0111 17:51:16.383038 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-dsrhl:\"\nI0111 17:51:16.412097 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.438909 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-f8j4x:\"\nI0111 17:51:16.467756 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.481196 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-ffpdw:\"\nI0111 17:51:16.481215 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fghsq:\"\nI0111 17:51:16.481223 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fp8kh:\"\nI0111 17:51:16.481230 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fsxrq:\"\nI0111 17:51:16.510507 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.523621 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-g6fcm:\"\nI0111 17:51:16.523640 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-g7p24:\"\nI0111 17:51:16.523647 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-g98nc:\"\nI0111 17:51:16.523655 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fxq59:\"\nI0111 17:51:16.523662 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-fxthb:\"\nI0111 17:51:16.552994 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.566103 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-g9d9x:\"\nI0111 17:51:16.566124 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gdg78:\"\nI0111 17:51:16.566132 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gmd2g:\"\nI0111 17:51:16.566138 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gpdhb:\"\nI0111 17:51:16.566146 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gr2kb:\"\nI0111 17:51:16.595775 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.607928 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hfhdr:\"\nI0111 17:51:16.607947 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hhlk8:\"\nI0111 17:51:16.607955 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hmnht:\"\nI0111 17:51:16.607963 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gs4w5:\"\nI0111 17:51:16.607975 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-gxpfj:\"\nI0111 17:51:16.607984 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hb49s:\"\nI0111 17:51:16.636491 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.648280 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hsq6b:\"\nI0111 17:51:16.648298 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-htk7c:\"\nI0111 17:51:16.648306 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-hvl5r:\"\nI0111 17:51:16.648313 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-j4chh:\"\nI0111 17:51:16.676060 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.687248 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-jjw2m:\"\nI0111 17:51:16.687267 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-jl9vl:\"\nI0111 17:51:16.687275 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-j8b4c:\"\nI0111 17:51:16.687282 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-jd985:\"\nI0111 17:51:16.687289 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-jgnzj:\"\nI0111 17:51:16.722445 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.733452 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-k2p7n:\"\nI0111 17:51:16.733473 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-k4xlt:\"\nI0111 17:51:16.733481 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-k8k5t:\"\nI0111 17:51:16.733504 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kk2cr:\"\nI0111 17:51:16.733518 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kmndd:\"\nI0111 17:51:16.733578 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kqklr:\"\nI0111 17:51:16.733590 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kvdgg:\"\nI0111 
17:51:16.759750 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.770443 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-ld26b:\"\nI0111 17:51:16.770463 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lfwkn:\"\nI0111 17:51:16.770470 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kwskk:\"\nI0111 17:51:16.770492 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-kwxmp:\"\nI0111 17:51:16.770501 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-l726n:\"\nI0111 17:51:16.796452 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.806555 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lntnk:\"\nI0111 17:51:16.806573 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lnzms:\"\nI0111 17:51:16.806581 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-ltfvt:\"\nI0111 17:51:16.806616 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lmgcb:\"\nI0111 17:51:16.806629 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lmv2c:\"\nI0111 17:51:16.832405 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.842138 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lv8jh:\"\nI0111 17:51:16.842157 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lxr7q:\"\nI0111 17:51:16.842165 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-lxs58:\"\nI0111 17:51:16.867484 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.876901 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-m54tw:\"\nI0111 17:51:16.876921 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-m5bpf:\"\nI0111 17:51:16.876928 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mhjfx:\"\nI0111 17:51:16.876936 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mrqf5:\"\nI0111 17:51:16.902196 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.911077 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-p2tzk:\"\nI0111 17:51:16.911095 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-p4jfx:\"\nI0111 17:51:16.911103 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mrvgv:\"\nI0111 17:51:16.911110 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mw7zl:\"\nI0111 17:51:16.911117 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-mxvqc:\"\nI0111 17:51:16.935454 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:16.944054 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-pcmnn:\"\nI0111 17:51:16.944072 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-pf59q:\"\nI0111 17:51:16.973049 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 17:51:16.981355 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-phh7t:\"\nI0111 17:51:16.981373 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-pkbc2:\"\nI0111 17:51:16.981381 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-pnntp:\"\nI0111 17:51:16.981388 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-ptll6:\"\nI0111 17:51:16.981396 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q4xm7:\"\nI0111 17:51:17.006033 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.014115 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q9snn:\"\nI0111 17:51:17.014134 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-qgk5l:\"\nI0111 17:51:17.014143 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q7znd:\"\nI0111 17:51:17.014151 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q8qd6:\"\nI0111 17:51:17.014159 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-q9m8h:\"\nI0111 17:51:17.037703 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.045332 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-qjm42:\"\nI0111 17:51:17.045353 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-qqfg9:\"\nI0111 17:51:17.068975 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.079660 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-qxbrh:\"\nI0111 17:51:17.106356 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.138656 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-r46hf:\"\nI0111 17:51:17.162830 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.169937 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-r5ddp:\"\nI0111 17:51:17.169955 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-r6d2d:\"\nI0111 17:51:17.169963 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-r7ncf:\"\nI0111 17:51:17.169969 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rc2wb:\"\nI0111 17:51:17.200720 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.207611 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rcfcz:\"\nI0111 17:51:17.230004 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.238801 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rdn6f:\"\nI0111 17:51:17.261931 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.268762 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rsp5z:\"\nI0111 17:51:17.268780 1 service.go:382] Removing service port 
\"svc-latency-980/latency-svc-rjhwv:\"\nI0111 17:51:17.268788 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rlw8d:\"\nI0111 17:51:17.268794 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-rrvc5:\"\nI0111 17:51:17.292130 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.298595 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-snbx2:\"\nI0111 17:51:17.298611 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-srgbh:\"\nI0111 17:51:17.298619 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-s672b:\"\nI0111 17:51:17.298627 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-s6dcx:\"\nI0111 17:51:17.321823 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.328233 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-swtmz:\"\nI0111 17:51:17.328250 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-t6pk7:\"\nI0111 17:51:17.328257 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-stsdr:\"\nI0111 17:51:17.350938 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.356848 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-t84g2:\"\nI0111 17:51:17.356864 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-tctgc:\"\nI0111 17:51:17.356872 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-tjclt:\"\nI0111 17:51:17.384617 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.393384 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-tscvw:\"\nI0111 17:51:17.393571 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-v8l5m:\"\nI0111 17:51:17.393651 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vcsdq:\"\nI0111 17:51:17.393737 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vf4x2:\"\nI0111 17:51:17.393801 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vhhhn:\"\nI0111 17:51:17.425499 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.433143 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vl2b4:\"\nI0111 17:51:17.464143 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.472187 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-x2lzk:\"\nI0111 17:51:17.472364 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-x5vmr:\"\nI0111 17:51:17.472432 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vngrh:\"\nI0111 17:51:17.472511 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vqbf2:\"\nI0111 17:51:17.472555 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vsnnb:\"\nI0111 17:51:17.472565 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-vx7cx:\"\nI0111 17:51:17.500185 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 17:51:17.505294 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-x9g7w:\"\nI0111 17:51:17.505311 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xdmk5:\"\nI0111 17:51:17.505318 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xnsp9:\"\nI0111 17:51:17.536397 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.543162 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xnx5b:\"\nI0111 17:51:17.543183 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xq52d:\"\nI0111 17:51:17.543191 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xtbm5:\"\nI0111 17:51:17.543198 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-xv9mt:\"\nI0111 17:51:17.576595 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.583043 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-z965h:\"\nI0111 17:51:17.583061 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zdbqj:\"\nI0111 17:51:17.583070 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zm7xz:\"\nI0111 17:51:17.583078 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zqv5x:\"\nI0111 17:51:17.583086 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-z84p8:\"\nI0111 17:51:17.614891 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:17.620557 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zxd2z:\"\nI0111 17:51:17.620573 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zqw96:\"\nI0111 17:51:17.620583 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zqzcg:\"\nI0111 17:51:17.620591 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zwf66:\"\nI0111 17:51:17.620599 1 service.go:382] Removing service port \"svc-latency-980/latency-svc-zxcgj:\"\nI0111 17:51:17.653382 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:51:47.680705 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:52:17.705642 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:52:47.743276 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:53:17.771086 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:53:47.797084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:54:17.822254 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:54:47.848015 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 17:55:17.874174 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:47.899412 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:57.106169 1 service.go:357] Adding new service port \"services-6103/clusterip-service:\" at 100.110.17.70:80/TCP\nI0111 17:55:57.127438 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:57.151147 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:57.199340 1 service.go:357] Adding new service port \"services-6103/externalsvc:\" at 100.104.253.21:80/TCP\nI0111 17:55:57.220570 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:57.244370 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:58.997947 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:55:59.022843 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:00.665014 1 service.go:382] Removing service port \"services-6103/clusterip-service:\"\nI0111 17:56:00.687625 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:04.960894 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:04.986347 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:13.908420 1 service.go:382] Removing service port \"services-6103/externalsvc:\"\nI0111 17:56:13.930957 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:13.955599 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:14.026007 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:56:44.057866 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:57:14.084144 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:57:44.111120 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:58:14.135951 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:58:44.166710 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
17:59:04.546623 1 service.go:357] Adding new service port \"webhook-2629/e2e-test-webhook:\" at 100.105.159.218:8443/TCP\nI0111 17:59:04.568652 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:59:04.592250 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:59:12.037666 1 service.go:382] Removing service port \"webhook-2629/e2e-test-webhook:\"\nI0111 17:59:12.076499 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:59:12.116676 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 17:59:42.142384 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:00:12.168012 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:00:42.194369 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:01:12.221597 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:01:42.246623 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:02:12.274100 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:02:42.299298 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:03:12.325113 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:03:42.351398 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:04:12.375845 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:04:42.400133 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:05:12.428284 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:05:42.453324 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:12.479080 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:13.796175 1 service.go:357] Adding new service port \"webhook-1365/e2e-test-webhook:\" at 100.106.250.157:8443/TCP\nI0111 18:06:13.818892 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:13.846006 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version 
of iptables does not support it\nI0111 18:06:23.573317 1 service.go:382] Removing service port \"webhook-1365/e2e-test-webhook:\"\nI0111 18:06:23.596238 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:23.621166 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:06:53.647683 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:23.671889 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:32.121009 1 service.go:357] Adding new service port \"kubectl-5864/rm2:\" at 100.107.125.220:1234/TCP\nI0111 18:07:32.147430 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:32.185344 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:34.910337 1 service.go:357] Adding new service port \"kubectl-5864/rm3:\" at 100.105.162.61:2345/TCP\nI0111 18:07:34.931379 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:34.959890 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:42.459378 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:42.487446 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:42.648723 1 service.go:382] Removing service port \"kubectl-5864/rm2:\"\nI0111 18:07:42.671584 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:07:42.736157 1 service.go:382] Removing service port \"kubectl-5864/rm3:\"\nI0111 18:07:42.762090 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:08:12.788398 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:08:42.813192 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:12.844931 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:42.872207 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:44.705036 1 service.go:357] Adding new service port \"nsdeletetest-1023/test-service:\" at 100.106.14.217:80/TCP\nI0111 18:09:44.727094 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:44.751676 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 18:09:49.858462 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:09:49.862071 1 service.go:382] Removing service port \"nsdeletetest-1023/test-service:\"\nI0111 18:09:49.882963 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:10:19.915210 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:10:49.941681 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:11:19.970448 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:11:49.996095 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:12:20.028440 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:12:50.054350 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:13:20.081020 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:13:50.113198 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:02.371374 1 service.go:357] Adding new service port \"services-610/externalname-service:http\" at 100.110.14.234:80/TCP\nI0111 18:14:02.392177 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:02.415154 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:04.137561 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:04.161760 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:11.474811 1 service.go:382] Removing service port \"services-610/externalname-service:http\"\nI0111 18:14:11.497075 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:11.521586 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:14:41.547202 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:15:11.573023 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:15:41.599427 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:16:11.624666 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the 
local version of iptables does not support it\nI0111 18:16:41.649751 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:01.585987 1 service.go:357] Adding new service port \"webhook-1614/e2e-test-webhook:\" at 100.104.110.190:8443/TCP\nI0111 18:17:01.610321 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:01.634440 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:09.789999 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:10.062602 1 service.go:382] Removing service port \"webhook-1614/e2e-test-webhook:\"\nI0111 18:17:10.102104 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 18:17:10.110881 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 124 failed\n)\nI0111 18:17:10.110964 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 18:17:25.511662 1 service.go:357] Adding new service port \"kubectl-3929/redis-slave:\" at 100.110.38.60:6379/TCP\nI0111 18:17:25.533416 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:25.557754 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:26.109600 1 service.go:357] Adding new service port \"kubectl-3929/redis-master:\" at 100.108.160.227:6379/TCP\nI0111 18:17:26.131547 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:26.155895 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:27.058068 1 service.go:357] Adding new service port \"kubectl-3929/frontend:\" at 100.104.86.163:80/TCP\nI0111 18:17:27.080409 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:27.104711 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:31.141648 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:31.499109 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:31.537150 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:41.581391 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:17:41.608758 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:18:06.569517 1 service.go:382] Removing service port \"kubectl-3929/redis-slave:\"\nI0111 
18:18:06.594249 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it
[the proxier.go:793 line above is repeated after every subsequent iptables resync (roughly every 30s and around each service update) for the remainder of this kube-proxy log; those repetitions are omitted below and only the distinct events are kept]
I0111 18:18:07.088425 1 service.go:382] Removing service port "kubectl-3929/redis-master:"
I0111 18:18:07.605986 1 service.go:382] Removing service port "kubectl-3929/frontend:"
I0111 18:24:48.856838 1 service.go:357] Adding new service port "kubectl-855/redis-master:" at 100.104.203.63:6379/TCP
I0111 18:24:59.756903 1 service.go:382] Removing service port "kubectl-855/redis-master:"
I0111 18:36:05.002757 1 service.go:357] Adding new service port "services-9188/nodeport-service:" at 100.104.18.175:80/TCP
I0111 18:36:05.025382 1 proxier.go:1519] Opened local port "nodePort for services-9188/nodeport-service:" (:31726/tcp)
I0111 18:36:05.096724 1 service.go:357] Adding new service port "services-9188/externalsvc:" at 100.107.51.74:80/TCP
I0111 18:36:08.564187 1 service.go:382] Removing service port "services-9188/nodeport-service:"
I0111 18:36:23.860642 1 service.go:382] Removing service port "services-9188/externalsvc:"
I0111 18:55:44.961574 1 service.go:357] Adding new service port "emptydir-wrapper-2029/git-server-svc:http-portal" at 100.104.113.249:2345/TCP
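The proxier.go:793 entries are kube-proxy reporting the result of a capability probe: the node's iptables binary is too old to accept `--random-fully` on the MASQUERADE rule it installs. A minimal Go sketch of that kind of probe follows; it shells out to `iptables --version` and compares against 1.6.2, which is assumed here to be the first release carrying `--random-fully`, and the helper name is illustrative rather than kube-proxy's actual code.

package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"strconv"
	"strings"
)

// hasRandomFully reports whether the local iptables binary is new enough to
// accept --random-fully. Illustrative only: the 1.6.2 threshold is an
// assumption about when the flag was introduced.
func hasRandomFully() (bool, error) {
	out, err := exec.Command("iptables", "--version").Output()
	if err != nil {
		return false, err
	}
	// Typical output: "iptables v1.6.1 (legacy)"
	m := regexp.MustCompile(`v(\d+)\.(\d+)\.(\d+)`).FindStringSubmatch(string(out))
	if m == nil {
		return false, fmt.Errorf("cannot parse iptables version from %q", strings.TrimSpace(string(out)))
	}
	ver := make([]int, 3)
	for i := range ver {
		ver[i], _ = strconv.Atoi(m[i+1])
	}
	min := []int{1, 6, 2} // assumed minimum version for --random-fully
	for i := range ver {
		if ver[i] != min[i] {
			return ver[i] > min[i], nil
		}
	}
	return true, nil
}

func main() {
	ok, err := hasRandomFully()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	if !ok {
		fmt.Println("Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it")
	}
}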
\"emptydir-wrapper-2029/git-server-svc:http-portal\"\nI0111 18:58:04.069857 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:58:04.095168 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:58:34.121324 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:59:04.145755 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 18:59:34.170517 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:00:04.195902 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:00:34.221429 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:01:04.246760 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:01:30.814217 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 19:01:30.814263 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 19:01:34.271877 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 19:01:40.818322 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=36201&timeout=5m17s&timeoutSeconds=317&watch=true: net/http: TLS handshake timeout\nE0111 19:01:40.818568 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=36628&timeout=8m24s&timeoutSeconds=504&watch=true: net/http: TLS handshake timeout\nI0111 19:01:52.545373 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 19:01:52.545415 1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0111 19:01:55.737474 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)\nE0111 19:01:55.740084 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: unknown (get endpoints)\nI0111 19:02:04.299919 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:02:34.325338 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:03:04.351135 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not 
support it\nI0111 19:03:34.376685 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:04:04.403211 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:04:34.428489 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:05:04.458913 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:05:34.485627 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:06:04.511230 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:06:34.537068 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:07:04.561507 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:07:34.604712 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:08:04.643178 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:08:34.670582 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:09:04.701457 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:09:34.727037 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:10:04.752950 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:10:34.778577 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:11:04.804592 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:11:34.854968 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:12:04.885088 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:12:34.912843 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:13:04.937740 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:13:34.962261 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:14:04.986997 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 19:14:35.012155 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:15:05.037589 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:15:35.062893 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:16:05.088221 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:16:35.113565 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:17:05.139481 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:17:35.167478 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:18:05.192128 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:18:35.216874 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:19:05.241406 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:19:35.266363 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:20:05.292172 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:20:35.318501 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:21:05.344974 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:21:35.371290 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:22:05.396011 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:22:35.420897 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:23:05.445731 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:23:35.471002 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:24:05.496604 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:24:35.522094 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:25:05.548572 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 19:25:35.575175 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:26:05.600116 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:26:35.625251 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:27:05.651140 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:27:35.676439 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:28:05.701416 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:28:35.736986 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:29:05.761332 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:29:25.244205 1 service.go:357] Adding new service port \"nsdeletetest-1069/test-service:\" at 100.110.201.181:80/TCP\nI0111 19:29:25.265101 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:29:25.288380 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:29:30.480091 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:29:30.538431 1 service.go:382] Removing service port \"nsdeletetest-1069/test-service:\"\nI0111 19:29:30.559441 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:30:00.583882 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:30:30.608431 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:31:00.633329 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:31:30.671728 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:32:00.748236 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:32:30.779318 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:33:00.805075 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:33:30.830650 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:34:00.857501 1 proxier.go:793] Not using 
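The streamwatcher and reflector errors above (unexpected EOF, TLS handshake timeout, "unknown (get services)") are kube-proxy's client-go watches on Services and Endpoints being interrupted; the log continues normally afterwards because the reflector re-lists and re-watches on its own. Below is a minimal client-go sketch of that pattern, watching Services and logging adds and removals in the style of service.go; the kubeconfig path is a placeholder and the resync period is arbitrary.

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; kube-proxy itself normally runs with in-cluster credentials.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// The shared informer's reflector re-lists and re-watches after transient
	// errors such as an unexpected EOF or a TLS handshake timeout, which is why
	// the kube-proxy log above recovers without intervention.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()
	svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			svc := obj.(*v1.Service)
			fmt.Printf("Adding service %s/%s\n", svc.Namespace, svc.Name)
		},
		DeleteFunc: func(obj interface{}) {
			if svc, ok := obj.(*v1.Service); ok {
				fmt.Printf("Removing service %s/%s\n", svc.Namespace, svc.Name)
			}
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	cache.WaitForCacheSync(stop, svcInformer.HasSynced)
	select {} // keep watching until the process is killed
}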
I0111 19:34:34.076320 1 service.go:357] Adding new service port "services-8498/externalname-service:http" at 100.111.232.102:80/TCP
I0111 19:34:35.931482 1 service.go:357] Adding new service port "provisioning-888/csi-hostpath-attacher:dummy" at 100.107.151.111:12345/TCP
I0111 19:34:36.203656 1 service.go:357] Adding new service port "provisioning-888/csi-hostpathplugin:dummy" at 100.110.48.29:12345/TCP
I0111 19:34:36.386628 1 service.go:357] Adding new service port "provisioning-888/csi-hostpath-provisioner:dummy" at 100.111.110.38:12345/TCP
I0111 19:34:36.569398 1 service.go:357] Adding new service port "provisioning-888/csi-hostpath-resizer:dummy" at 100.108.56.238:12345/TCP
I0111 19:34:36.752238 1 service.go:357] Adding new service port "provisioning-888/csi-snapshotter:dummy" at 100.106.116.100:12345/TCP
I0111 19:34:46.338753 1 service.go:382] Removing service port "services-8498/externalname-service:http"
I0111 19:34:57.440781 1 service.go:357] Adding new service port "services-6365/hairpin-test:" at 100.104.2.229:8080/TCP
I0111 19:35:09.642978 1 service.go:382] Removing service port "services-6365/hairpin-test:"
I0111 19:35:10.774804 1 service.go:382] Removing service port "provisioning-888/csi-hostpath-attacher:dummy"
I0111 19:35:11.085238 1 service.go:382] Removing service port "provisioning-888/csi-hostpathplugin:dummy"
I0111 19:35:11.234906 1 service.go:382] Removing service port "provisioning-888/csi-hostpath-provisioner:dummy"
I0111 19:35:11.419730 1 service.go:382] Removing service port "provisioning-888/csi-hostpath-resizer:dummy"
I0111 19:35:11.606645 1 service.go:382] Removing service port "provisioning-888/csi-snapshotter:dummy"
I0111 19:35:40.495907 1 service.go:357] Adding new service port "webhook-9730/e2e-test-webhook:" at 100.104.55.61:8443/TCP
I0111 19:35:59.636802 1 service.go:382] Removing service port "webhook-9730/e2e-test-webhook:"
I0111 19:36:54.260717 1 service.go:357] Adding new service port "provisioning-9667/csi-hostpath-attacher:dummy" at 100.108.71.0:12345/TCP
I0111 19:36:54.533031 1 service.go:357] Adding new service port "provisioning-9667/csi-hostpathplugin:dummy" at 100.106.76.172:12345/TCP
I0111 19:36:54.718390 1 service.go:357] Adding new service port "provisioning-9667/csi-hostpath-provisioner:dummy" at 100.107.196.36:12345/TCP
I0111 19:36:54.906291 1 service.go:357] Adding new service port "provisioning-9667/csi-hostpath-resizer:dummy" at 100.110.45.244:12345/TCP
I0111 19:36:55.086228 1 service.go:357] Adding new service port "provisioning-9667/csi-snapshotter:dummy" at 100.109.116.128:12345/TCP
I0111 19:37:29.804507 1 service.go:357] Adding new service port "provisioning-3332/csi-hostpath-attacher:dummy" at 100.108.30.170:12345/TCP
I0111 19:37:30.079361 1 service.go:357] Adding new service port "provisioning-3332/csi-hostpathplugin:dummy" at 100.108.92.174:12345/TCP
I0111 19:37:30.262860 1 service.go:357] Adding new service port "provisioning-3332/csi-hostpath-provisioner:dummy" at 100.104.144.123:12345/TCP
I0111 19:37:30.446757 1 service.go:357] Adding new service port "provisioning-3332/csi-hostpath-resizer:dummy" at 100.110.205.43:12345/TCP
I0111 19:37:30.632112 1 service.go:357] Adding new service port "provisioning-3332/csi-snapshotter:dummy" at 100.110.240.69:12345/TCP
I0111 19:37:41.290670 1 service.go:357] Adding new service port "aggregator-7230/sample-api:" at 100.108.11.104:7443/TCP
I0111 19:37:46.286871 1 service.go:382] Removing service port "aggregator-7230/sample-api:"
I0111 19:38:53.498056 1 service.go:357] Adding new service port "ephemeral-1641/csi-hostpath-attacher:dummy" at 100.111.99.89:12345/TCP
I0111 19:38:53.772736 1 service.go:357] Adding new service port "ephemeral-1641/csi-hostpathplugin:dummy" at 100.106.163.81:12345/TCP
I0111 19:38:53.957194 1 service.go:357] Adding new service port "ephemeral-1641/csi-hostpath-provisioner:dummy" at 100.110.184.75:12345/TCP
I0111 19:38:54.208647 1 service.go:357] Adding new service port "ephemeral-1641/csi-hostpath-resizer:dummy" at 100.107.122.84:12345/TCP
I0111 19:38:54.327718 1 service.go:357] Adding new service port "ephemeral-1641/csi-snapshotter:dummy" at 100.105.43.140:12345/TCP
I0111 19:38:56.930976 1 service.go:382] Removing service port "provisioning-9667/csi-hostpath-attacher:dummy"
I0111 19:38:57.206428 1 service.go:382] Removing service port "provisioning-9667/csi-hostpathplugin:dummy"
I0111 19:38:57.393003 1 service.go:382] Removing service port "provisioning-9667/csi-hostpath-provisioner:dummy"
I0111 19:38:57.576771 1 service.go:382] Removing service port "provisioning-9667/csi-hostpath-resizer:dummy"
I0111 19:38:57.764695 1 service.go:382] Removing service port "provisioning-9667/csi-snapshotter:dummy"
I0111 19:39:36.395421 1 service.go:382] Removing service port "provisioning-3332/csi-hostpath-attacher:dummy"
I0111 19:39:36.714129 1 service.go:382] Removing service port "provisioning-3332/csi-hostpathplugin:dummy"
I0111 19:39:36.862925 1 service.go:382] Removing service port "provisioning-3332/csi-hostpath-provisioner:dummy"
I0111 19:39:37.051481 1 service.go:382] Removing service port "provisioning-3332/csi-hostpath-resizer:dummy"
I0111 19:39:37.238822 1 service.go:382] Removing service port "provisioning-3332/csi-snapshotter:dummy"
I0111 19:39:50.461561 1 service.go:357] Adding new service port "crd-webhook-4150/e2e-test-crd-conversion-webhook:" at 100.108.53.198:9443/TCP
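Each entry above identifies a service port as namespace/serviceName:portName (a trailing colon means the port is unnamed), together with its cluster IP, port, and protocol. A small sketch of composing that identifier from those fields; the struct and its field names are illustrative, not kube-proxy's internal types.

package main

import "fmt"

// servicePortKey mirrors the identifier format used in the log above, e.g.
// "provisioning-888/csi-hostpath-attacher:dummy" at 100.107.151.111:12345/TCP.
// The type and its fields are illustrative only.
type servicePortKey struct {
	Namespace string
	Service   string
	PortName  string // empty for unnamed ports, which is why some keys end in ":"
	ClusterIP string
	Port      int
	Protocol  string
}

func (k servicePortKey) String() string {
	return fmt.Sprintf("%s/%s:%s", k.Namespace, k.Service, k.PortName)
}

func main() {
	k := servicePortKey{
		Namespace: "provisioning-888",
		Service:   "csi-hostpath-attacher",
		PortName:  "dummy",
		ClusterIP: "100.107.151.111",
		Port:      12345,
		Protocol:  "TCP",
	}
	fmt.Printf("Adding new service port %q at %s:%d/%s\n", k.String(), k.ClusterIP, k.Port, k.Protocol)
	fmt.Printf("Removing service port %q\n", k.String())
}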
I0111 19:39:58.365650 1 service.go:382] Removing service port "crd-webhook-4150/e2e-test-crd-conversion-webhook:"
I0111 19:40:18.000173 1 service.go:357] Adding new service port "volumeio-3164/csi-hostpath-attacher:dummy" at 100.106.26.141:12345/TCP
I0111 19:40:18.274796 1 service.go:357] Adding new service port "volumeio-3164/csi-hostpathplugin:dummy" at 100.105.6.97:12345/TCP
I0111 19:40:18.462294 1 service.go:357] Adding new service port "volumeio-3164/csi-hostpath-provisioner:dummy" at 100.111.81.6:12345/TCP
I0111 19:40:18.663746 1 service.go:357] Adding new service port "volumeio-3164/csi-hostpath-resizer:dummy" at 100.104.246.169:12345/TCP
I0111 19:40:18.830058 1 service.go:357] Adding new service port "volumeio-3164/csi-snapshotter:dummy" at 100.107.227.112:12345/TCP
I0111 19:40:20.975162 1 service.go:357] Adding new service port "kubectl-16/redis-master:" at 100.111.154.89:6379/TCP
I0111 19:40:24.971800 1 service.go:382] Removing service port "ephemeral-1641/csi-hostpath-attacher:dummy"
I0111 19:40:25.251252 1 service.go:382] Removing service port "ephemeral-1641/csi-hostpathplugin:dummy"
I0111 19:40:25.441802 1 service.go:382] Removing service port "ephemeral-1641/csi-hostpath-provisioner:dummy"
I0111 19:40:25.627972 1 service.go:382] Removing service port "ephemeral-1641/csi-hostpath-resizer:dummy"
I0111 19:40:25.813948 1 service.go:382] Removing service port "ephemeral-1641/csi-snapshotter:dummy"
I0111 19:40:28.347880 1 service.go:382] Removing service port "kubectl-16/redis-master:"
I0111 19:41:10.387964 1 service.go:382] Removing service port "volumeio-3164/csi-hostpath-attacher:dummy"
I0111 19:41:10.666103 1 service.go:382] Removing service port "volumeio-3164/csi-hostpathplugin:dummy"
I0111 19:41:10.853793 1 service.go:382] Removing service port "volumeio-3164/csi-hostpath-provisioner:dummy"
I0111 19:41:11.041237 1 service.go:382] Removing service port "volumeio-3164/csi-hostpath-resizer:dummy"
I0111 19:41:11.228111 1 service.go:382] Removing service port "volumeio-3164/csi-snapshotter:dummy"
I0111 19:41:27.914134 1 service.go:357] Adding new service port "services-7435/nodeport-update-service:" at 100.105.182.168:80/TCP
I0111 19:41:28.098098 1 service.go:357] Adding new service port "services-7435/nodeport-update-service:tcp-port" at 100.105.182.168:80/TCP
I0111 19:41:28.098121 1 service.go:357] Adding new service port "services-7435/nodeport-update-service:udp-port" at 100.105.182.168:80/UDP
I0111 19:41:28.098131 1 service.go:382] Removing service port "services-7435/nodeport-update-service:"
I0111 19:41:28.120117 1 proxier.go:1519] Opened local port "nodePort for services-7435/nodeport-update-service:tcp-port" (:30723/tcp)
I0111 19:41:28.120254 1 proxier.go:1519] Opened local port "nodePort for services-7435/nodeport-update-service:udp-port" (:30691/udp)
I0111 19:41:28.189161 1 service.go:382] Removing service port "services-7435/nodeport-update-service:tcp-port"
I0111 19:41:28.189185 1 service.go:382] Removing service port "services-7435/nodeport-update-service:udp-port"
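The proxier.go:1519 entries just above ("Opened local port ... (:30723/tcp)" and "(:30691/udp)") record kube-proxy binding each NodePort on the node itself so that no other process can claim the port while iptables redirects the real traffic. A minimal sketch of holding a TCP and a UDP port open in that way, using the port numbers from the log; it is an illustration, not kube-proxy's implementation.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Hold the TCP NodePort so nothing else on the node can bind it.
	tcpHold, err := net.Listen("tcp", ":30723")
	if err != nil {
		panic(err)
	}
	defer tcpHold.Close()
	fmt.Println(`Opened local port "nodePort for services-7435/nodeport-update-service:tcp-port" (:30723/tcp)`)

	// Same for the UDP NodePort.
	udpHold, err := net.ListenPacket("udp", ":30691")
	if err != nil {
		panic(err)
	}
	defer udpHold.Close()
	fmt.Println(`Opened local port "nodePort for services-7435/nodeport-update-service:udp-port" (:30691/udp)`)

	select {} // keep the sockets open for the lifetime of the process
}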
it\nI0111 19:41:28.239066 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:50.339785 1 service.go:357] Adding new service port \"dns-5564/dns-test-service-3:http\" at 100.107.56.47:80/TCP\nI0111 19:41:50.369510 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:41:55.252920 1 service.go:382] Removing service port \"dns-5564/dns-test-service-3:http\"\nI0111 19:41:55.292916 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:25.340909 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:36.915999 1 service.go:357] Adding new service port \"volume-expand-8983/csi-hostpath-attacher:dummy\" at 100.106.199.203:12345/TCP\nI0111 19:42:36.938510 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:36.963101 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.188808 1 service.go:357] Adding new service port \"volume-expand-8983/csi-hostpathplugin:dummy\" at 100.111.155.218:12345/TCP\nI0111 19:42:37.211395 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.236259 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.374033 1 service.go:357] Adding new service port \"volume-expand-8983/csi-hostpath-provisioner:dummy\" at 100.106.228.17:12345/TCP\nI0111 19:42:37.400677 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.432899 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.556734 1 service.go:357] Adding new service port \"volume-expand-8983/csi-hostpath-resizer:dummy\" at 100.107.187.84:12345/TCP\nI0111 19:42:37.585996 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.610764 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.739896 1 service.go:357] Adding new service port \"volume-expand-8983/csi-snapshotter:dummy\" at 100.111.117.210:12345/TCP\nI0111 19:42:37.762552 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:37.796510 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:38.503587 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:39.558600 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
19:42:39.618438 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:39.643613 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:42:39.669060 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:09.707763 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.530161 1 service.go:357] Adding new service port \"provisioning-6240/csi-hostpath-attacher:dummy\" at 100.106.228.19:12345/TCP\nI0111 19:43:35.561240 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.623391 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.804945 1 service.go:357] Adding new service port \"provisioning-6240/csi-hostpathplugin:dummy\" at 100.111.245.108:12345/TCP\nI0111 19:43:35.842863 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.868782 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:35.989685 1 service.go:357] Adding new service port \"provisioning-6240/csi-hostpath-provisioner:dummy\" at 100.105.123.238:12345/TCP\nI0111 19:43:36.022065 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:36.089113 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:36.173904 1 service.go:357] Adding new service port \"provisioning-6240/csi-hostpath-resizer:dummy\" at 100.109.16.216:12345/TCP\nI0111 19:43:36.308736 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:36.360071 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:36.372234 1 service.go:357] Adding new service port \"provisioning-6240/csi-snapshotter:dummy\" at 100.111.11.210:12345/TCP\nI0111 19:43:36.415161 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:36.489915 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:37.913743 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:37.965000 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:38.019756 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:39.062465 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 19:43:39.184670 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:54.541538 1 service.go:382] Removing service port \"provisioning-6240/csi-hostpath-attacher:dummy\"\nI0111 19:43:54.568040 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:54.597588 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:54.821825 1 service.go:382] Removing service port \"provisioning-6240/csi-hostpathplugin:dummy\"\nI0111 19:43:54.848144 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:54.877625 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.009610 1 service.go:382] Removing service port \"provisioning-6240/csi-hostpath-provisioner:dummy\"\nI0111 19:43:55.035870 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.074648 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.196324 1 service.go:382] Removing service port \"provisioning-6240/csi-hostpath-resizer:dummy\"\nI0111 19:43:55.258582 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.305492 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.383058 1 service.go:382] Removing service port \"provisioning-6240/csi-snapshotter:dummy\"\nI0111 19:43:55.519577 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:43:55.576312 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:44:25.611861 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:44:55.670500 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:45:25.700749 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:45:55.735477 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:10.900164 1 service.go:357] Adding new service port \"proxy-5995/proxy-service-hnjh4:portname2\" at 100.109.130.35:81/TCP\nI0111 19:46:10.900194 1 service.go:357] Adding new service port \"proxy-5995/proxy-service-hnjh4:tlsportname1\" at 100.109.130.35:443/TCP\nI0111 19:46:10.900208 1 service.go:357] Adding new service port \"proxy-5995/proxy-service-hnjh4:tlsportname2\" at 100.109.130.35:444/TCP\nI0111 19:46:10.900221 1 service.go:357] Adding new service port 
\"proxy-5995/proxy-service-hnjh4:portname1\" at 100.109.130.35:80/TCP\nI0111 19:46:10.972873 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:11.032448 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:20.132378 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:23.682134 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:34.316619 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:34.323621 1 service.go:357] Adding new service port \"services-1603/affinity-clusterip:\" at 100.105.158.181:80/TCP\nI0111 19:46:34.356049 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:36.676874 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:36.719007 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:36.803806 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:39.135780 1 service.go:382] Removing service port \"proxy-5995/proxy-service-hnjh4:portname1\"\nI0111 19:46:39.135801 1 service.go:382] Removing service port \"proxy-5995/proxy-service-hnjh4:portname2\"\nI0111 19:46:39.135808 1 service.go:382] Removing service port \"proxy-5995/proxy-service-hnjh4:tlsportname1\"\nI0111 19:46:39.135815 1 service.go:382] Removing service port \"proxy-5995/proxy-service-hnjh4:tlsportname2\"\nI0111 19:46:39.167065 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:39.205447 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:52.750461 1 service.go:357] Adding new service port \"services-9361/clusterip-service:\" at 100.107.168.179:80/TCP\nI0111 19:46:52.782986 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:52.811352 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:52.844424 1 service.go:357] Adding new service port \"services-9361/externalsvc:\" at 100.109.178.43:80/TCP\nI0111 19:46:52.868441 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:52.903518 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:54.764872 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:55.451755 1 proxier.go:793] Not using `--random-fully` in the 
MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:46:56.309359 1 service.go:382] Removing service port \"services-9361/clusterip-service:\"\nI0111 19:46:56.334664 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:00.355021 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:13.910131 1 service.go:382] Removing service port \"services-9361/externalsvc:\"\nI0111 19:47:13.937605 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:13.977026 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:14.043727 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:14.888393 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:14.934329 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:23.924246 1 service.go:382] Removing service port \"services-1603/affinity-clusterip:\"\nI0111 19:47:23.948307 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:23.980743 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:31.695072 1 service.go:357] Adding new service port \"resourcequota-9564/test-service:\" at 100.104.148.118:80/TCP\nI0111 19:47:31.722011 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:33.876235 1 service.go:382] Removing service port \"resourcequota-9564/test-service:\"\nI0111 19:47:33.899623 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:36.669620 1 service.go:357] Adding new service port \"services-9378/nodeport-service:\" at 100.108.70.19:80/TCP\nI0111 19:47:36.693560 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:36.693815 1 proxier.go:1519] Opened local port \"nodePort for services-9378/nodeport-service:\" (:31216/tcp)\nI0111 19:47:36.721184 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:36.763647 1 service.go:357] Adding new service port \"services-9378/externalsvc:\" at 100.104.132.219:80/TCP\nI0111 19:47:36.786343 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:36.925690 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:38.170829 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 19:47:38.199703 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:40.229291 1 service.go:382] Removing service port \"services-9378/nodeport-service:\"\nI0111 19:47:40.270041 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:44.558504 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:44.587043 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.072209 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.114912 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.206782 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.576182 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.937319 1 service.go:382] Removing service port \"volume-expand-8983/csi-hostpath-attacher:dummy\"\nI0111 19:47:46.966239 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:46.992254 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.036189 1 service.go:382] Removing service port \"volume-expand-8983/csi-hostpath-provisioner:dummy\"\nI0111 19:47:47.068171 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.077593 1 service.go:382] Removing service port \"volume-expand-8983/csi-hostpath-resizer:dummy\"\nI0111 19:47:47.110303 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.136462 1 service.go:382] Removing service port \"volume-expand-8983/csi-hostpathplugin:dummy\"\nI0111 19:47:47.158569 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.184181 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.235808 1 service.go:382] Removing service port \"volume-expand-8983/csi-snapshotter:dummy\"\nI0111 19:47:47.258096 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:47.283936 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:53.850938 1 service.go:382] Removing service port \"services-9378/externalsvc:\"\nI0111 19:47:53.872875 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:53.897162 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:47:53.982195 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:07.390865 1 service.go:357] Adding new service port \"kubectl-8526/redis-slave:\" at 100.105.252.234:6379/TCP\nI0111 19:48:07.413168 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:07.457165 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:08.352426 1 service.go:357] Adding new service port \"kubectl-8526/redis-master:\" at 100.107.139.169:6379/TCP\nI0111 19:48:08.374781 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:08.398950 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:08.963725 1 service.go:357] Adding new service port \"kubectl-8526/frontend:\" at 100.105.185.251:80/TCP\nI0111 19:48:09.012147 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:09.037053 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:12.611743 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:12.724413 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:12.790977 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:12.845552 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:13.056976 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:13.909623 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:15.862920 1 service.go:357] Adding new service port \"provisioning-638/csi-hostpath-attacher:dummy\" at 100.107.106.222:12345/TCP\nI0111 19:48:15.888068 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:15.925671 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.135339 1 service.go:357] Adding new service port \"provisioning-638/csi-hostpathplugin:dummy\" at 100.106.73.214:12345/TCP\nI0111 19:48:16.157995 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.183625 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
19:48:16.318517 1 service.go:357] Adding new service port \"provisioning-638/csi-hostpath-provisioner:dummy\" at 100.111.163.16:12345/TCP\nI0111 19:48:16.341515 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.367487 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.501881 1 service.go:357] Adding new service port \"provisioning-638/csi-hostpath-resizer:dummy\" at 100.110.44.220:12345/TCP\nI0111 19:48:16.524875 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.684598 1 service.go:357] Adding new service port \"provisioning-638/csi-snapshotter:dummy\" at 100.105.65.0:12345/TCP\nI0111 19:48:16.707843 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.856846 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:16.883193 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.382702 1 service.go:357] Adding new service port \"volume-expand-7991/csi-hostpath-attacher:dummy\" at 100.111.160.134:12345/TCP\nI0111 19:48:17.405702 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.464782 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.557582 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.655664 1 service.go:357] Adding new service port \"volume-expand-7991/csi-hostpathplugin:dummy\" at 100.110.232.122:12345/TCP\nI0111 19:48:17.678797 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.767175 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.838486 1 service.go:357] Adding new service port \"volume-expand-7991/csi-hostpath-provisioner:dummy\" at 100.105.131.50:12345/TCP\nI0111 19:48:17.885634 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:17.914739 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.021753 1 service.go:357] Adding new service port \"volume-expand-7991/csi-hostpath-resizer:dummy\" at 100.105.90.149:12345/TCP\nI0111 19:48:18.058997 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.091171 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.205024 1 service.go:357] Adding new service port \"volume-expand-7991/csi-snapshotter:dummy\" at 
100.109.232.139:12345/TCP\nI0111 19:48:18.253453 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.298330 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.346355 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:18.486054 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:19.498505 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:19.558429 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:19.622372 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:20.497072 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:20.533006 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:20.569822 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:28.202894 1 service.go:382] Removing service port \"kubectl-8526/redis-slave:\"\nI0111 19:48:28.231119 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:28.262075 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:28.721898 1 service.go:382] Removing service port \"kubectl-8526/redis-master:\"\nI0111 19:48:28.754897 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:28.786600 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:29.239401 1 service.go:382] Removing service port \"kubectl-8526/frontend:\"\nI0111 19:48:29.266276 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:29.296976 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:32.754848 1 service.go:382] Removing service port \"provisioning-638/csi-hostpath-attacher:dummy\"\nI0111 19:48:32.782723 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:32.814373 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.032117 1 service.go:382] Removing service port \"provisioning-638/csi-hostpathplugin:dummy\"\nI0111 19:48:33.060609 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because 
the local version of iptables does not support it\nI0111 19:48:33.091147 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.218638 1 service.go:382] Removing service port \"provisioning-638/csi-hostpath-provisioner:dummy\"\nI0111 19:48:33.248506 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.279163 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.404717 1 service.go:382] Removing service port \"provisioning-638/csi-hostpath-resizer:dummy\"\nI0111 19:48:33.432030 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.484809 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.592866 1 service.go:382] Removing service port \"provisioning-638/csi-snapshotter:dummy\"\nI0111 19:48:33.622863 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:33.668984 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:53.634952 1 service.go:382] Removing service port \"volume-expand-7991/csi-hostpath-attacher:dummy\"\nI0111 19:48:53.668098 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:53.696319 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:53.911376 1 service.go:382] Removing service port \"volume-expand-7991/csi-hostpathplugin:dummy\"\nI0111 19:48:53.940407 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:53.983211 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.095967 1 service.go:382] Removing service port \"volume-expand-7991/csi-hostpath-provisioner:dummy\"\nI0111 19:48:54.130435 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.158064 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.281214 1 service.go:382] Removing service port \"volume-expand-7991/csi-hostpath-resizer:dummy\"\nI0111 19:48:54.307353 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.338423 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:48:54.465799 1 service.go:382] Removing service port \"volume-expand-7991/csi-snapshotter:dummy\"\nI0111 19:48:54.528996 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
19:48:54.582082 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:22.528560 1 service.go:357] Adding new service port \"provisioning-4625/csi-hostpath-attacher:dummy\" at 100.107.20.95:12345/TCP\nI0111 19:49:22.562313 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:22.600916 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:22.801045 1 service.go:357] Adding new service port \"provisioning-4625/csi-hostpathplugin:dummy\" at 100.110.254.231:12345/TCP\nI0111 19:49:22.837806 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:22.876388 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:22.983707 1 service.go:357] Adding new service port \"provisioning-4625/csi-hostpath-provisioner:dummy\" at 100.106.61.221:12345/TCP\nI0111 19:49:23.019879 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:23.078746 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:23.166418 1 service.go:357] Adding new service port \"provisioning-4625/csi-hostpath-resizer:dummy\" at 100.111.197.158:12345/TCP\nI0111 19:49:23.286796 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:23.326564 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:23.349874 1 service.go:357] Adding new service port \"provisioning-4625/csi-snapshotter:dummy\" at 100.111.200.248:12345/TCP\nI0111 19:49:23.384640 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:24.257338 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:24.807899 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:26.165485 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:26.198260 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:26.257037 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:26.283585 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:44.816747 1 service.go:357] Adding new service port \"webhook-5741/e2e-test-webhook:\" at 100.109.54.96:8443/TCP\nI0111 19:49:44.859790 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 19:49:44.897359 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:47.691053 1 service.go:382] Removing service port \"provisioning-4625/csi-hostpath-attacher:dummy\"\nI0111 19:49:47.721604 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:47.768731 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:47.970935 1 service.go:382] Removing service port \"provisioning-4625/csi-hostpathplugin:dummy\"\nI0111 19:49:48.012663 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.039200 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.158066 1 service.go:382] Removing service port \"provisioning-4625/csi-hostpath-provisioner:dummy\"\nI0111 19:49:48.182582 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.209386 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.344127 1 service.go:382] Removing service port \"provisioning-4625/csi-hostpath-resizer:dummy\"\nI0111 19:49:48.369372 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.405009 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.531572 1 service.go:382] Removing service port \"provisioning-4625/csi-snapshotter:dummy\"\nI0111 19:49:48.565585 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:48.592972 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:52.440488 1 service.go:382] Removing service port \"webhook-5741/e2e-test-webhook:\"\nI0111 19:49:52.485237 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:49:52.529356 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.201706 1 service.go:357] Adding new service port \"volume-2441/csi-hostpath-attacher:dummy\" at 100.109.220.55:12345/TCP\nI0111 19:50:02.226747 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.258313 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.474083 1 service.go:357] Adding new service port \"volume-2441/csi-hostpathplugin:dummy\" at 100.107.75.156:12345/TCP\nI0111 19:50:02.498862 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
19:50:02.559103 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.657278 1 service.go:357] Adding new service port \"volume-2441/csi-hostpath-provisioner:dummy\" at 100.107.43.184:12345/TCP\nI0111 19:50:02.681564 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.759267 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.840767 1 service.go:357] Adding new service port \"volume-2441/csi-hostpath-resizer:dummy\" at 100.104.164.149:12345/TCP\nI0111 19:50:02.862292 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:02.886711 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:03.023413 1 service.go:357] Adding new service port \"volume-2441/csi-snapshotter:dummy\" at 100.106.125.148:12345/TCP\nI0111 19:50:03.045353 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:03.069198 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:04.477104 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:04.735875 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:05.740474 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:05.764971 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:05.789343 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:30.829688 1 service.go:357] Adding new service port \"volumemode-2792/csi-hostpath-attacher:dummy\" at 100.110.54.141:12345/TCP\nI0111 19:50:30.852973 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:30.878438 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.103970 1 service.go:357] Adding new service port \"volumemode-2792/csi-hostpathplugin:dummy\" at 100.110.67.54:12345/TCP\nI0111 19:50:31.127247 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.152810 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.288368 1 service.go:357] Adding new service port \"volumemode-2792/csi-hostpath-provisioner:dummy\" at 100.109.204.185:12345/TCP\nI0111 19:50:31.311132 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 19:50:31.337475 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.472441 1 service.go:357] Adding new service port \"volumemode-2792/csi-hostpath-resizer:dummy\" at 100.111.197.170:12345/TCP\nI0111 19:50:31.501467 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.527101 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.657156 1 service.go:357] Adding new service port \"volumemode-2792/csi-snapshotter:dummy\" at 100.111.104.208:12345/TCP\nI0111 19:50:31.679792 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:31.758003 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:34.361006 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:50:34.387487 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.187426 1 service.go:382] Removing service port \"volume-2441/csi-hostpath-attacher:dummy\"\nI0111 19:51:02.211640 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.239084 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.466259 1 service.go:382] Removing service port \"volume-2441/csi-hostpathplugin:dummy\"\nI0111 19:51:02.491488 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.518676 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.651658 1 service.go:382] Removing service port \"volume-2441/csi-hostpath-provisioner:dummy\"\nI0111 19:51:02.677105 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.704387 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.847355 1 service.go:382] Removing service port \"volume-2441/csi-hostpath-resizer:dummy\"\nI0111 19:51:02.871808 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:02.898562 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:03.037746 1 service.go:382] Removing service port \"volume-2441/csi-snapshotter:dummy\"\nI0111 19:51:03.080351 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:03.108218 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 19:51:05.916422 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.187464 1 service.go:357] Adding new service port \"ephemeral-9708/csi-hostpath-attacher:dummy\" at 100.108.86.106:12345/TCP\nI0111 19:51:09.211267 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.238572 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.461412 1 service.go:357] Adding new service port \"ephemeral-9708/csi-hostpathplugin:dummy\" at 100.106.68.123:12345/TCP\nI0111 19:51:09.487477 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.514230 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.644512 1 service.go:357] Adding new service port \"ephemeral-9708/csi-hostpath-provisioner:dummy\" at 100.104.179.221:12345/TCP\nI0111 19:51:09.669560 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.696467 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.828247 1 service.go:357] Adding new service port \"ephemeral-9708/csi-hostpath-resizer:dummy\" at 100.110.139.128:12345/TCP\nI0111 19:51:09.852285 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:09.884671 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:10.012393 1 service.go:357] Adding new service port \"ephemeral-9708/csi-snapshotter:dummy\" at 100.108.245.40:12345/TCP\nI0111 19:51:10.036883 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:10.064391 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:11.290207 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:11.460206 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:12.440058 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:12.470982 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:21.731930 1 service.go:382] Removing service port \"volumemode-2792/csi-hostpath-attacher:dummy\"\nI0111 19:51:21.756958 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:21.784408 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the 
local version of iptables does not support it\nI0111 19:51:22.014099 1 service.go:382] Removing service port \"volumemode-2792/csi-hostpathplugin:dummy\"\nI0111 19:51:22.059725 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.101466 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.199813 1 service.go:382] Removing service port \"volumemode-2792/csi-hostpath-provisioner:dummy\"\nI0111 19:51:22.221920 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.246865 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.386192 1 service.go:382] Removing service port \"volumemode-2792/csi-hostpath-resizer:dummy\"\nI0111 19:51:22.408621 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.433733 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.572783 1 service.go:382] Removing service port \"volumemode-2792/csi-snapshotter:dummy\"\nI0111 19:51:22.594912 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:22.619858 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:24.904500 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:51:54.931348 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.033720 1 service.go:382] Removing service port \"ephemeral-9708/csi-hostpath-attacher:dummy\"\nI0111 19:52:03.068411 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.126747 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.312045 1 service.go:382] Removing service port \"ephemeral-9708/csi-hostpathplugin:dummy\"\nI0111 19:52:03.350820 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.403892 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.499033 1 service.go:382] Removing service port \"ephemeral-9708/csi-hostpath-provisioner:dummy\"\nI0111 19:52:03.523854 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.548405 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.685758 1 service.go:382] Removing service port \"ephemeral-9708/csi-hostpath-resizer:dummy\"\nI0111 19:52:03.720769 1 proxier.go:793] 
Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.753211 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.873534 1 service.go:382] Removing service port \"ephemeral-9708/csi-snapshotter:dummy\"\nI0111 19:52:03.897353 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:03.924507 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:23.303649 1 service.go:357] Adding new service port \"services-4413/affinity-clusterip-transition:\" at 100.110.87.16:80/TCP\nI0111 19:52:23.327373 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:23.353408 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:24.574584 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:24.664036 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:24.691502 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:32.684984 1 service.go:359] Updating existing service port \"services-4413/affinity-clusterip-transition:\" at 100.110.87.16:80/TCP\nI0111 19:52:32.712240 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:33.345198 1 service.go:357] Adding new service port \"webhook-4534/e2e-test-webhook:\" at 100.105.99.124:8443/TCP\nI0111 19:52:33.373404 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:33.423812 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:37.570532 1 service.go:359] Updating existing service port \"services-4413/affinity-clusterip-transition:\" at 100.110.87.16:80/TCP\nI0111 19:52:37.594475 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:40.475318 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:40.585793 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:52:41.535439 1 service.go:382] Removing service port \"webhook-4534/e2e-test-webhook:\"\nI0111 19:52:41.565058 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:08.588191 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:08.630607 1 proxier.go:793] Not using `--random-fully` in the 
MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:20.477193 1 service.go:357] Adding new service port \"webhook-9767/e2e-test-webhook:\" at 100.111.238.71:8443/TCP\nI0111 19:53:20.513906 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:20.547475 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:24.037221 1 service.go:382] Removing service port \"services-4413/affinity-clusterip-transition:\"\nI0111 19:53:24.073350 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:24.113548 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:28.859736 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:29.153690 1 service.go:382] Removing service port \"webhook-9767/e2e-test-webhook:\"\nI0111 19:53:29.177220 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:45.243876 1 service.go:357] Adding new service port \"services-3432/externalname-service:http\" at 100.109.111.210:80/TCP\nI0111 19:53:45.268788 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:45.269032 1 proxier.go:1519] Opened local port \"nodePort for services-3432/externalname-service:http\" (:31921/tcp)\nI0111 19:53:45.296558 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:46.307593 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:46.970063 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:57.381490 1 service.go:382] Removing service port \"services-3432/externalname-service:http\"\nI0111 19:53:57.405634 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:57.439218 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:58.262368 1 service.go:357] Adding new service port \"nettest-5543/node-port-service:http\" at 100.106.24.223:80/TCP\nI0111 19:53:58.262393 1 service.go:357] Adding new service port \"nettest-5543/node-port-service:udp\" at 100.106.24.223:90/UDP\nI0111 19:53:58.286538 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:58.286688 1 proxier.go:1519] Opened local port \"nodePort for nettest-5543/node-port-service:http\" (:31082/tcp)\nI0111 19:53:58.286851 1 proxier.go:1519] Opened local port \"nodePort for nettest-5543/node-port-service:udp\" (:30133/udp)\nI0111 19:53:58.295092 1 proxier.go:700] Stale udp service nettest-5543/node-port-service:udp -> 100.106.24.223\nI0111 19:53:58.318196 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:58.538353 1 service.go:357] Adding new service port \"nettest-5543/session-affinity-service:udp\" at 100.105.121.222:90/UDP\nI0111 19:53:58.538444 1 service.go:357] Adding new service port \"nettest-5543/session-affinity-service:http\" at 100.105.121.222:80/TCP\nI0111 19:53:58.563409 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:53:58.563742 1 proxier.go:1519] Opened local port \"nodePort for nettest-5543/session-affinity-service:udp\" (:32119/udp)\nI0111 19:53:58.568255 1 proxier.go:1519] Opened local port \"nodePort for nettest-5543/session-affinity-service:http\" (:32285/tcp)\nI0111 19:53:58.572777 1 proxier.go:700] Stale udp service nettest-5543/session-affinity-service:udp -> 100.105.121.222\nI0111 19:53:58.596091 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:05.751610 1 service.go:382] Removing service port \"nettest-5543/node-port-service:http\"\nI0111 19:54:05.751629 1 service.go:382] Removing service port \"nettest-5543/node-port-service:udp\"\nI0111 19:54:05.775756 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:05.816800 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:15.819770 1 service.go:357] Adding new service port \"webhook-6368/e2e-test-webhook:\" at 100.104.42.180:8443/TCP\nI0111 19:54:15.844115 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:15.871350 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:23.442020 1 service.go:382] Removing service port \"webhook-6368/e2e-test-webhook:\"\nI0111 19:54:23.466877 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:23.494389 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:54:53.522635 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:55:23.550833 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:55:53.581449 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:04.976941 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:05.349055 1 service.go:382] Removing service port \"nettest-5543/session-affinity-service:http\"\nI0111 19:56:05.349082 1 service.go:382] Removing service port \"nettest-5543/session-affinity-service:udp\"\nI0111 19:56:05.394613 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
it\nI0111 19:56:23.877681 1 service.go:357] Adding new service port \"webhook-7214/e2e-test-webhook:\" at 100.105.108.252:8443/TCP\nI0111 19:56:23.901304 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:23.928013 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:31.428792 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:56:31.754858 1 service.go:382] Removing service port \"webhook-7214/e2e-test-webhook:\"\nI0111 19:56:31.780449 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:01.809835 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:31.837687 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.534391 1 service.go:357] Adding new service port \"provisioning-2263/csi-hostpath-attacher:dummy\" at 100.109.147.107:12345/TCP\nI0111 19:57:35.562792 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.604257 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.806877 1 service.go:357] Adding new service port \"provisioning-2263/csi-hostpathplugin:dummy\" at 100.108.205.78:12345/TCP\nI0111 19:57:35.831882 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.858537 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:35.989863 1 service.go:357] Adding new service port \"provisioning-2263/csi-hostpath-provisioner:dummy\" at 100.105.56.161:12345/TCP\nI0111 19:57:36.016279 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.172010 1 service.go:357] Adding new service port \"provisioning-2263/csi-hostpath-resizer:dummy\" at 100.108.81.63:12345/TCP\nI0111 19:57:36.196341 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.355989 1 service.go:357] Adding new service port \"provisioning-2263/csi-snapshotter:dummy\" at 100.104.107.87:12345/TCP\nI0111 19:57:36.379790 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.466477 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.504958 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:36.665753 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:38.087788 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:39.114644 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:39.142146 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:45.358850 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:57:45.398979 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:01.895714 1 service.go:382] Removing service port \"provisioning-2263/csi-hostpath-attacher:dummy\"\nI0111 19:58:01.922731 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:01.952081 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.172369 1 service.go:382] Removing service port \"provisioning-2263/csi-hostpathplugin:dummy\"\nI0111 19:58:02.200121 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.229631 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.357233 1 service.go:382] Removing service port \"provisioning-2263/csi-hostpath-provisioner:dummy\"\nI0111 19:58:02.383698 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.417738 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.541728 1 service.go:382] Removing service port \"provisioning-2263/csi-hostpath-resizer:dummy\"\nI0111 19:58:02.566167 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.592596 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.726450 1 service.go:382] Removing service port \"provisioning-2263/csi-snapshotter:dummy\"\nI0111 19:58:02.750068 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:02.776677 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:58:32.803681 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:59:02.831791 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 19:59:32.860672 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:01.603340 1 service.go:357] Adding new service port \"dns-8433/test-service-2:http\" at 
100.110.223.87:80/TCP\nI0111 20:00:01.634225 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:01.674131 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:03.369237 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:33.395496 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:39.078711 1 service.go:357] Adding new service port \"webhook-1074/e2e-test-webhook:\" at 100.105.33.138:8443/TCP\nI0111 20:00:39.102697 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:39.129964 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:39.732724 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:39.739847 1 service.go:382] Removing service port \"dns-8433/test-service-2:http\"\nI0111 20:00:39.786615 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:46.963691 1 service.go:382] Removing service port \"webhook-1074/e2e-test-webhook:\"\nI0111 20:00:46.998718 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:00:47.046644 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:17.078921 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:40.522355 1 service.go:357] Adding new service port \"services-3570/affinity-nodeport-transition:\" at 100.106.99.75:80/TCP\nI0111 20:01:40.546200 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:40.546494 1 proxier.go:1519] Opened local port \"nodePort for services-3570/affinity-nodeport-transition:\" (:31636/tcp)\nI0111 20:01:40.573330 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:42.456127 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:42.484558 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:52.704252 1 service.go:359] Updating existing service port \"services-3570/affinity-nodeport-transition:\" at 100.106.99.75:80/TCP\nI0111 20:01:52.730225 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:01:57.528908 1 service.go:359] Updating existing service port \"services-3570/affinity-nodeport-transition:\" at 100.106.99.75:80/TCP\nI0111 20:01:57.560089 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:00.171754 1 service.go:357] Adding new service port \"pods-6711/fooservice:\" at 100.110.244.63:8765/TCP\nI0111 20:02:00.198062 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:00.224438 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:08.987885 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:09.739905 1 service.go:382] Removing service port \"pods-6711/fooservice:\"\nI0111 20:02:09.777502 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:25.002578 1 service.go:357] Adding new service port \"kubectl-5845/rm2:\" at 100.106.94.54:1234/TCP\nI0111 20:02:25.027590 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:25.058616 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:27.837690 1 service.go:357] Adding new service port \"kubectl-5845/rm3:\" at 100.107.40.0:2345/TCP\nI0111 20:02:27.863766 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:27.892207 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:28.584375 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:28.621626 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:35.335978 1 service.go:382] Removing service port \"kubectl-5845/rm2:\"\nI0111 20:02:35.360434 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:35.364588 1 service.go:382] Removing service port \"kubectl-5845/rm3:\"\nI0111 20:02:35.395184 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.132608 1 service.go:357] Adding new service port \"provisioning-5877/csi-hostpath-attacher:dummy\" at 100.106.177.230:12345/TCP\nI0111 20:02:39.156501 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.183021 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.406792 1 service.go:357] Adding new service port \"provisioning-5877/csi-hostpathplugin:dummy\" at 100.104.110.160:12345/TCP\nI0111 20:02:39.437499 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.464022 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 20:02:39.598094 1 service.go:357] Adding new service port \"provisioning-5877/csi-hostpath-provisioner:dummy\" at 100.107.109.180:12345/TCP\nI0111 20:02:39.622918 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.649831 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.780108 1 service.go:357] Adding new service port \"provisioning-5877/csi-hostpath-resizer:dummy\" at 100.106.98.2:12345/TCP\nI0111 20:02:39.805344 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.832158 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:39.963869 1 service.go:357] Adding new service port \"provisioning-5877/csi-snapshotter:dummy\" at 100.108.180.144:12345/TCP\nI0111 20:02:40.004464 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:40.032751 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:41.245147 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:41.272202 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:42.281453 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:42.308988 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:43.935748 1 service.go:382] Removing service port \"services-3570/affinity-nodeport-transition:\"\nI0111 20:02:43.958374 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:43.983923 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.344491 1 service.go:357] Adding new service port \"volume-expand-1929/csi-hostpath-attacher:dummy\" at 100.108.247.151:12345/TCP\nI0111 20:02:56.367263 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.399684 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.618926 1 service.go:357] Adding new service port \"volume-expand-1929/csi-hostpathplugin:dummy\" at 100.109.220.123:12345/TCP\nI0111 20:02:56.641577 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.667015 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.802780 1 service.go:357] Adding new service port 
\"volume-expand-1929/csi-hostpath-provisioner:dummy\" at 100.108.230.252:12345/TCP\nI0111 20:02:56.825462 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.851252 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:56.986035 1 service.go:357] Adding new service port \"volume-expand-1929/csi-hostpath-resizer:dummy\" at 100.107.85.243:12345/TCP\nI0111 20:02:57.008844 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:57.034382 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:57.169860 1 service.go:357] Adding new service port \"volume-expand-1929/csi-snapshotter:dummy\" at 100.111.105.38:12345/TCP\nI0111 20:02:57.210128 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:57.237104 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:57.858456 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:58.859434 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:58.885578 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:02:58.958263 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.516224 1 service.go:382] Removing service port \"provisioning-5877/csi-hostpath-attacher:dummy\"\nI0111 20:03:01.540121 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.567013 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.795102 1 service.go:382] Removing service port \"provisioning-5877/csi-hostpathplugin:dummy\"\nI0111 20:03:01.818241 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.844690 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:01.981500 1 service.go:382] Removing service port \"provisioning-5877/csi-hostpath-provisioner:dummy\"\nI0111 20:03:02.011601 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:02.076778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:02.169342 1 service.go:382] Removing service port \"provisioning-5877/csi-hostpath-resizer:dummy\"\nI0111 20:03:02.192083 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does 
not support it\nI0111 20:03:02.224742 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:02.355780 1 service.go:382] Removing service port \"provisioning-5877/csi-snapshotter:dummy\"\nI0111 20:03:02.379482 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:02.405358 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:04.609969 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:19.312226 1 service.go:357] Adding new service port \"webhook-4016/e2e-test-webhook:\" at 100.109.213.198:8443/TCP\nI0111 20:03:19.343774 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:19.394453 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:40.667378 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:03:40.673784 1 service.go:382] Removing service port \"webhook-4016/e2e-test-webhook:\"\nI0111 20:03:40.708353 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:05.031153 1 service.go:357] Adding new service port \"services-2057/sourceip-test:\" at 100.106.18.136:8080/TCP\nI0111 20:04:05.052944 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:05.078593 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:06.439355 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:12.871124 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:12.928919 1 service.go:382] Removing service port \"services-2057/sourceip-test:\"\nI0111 20:04:12.957870 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:13.025210 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:40.604402 1 service.go:357] Adding new service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:04:40.629128 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:40.629385 1 proxier.go:1519] Opened local port \"nodePort for services-6943/lb-finalizer:\" (:32029/tcp)\nI0111 20:04:40.863513 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:42.068260 1 service.go:359] Updating existing service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:04:42.092218 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:43.323875 1 service.go:359] Updating existing service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:04:43.345289 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:43.990573 1 service.go:359] Updating existing service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:04:44.011913 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.516746 1 service.go:382] Removing service port \"volume-expand-1929/csi-hostpath-attacher:dummy\"\nI0111 20:04:47.538089 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.572162 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.794294 1 service.go:382] Removing service port \"volume-expand-1929/csi-hostpathplugin:dummy\"\nI0111 20:04:47.815145 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.839365 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:47.981935 1 service.go:382] Removing service port \"volume-expand-1929/csi-hostpath-provisioner:dummy\"\nI0111 20:04:48.003178 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:48.027328 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:48.171669 1 service.go:382] Removing service port \"volume-expand-1929/csi-hostpath-resizer:dummy\"\nI0111 20:04:48.198783 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:48.222192 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:48.360586 1 service.go:382] Removing service port \"volume-expand-1929/csi-snapshotter:dummy\"\nI0111 20:04:48.382220 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:04:48.405720 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:13.870498 1 service.go:359] Updating existing service port \"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:05:13.902258 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:13.902785 1 proxier.go:1519] Opened local port \"nodePort for services-6943/lb-finalizer:\" (:32125/tcp)\nI0111 20:05:14.169590 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:15.311637 1 service.go:359] Updating existing service port 
\"services-6943/lb-finalizer:\" at 100.107.10.25:80/TCP\nI0111 20:05:15.346111 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:15.383025 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:15.699590 1 service.go:382] Removing service port \"services-6943/lb-finalizer:\"\nI0111 20:05:15.738336 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:15.779682 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:05:45.808377 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.040993 1 service.go:357] Adding new service port \"provisioning-5738/csi-hostpath-attacher:dummy\" at 100.104.206.73:12345/TCP\nI0111 20:06:12.093332 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.154434 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.315808 1 service.go:357] Adding new service port \"provisioning-5738/csi-hostpathplugin:dummy\" at 100.105.45.215:12345/TCP\nI0111 20:06:12.339607 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.376673 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.499051 1 service.go:357] Adding new service port \"provisioning-5738/csi-hostpath-provisioner:dummy\" at 100.108.206.199:12345/TCP\nI0111 20:06:12.535702 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.581384 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.682173 1 service.go:357] Adding new service port \"provisioning-5738/csi-hostpath-resizer:dummy\" at 100.110.205.209:12345/TCP\nI0111 20:06:12.710782 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.754087 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.865332 1 service.go:357] Adding new service port \"provisioning-5738/csi-snapshotter:dummy\" at 100.109.37.67:12345/TCP\nI0111 20:06:12.920120 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:12.981635 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:15.066324 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:15.115335 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 20:06:15.206461 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:15.238953 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.507436 1 service.go:357] Adding new service port \"provisioning-1947/csi-hostpath-attacher:dummy\" at 100.110.98.47:12345/TCP\nI0111 20:06:26.533947 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.563395 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.782967 1 service.go:357] Adding new service port \"provisioning-1947/csi-hostpathplugin:dummy\" at 100.105.32.126:12345/TCP\nI0111 20:06:26.817481 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.843949 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:26.969460 1 service.go:357] Adding new service port \"provisioning-1947/csi-hostpath-provisioner:dummy\" at 100.107.75.248:12345/TCP\nI0111 20:06:27.013731 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.043682 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.153439 1 service.go:357] Adding new service port \"provisioning-1947/csi-hostpath-resizer:dummy\" at 100.104.32.32:12345/TCP\nI0111 20:06:27.179681 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.208887 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.337130 1 service.go:357] Adding new service port \"provisioning-1947/csi-snapshotter:dummy\" at 100.107.242.130:12345/TCP\nI0111 20:06:27.363788 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:27.392698 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:30.666564 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:30.797822 1 service.go:357] Adding new service port \"volume-expand-8205/csi-hostpath-attacher:dummy\" at 100.110.87.213:12345/TCP\nI0111 20:06:30.824875 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:30.866738 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:30.898793 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:30.966401 1 proxier.go:793] Not 
using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.073149 1 service.go:357] Adding new service port \"volume-expand-8205/csi-hostpathplugin:dummy\" at 100.109.55.191:12345/TCP\nI0111 20:06:31.119722 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.159303 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.258842 1 service.go:357] Adding new service port \"volume-expand-8205/csi-hostpath-provisioner:dummy\" at 100.104.240.231:12345/TCP\nI0111 20:06:31.282046 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.372239 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.443491 1 service.go:357] Adding new service port \"volume-expand-8205/csi-hostpath-resizer:dummy\" at 100.108.14.115:12345/TCP\nI0111 20:06:31.471611 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.604386 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.627494 1 service.go:357] Adding new service port \"volume-expand-8205/csi-snapshotter:dummy\" at 100.105.147.217:12345/TCP\nI0111 20:06:31.666268 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:31.723671 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:33.977363 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:34.160939 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:34.274608 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:34.307078 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:34.366216 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:34.406990 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.479217 1 service.go:382] Removing service port \"provisioning-5738/csi-hostpath-attacher:dummy\"\nI0111 20:06:44.513691 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.558838 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.757845 1 service.go:382] Removing service port \"provisioning-5738/csi-hostpathplugin:dummy\"\nI0111 20:06:44.790514 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.826802 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:44.944819 1 service.go:382] Removing service port \"provisioning-5738/csi-hostpath-provisioner:dummy\"\nI0111 20:06:44.986940 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.027096 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.132008 1 service.go:382] Removing service port \"provisioning-5738/csi-hostpath-resizer:dummy\"\nI0111 20:06:45.158452 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.188278 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.318862 1 service.go:382] Removing service port \"provisioning-5738/csi-snapshotter:dummy\"\nI0111 20:06:45.353104 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:45.383011 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:06:48.645991 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.303913 1 service.go:382] Removing service port \"provisioning-1947/csi-hostpath-attacher:dummy\"\nI0111 20:07:01.334144 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.361700 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.583218 1 service.go:382] Removing service port \"provisioning-1947/csi-hostpathplugin:dummy\"\nI0111 20:07:01.609745 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.643499 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.770496 1 service.go:382] Removing service port \"provisioning-1947/csi-hostpath-provisioner:dummy\"\nI0111 20:07:01.800444 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.841301 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:01.958347 1 service.go:382] Removing service port \"provisioning-1947/csi-hostpath-resizer:dummy\"\nI0111 20:07:01.998093 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:02.037778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:02.146085 1 service.go:382] Removing service port 
\"provisioning-1947/csi-snapshotter:dummy\"\nI0111 20:07:02.177278 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:02.204356 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:32.241440 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:36.882514 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:44.969411 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.241450 1 service.go:382] Removing service port \"volume-expand-8205/csi-hostpath-attacher:dummy\"\nI0111 20:07:47.270810 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.310553 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.523546 1 service.go:382] Removing service port \"volume-expand-8205/csi-hostpathplugin:dummy\"\nI0111 20:07:47.555493 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.627635 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.712546 1 service.go:382] Removing service port \"volume-expand-8205/csi-hostpath-provisioner:dummy\"\nI0111 20:07:47.763325 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.814392 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.901197 1 service.go:382] Removing service port \"volume-expand-8205/csi-hostpath-resizer:dummy\"\nI0111 20:07:47.937010 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:47.984158 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:48.089666 1 service.go:382] Removing service port \"volume-expand-8205/csi-snapshotter:dummy\"\nI0111 20:07:48.131265 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:07:48.167925 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.226750 1 service.go:357] Adding new service port \"provisioning-8445/csi-hostpath-attacher:dummy\" at 100.110.236.175:12345/TCP\nI0111 20:08:06.257441 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.284266 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.500721 1 service.go:357] Adding 
new service port \"provisioning-8445/csi-hostpathplugin:dummy\" at 100.104.179.192:12345/TCP\nI0111 20:08:06.531268 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.557307 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.684921 1 service.go:357] Adding new service port \"provisioning-8445/csi-hostpath-provisioner:dummy\" at 100.110.104.0:12345/TCP\nI0111 20:08:06.715440 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.763297 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.871690 1 service.go:357] Adding new service port \"provisioning-8445/csi-hostpath-resizer:dummy\" at 100.107.38.225:12345/TCP\nI0111 20:08:06.908666 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:06.974018 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:07.054787 1 service.go:357] Adding new service port \"provisioning-8445/csi-snapshotter:dummy\" at 100.105.69.48:12345/TCP\nI0111 20:08:07.087754 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:07.193888 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:08.116705 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:09.562154 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:09.597054 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:09.764208 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.393738 1 service.go:357] Adding new service port \"volume-1340/csi-hostpath-attacher:dummy\" at 100.107.93.182:12345/TCP\nI0111 20:08:17.453936 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.497071 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.666365 1 service.go:357] Adding new service port \"volume-1340/csi-hostpathplugin:dummy\" at 100.108.84.118:12345/TCP\nI0111 20:08:17.707708 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.745144 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:17.849992 1 service.go:357] Adding new service port \"volume-1340/csi-hostpath-provisioner:dummy\" at 100.110.186.113:12345/TCP\nI0111 20:08:17.873225 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:18.033751 1 service.go:357] Adding new service port \"volume-1340/csi-hostpath-resizer:dummy\" at 100.111.255.119:12345/TCP\nI0111 20:08:18.087130 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:18.130122 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:18.217337 1 service.go:357] Adding new service port \"volume-1340/csi-snapshotter:dummy\" at 100.107.22.67:12345/TCP\nI0111 20:08:18.246761 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:18.384731 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:19.384891 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:20.700794 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:20.738016 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:20.770295 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:20.797840 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.215735 1 service.go:382] Removing service port \"provisioning-8445/csi-hostpath-attacher:dummy\"\nI0111 20:08:23.240670 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.270003 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.495062 1 service.go:382] Removing service port \"provisioning-8445/csi-hostpathplugin:dummy\"\nI0111 20:08:23.520213 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.547866 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.684969 1 service.go:382] Removing service port \"provisioning-8445/csi-hostpath-provisioner:dummy\"\nI0111 20:08:23.709731 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.750769 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.872146 1 service.go:382] Removing service port \"provisioning-8445/csi-hostpath-resizer:dummy\"\nI0111 20:08:23.937503 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:23.985897 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables 
because the local version of iptables does not support it\nI0111 20:08:24.059148 1 service.go:382] Removing service port \"provisioning-8445/csi-snapshotter:dummy\"\nI0111 20:08:24.096340 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:24.142892 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:28.591396 1 service.go:357] Adding new service port \"webhook-9223/e2e-test-webhook:\" at 100.107.34.101:8443/TCP\nI0111 20:08:28.616533 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:28.645065 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:37.352918 1 service.go:382] Removing service port \"webhook-9223/e2e-test-webhook:\"\nI0111 20:08:37.377137 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:08:37.404344 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:07.454011 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 20:09:07.459914 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 159 failed\n)\nI0111 20:09:07.459996 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 20:09:15.503185 1 service.go:357] Adding new service port \"webhook-1039/e2e-test-webhook:\" at 100.104.233.181:8443/TCP\nI0111 20:09:15.533778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:15.567459 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:18.772662 1 service.go:357] Adding new service port \"network-7217/boom-server:\" at 100.104.44.53:9000/TCP\nI0111 20:09:18.800675 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:18.838766 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:23.338135 1 service.go:382] Removing service port \"webhook-1039/e2e-test-webhook:\"\nI0111 20:09:23.380079 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:23.535546 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:26.956474 1 service.go:382] Removing service port \"volume-1340/csi-hostpath-attacher:dummy\"\nI0111 20:09:26.986283 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.011172 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.236107 1 service.go:382] Removing service 
port \"volume-1340/csi-hostpathplugin:dummy\"\nI0111 20:09:27.259679 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.285641 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.425910 1 service.go:382] Removing service port \"volume-1340/csi-hostpath-provisioner:dummy\"\nI0111 20:09:27.448345 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.474479 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.614125 1 service.go:382] Removing service port \"volume-1340/csi-hostpath-resizer:dummy\"\nI0111 20:09:27.643133 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.669102 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.803926 1 service.go:382] Removing service port \"volume-1340/csi-snapshotter:dummy\"\nI0111 20:09:27.827511 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:27.853191 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:34.710894 1 service.go:357] Adding new service port \"services-8930/nodeport-test:http\" at 100.111.48.61:80/TCP\nI0111 20:09:34.737768 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:34.739774 1 proxier.go:1519] Opened local port \"nodePort for services-8930/nodeport-test:http\" (:31523/tcp)\nI0111 20:09:34.781016 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:35.886421 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:36.888931 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:47.813324 1 service.go:357] Adding new service port \"webhook-9977/e2e-test-webhook:\" at 100.107.155.214:8443/TCP\nI0111 20:09:47.953161 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:47.999870 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:51.842213 1 service.go:382] Removing service port \"services-8930/nodeport-test:http\"\nI0111 20:09:51.867325 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:51.895742 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:55.725755 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 20:09:56.050548 1 service.go:382] Removing service port \"webhook-9977/e2e-test-webhook:\"\nI0111 20:09:56.090457 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:56.142837 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:56.973396 1 service.go:357] Adding new service port \"services-4609/nodeports:tcp-port\" at 100.105.31.240:53/TCP\nI0111 20:09:56.973423 1 service.go:357] Adding new service port \"services-4609/nodeports:udp-port\" at 100.105.31.240:53/UDP\nI0111 20:09:57.009435 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:57.012864 1 proxier.go:1519] Opened local port \"nodePort for services-4609/nodeports:tcp-port\" (:31219/tcp)\nI0111 20:09:57.012925 1 proxier.go:1519] Opened local port \"nodePort for services-4609/nodeports:udp-port\" (:31219/udp)\nI0111 20:09:57.061234 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:57.065390 1 service.go:382] Removing service port \"services-4609/nodeports:tcp-port\"\nI0111 20:09:57.065409 1 service.go:382] Removing service port \"services-4609/nodeports:udp-port\"\nI0111 20:09:57.088976 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:09:57.119731 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:10:26.167877 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:10:26.269782 1 service.go:382] Removing service port \"network-7217/boom-server:\"\nI0111 20:10:26.316290 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:10:26.352668 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:10:56.382905 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:11:26.418681 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:11:56.446406 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.082075 1 service.go:357] Adding new service port \"volume-expand-1240/csi-hostpath-attacher:dummy\" at 100.109.110.58:12345/TCP\nI0111 20:12:17.111295 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.153739 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.354455 1 service.go:357] Adding new service port \"volume-expand-1240/csi-hostpathplugin:dummy\" at 100.107.205.142:12345/TCP\nI0111 20:12:17.379341 1 proxier.go:793] Not using `--random-fully` 
in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.407472 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.538212 1 service.go:357] Adding new service port \"volume-expand-1240/csi-hostpath-provisioner:dummy\" at 100.110.254.121:12345/TCP\nI0111 20:12:17.562977 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.590370 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.721658 1 service.go:357] Adding new service port \"volume-expand-1240/csi-hostpath-resizer:dummy\" at 100.108.23.63:12345/TCP\nI0111 20:12:17.765237 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.805740 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 20:12:17.812451 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 128 failed\n)\nI0111 20:12:17.812551 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 20:12:17.992409 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:17.998246 1 service.go:357] Adding new service port \"volume-expand-1240/csi-snapshotter:dummy\" at 100.107.242.127:12345/TCP\nI0111 20:12:18.038347 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:20.089154 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:20.117249 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:20.175633 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:20.218836 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:50.263362 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:58.428658 1 service.go:382] Removing service port \"volume-expand-1240/csi-hostpath-attacher:dummy\"\nI0111 20:12:58.453150 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:58.479778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:58.705968 1 service.go:382] Removing service port \"volume-expand-1240/csi-hostpathplugin:dummy\"\nI0111 20:12:58.730118 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:58.756870 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not 
support it\nI0111 20:12:58.897975 1 service.go:382] Removing service port \"volume-expand-1240/csi-hostpath-provisioner:dummy\"\nI0111 20:12:58.921914 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:58.948713 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:59.083186 1 service.go:382] Removing service port \"volume-expand-1240/csi-hostpath-resizer:dummy\"\nI0111 20:12:59.107127 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:59.132257 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:59.268501 1 service.go:382] Removing service port \"volume-expand-1240/csi-snapshotter:dummy\"\nI0111 20:12:59.296916 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:12:59.329619 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:18.206661 1 service.go:357] Adding new service port \"services-4281/endpoint-test2:\" at 100.107.132.97:80/TCP\nI0111 20:13:18.232636 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:18.268333 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:20.369950 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:22.425590 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:23.961835 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:24.228194 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:24.375433 1 service.go:382] Removing service port \"services-4281/endpoint-test2:\"\nI0111 20:13:24.414745 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:24.454797 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.265790 1 service.go:357] Adding new service port \"provisioning-5271/csi-hostpath-attacher:dummy\" at 100.109.111.235:12345/TCP\nI0111 20:13:34.292712 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.322905 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.540139 1 service.go:357] Adding new service port \"provisioning-5271/csi-hostpathplugin:dummy\" at 100.108.41.132:12345/TCP\nI0111 20:13:34.567381 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables 
because the local version of iptables does not support it\nI0111 20:13:34.597398 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.727349 1 service.go:357] Adding new service port \"provisioning-5271/csi-hostpath-provisioner:dummy\" at 100.105.231.186:12345/TCP\nI0111 20:13:34.761767 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.810196 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:34.915906 1 service.go:357] Adding new service port \"provisioning-5271/csi-hostpath-resizer:dummy\" at 100.104.109.64:12345/TCP\nI0111 20:13:34.955384 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:35.002964 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:35.098969 1 service.go:357] Adding new service port \"provisioning-5271/csi-snapshotter:dummy\" at 100.109.28.157:12345/TCP\nI0111 20:13:35.132822 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:35.474466 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:36.085820 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:36.981589 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:37.025847 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:13:38.017629 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:04.691434 1 service.go:382] Removing service port \"provisioning-5271/csi-hostpath-attacher:dummy\"\nI0111 20:14:04.715990 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:04.742950 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:04.973766 1 service.go:382] Removing service port \"provisioning-5271/csi-hostpathplugin:dummy\"\nI0111 20:14:05.012246 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.059778 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.163335 1 service.go:382] Removing service port \"provisioning-5271/csi-hostpath-provisioner:dummy\"\nI0111 20:14:05.192597 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.226835 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 20:14:05.352537 1 service.go:382] Removing service port \"provisioning-5271/csi-hostpath-resizer:dummy\"\nI0111 20:14:05.401312 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.444065 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.540424 1 service.go:382] Removing service port \"provisioning-5271/csi-snapshotter:dummy\"\nI0111 20:14:05.615549 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:05.660798 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:16.762034 1 service.go:357] Adding new service port \"webhook-9848/e2e-test-webhook:\" at 100.110.42.241:8443/TCP\nI0111 20:14:16.785984 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:16.812343 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:23.488962 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:23.737673 1 service.go:382] Removing service port \"webhook-9848/e2e-test-webhook:\"\nI0111 20:14:23.785376 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:23.823508 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:45.730729 1 service.go:357] Adding new service port \"webhook-1622/e2e-test-webhook:\" at 100.104.101.200:8443/TCP\nI0111 20:14:45.761561 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:45.788459 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:52.573455 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:14:52.579058 1 service.go:382] Removing service port \"webhook-1622/e2e-test-webhook:\"\nI0111 20:14:52.614725 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:11.389538 1 service.go:357] Adding new service port \"webhook-1400/e2e-test-webhook:\" at 100.111.254.59:8443/TCP\nI0111 20:15:11.413718 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:11.439982 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:13.489143 1 service.go:357] Adding new service port \"provisioning-1550/csi-hostpath-attacher:dummy\" at 100.106.144.153:12345/TCP\nI0111 20:15:13.513272 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not 
support it\nI0111 20:15:13.562286 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:13.765130 1 service.go:357] Adding new service port \"provisioning-1550/csi-hostpathplugin:dummy\" at 100.111.192.91:12345/TCP\nI0111 20:15:13.822256 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:13.855932 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:13.948398 1 service.go:357] Adding new service port \"provisioning-1550/csi-hostpath-provisioner:dummy\" at 100.106.130.64:12345/TCP\nI0111 20:15:13.970364 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:13.995220 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:14.131862 1 service.go:357] Adding new service port \"provisioning-1550/csi-hostpath-resizer:dummy\" at 100.109.67.138:12345/TCP\nI0111 20:15:14.158569 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:14.224452 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:14.314877 1 service.go:357] Adding new service port \"provisioning-1550/csi-snapshotter:dummy\" at 100.110.64.214:12345/TCP\nI0111 20:15:14.371832 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:14.422719 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:16.118543 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:16.170479 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:17.061451 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:17.104624 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:19.016050 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:19.047442 1 service.go:382] Removing service port \"webhook-1400/e2e-test-webhook:\"\nI0111 20:15:19.078539 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:19.113128 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.072384 1 service.go:357] Adding new service port \"volumemode-2239/csi-hostpath-attacher:dummy\" at 100.105.190.109:12345/TCP\nI0111 20:15:36.106414 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does 
not support it\nI0111 20:15:36.142579 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.344479 1 service.go:357] Adding new service port \"volumemode-2239/csi-hostpathplugin:dummy\" at 100.107.177.223:12345/TCP\nI0111 20:15:36.380867 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.421829 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.527717 1 service.go:357] Adding new service port \"volumemode-2239/csi-hostpath-provisioner:dummy\" at 100.106.117.161:12345/TCP\nI0111 20:15:36.570632 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.637247 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.711158 1 service.go:357] Adding new service port \"volumemode-2239/csi-hostpath-resizer:dummy\" at 100.105.246.16:12345/TCP\nI0111 20:15:36.747679 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.786236 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.894374 1 service.go:357] Adding new service port \"volumemode-2239/csi-snapshotter:dummy\" at 100.106.238.61:12345/TCP\nI0111 20:15:36.924435 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:36.998830 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:38.192860 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:39.202763 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:39.264564 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:39.299917 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.181079 1 service.go:382] Removing service port \"provisioning-1550/csi-hostpath-attacher:dummy\"\nI0111 20:15:58.209222 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.241902 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.458517 1 service.go:382] Removing service port \"provisioning-1550/csi-hostpathplugin:dummy\"\nI0111 20:15:58.487245 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.518333 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
it\nI0111 20:15:58.644331 1 service.go:382] Removing service port \"provisioning-1550/csi-hostpath-provisioner:dummy\"\nI0111 20:15:58.676696 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.707004 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.829681 1 service.go:382] Removing service port \"provisioning-1550/csi-hostpath-resizer:dummy\"\nI0111 20:15:58.868860 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:58.900053 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:59.015049 1 service.go:382] Removing service port \"provisioning-1550/csi-snapshotter:dummy\"\nI0111 20:15:59.042836 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:15:59.084599 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:02.834134 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:04.451431 1 service.go:357] Adding new service port \"webhook-4029/e2e-test-webhook:\" at 100.107.227.33:8443/TCP\nI0111 20:16:04.476600 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:04.562075 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:12.870915 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:13.039855 1 service.go:382] Removing service port \"webhook-4029/e2e-test-webhook:\"\nI0111 20:16:13.066004 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:21.510995 1 service.go:382] Removing service port \"volumemode-2239/csi-hostpath-attacher:dummy\"\nI0111 20:16:21.535494 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:21.562901 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:21.787694 1 service.go:382] Removing service port \"volumemode-2239/csi-hostpathplugin:dummy\"\nI0111 20:16:21.812221 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:21.845073 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:21.974406 1 service.go:382] Removing service port \"volumemode-2239/csi-hostpath-provisioner:dummy\"\nI0111 20:16:22.005092 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.053559 1 proxier.go:793] Not using 
`--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.168906 1 service.go:382] Removing service port \"volumemode-2239/csi-hostpath-resizer:dummy\"\nI0111 20:16:22.237904 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.281830 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.354173 1 service.go:382] Removing service port \"volumemode-2239/csi-snapshotter:dummy\"\nI0111 20:16:22.390255 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:22.432356 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:16:52.471979 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:08.303536 1 service.go:357] Adding new service port \"services-9670/nodeport-reuse:\" at 100.106.55.71:80/TCP\nI0111 20:17:08.326078 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:08.326331 1 proxier.go:1519] Opened local port \"nodePort for services-9670/nodeport-reuse:\" (:30304/tcp)\nI0111 20:17:08.351042 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:08.394921 1 service.go:382] Removing service port \"services-9670/nodeport-reuse:\"\nI0111 20:17:08.436206 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:08.492499 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:12.163175 1 service.go:357] Adding new service port \"services-9670/nodeport-reuse:\" at 100.105.245.146:80/TCP\nI0111 20:17:12.193773 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:12.194000 1 proxier.go:1519] Opened local port \"nodePort for services-9670/nodeport-reuse:\" (:30304/tcp)\nI0111 20:17:12.218766 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:12.254336 1 service.go:382] Removing service port \"services-9670/nodeport-reuse:\"\nI0111 20:17:12.284610 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:12.309260 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:13.612152 1 service.go:357] Adding new service port \"ephemeral-1155/csi-hostpath-attacher:dummy\" at 100.108.187.111:12345/TCP\nI0111 20:17:13.635019 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:13.667811 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of 
iptables does not support it\nI0111 20:17:13.885820 1 service.go:357] Adding new service port \"ephemeral-1155/csi-hostpathplugin:dummy\" at 100.108.1.251:12345/TCP\nI0111 20:17:13.908281 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:13.959378 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.067909 1 service.go:357] Adding new service port \"ephemeral-1155/csi-hostpath-provisioner:dummy\" at 100.106.54.117:12345/TCP\nI0111 20:17:14.090827 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.116435 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.249731 1 service.go:357] Adding new service port \"ephemeral-1155/csi-hostpath-resizer:dummy\" at 100.106.193.173:12345/TCP\nI0111 20:17:14.271951 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.297302 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.432268 1 service.go:357] Adding new service port \"ephemeral-1155/csi-snapshotter:dummy\" at 100.108.55.173:12345/TCP\nI0111 20:17:14.454869 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:14.480112 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:15.951653 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:17.010607 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:17.037504 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:17.064119 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:47.104488 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:56.693380 1 service.go:382] Removing service port \"ephemeral-1155/csi-hostpath-attacher:dummy\"\nI0111 20:17:56.717506 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:56.744797 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:56.975005 1 service.go:382] Removing service port \"ephemeral-1155/csi-hostpathplugin:dummy\"\nI0111 20:17:56.999864 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.026823 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support 
it\nI0111 20:17:57.161551 1 service.go:382] Removing service port \"ephemeral-1155/csi-hostpath-provisioner:dummy\"\nI0111 20:17:57.185913 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.212916 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.347297 1 service.go:382] Removing service port \"ephemeral-1155/csi-hostpath-resizer:dummy\"\nI0111 20:17:57.372242 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.398919 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.533564 1 service.go:382] Removing service port \"ephemeral-1155/csi-snapshotter:dummy\"\nI0111 20:17:57.557505 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:17:57.584143 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:18:27.612758 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:18:57.650711 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 20:18:57.656431 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 124 failed\n)\nI0111 20:18:57.656513 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 20:19:20.482763 1 service.go:357] Adding new service port \"ephemeral-3918/csi-hostpath-attacher:dummy\" at 100.104.122.168:12345/TCP\nI0111 20:19:20.507699 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:20.560975 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:20.758305 1 service.go:357] Adding new service port \"ephemeral-3918/csi-hostpathplugin:dummy\" at 100.104.61.109:12345/TCP\nI0111 20:19:20.783935 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:20.970126 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:20.989898 1 service.go:357] Adding new service port \"ephemeral-3918/csi-hostpath-provisioner:dummy\" at 100.107.120.175:12345/TCP\nI0111 20:19:21.022167 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:21.127121 1 service.go:357] Adding new service port \"ephemeral-3918/csi-hostpath-resizer:dummy\" at 100.109.209.103:12345/TCP\nI0111 20:19:21.149901 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:21.181513 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
20:19:21.309903 1 service.go:357] Adding new service port \"ephemeral-3918/csi-snapshotter:dummy\" at 100.111.111.111:12345/TCP\nI0111 20:19:21.351948 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:21.486439 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:22.922317 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:23.908740 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:24.059156 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:24.090439 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:19:54.121039 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:01.564554 1 service.go:382] Removing service port \"ephemeral-3918/csi-hostpath-attacher:dummy\"\nI0111 20:20:01.591290 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:01.620117 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:01.856901 1 service.go:382] Removing service port \"ephemeral-3918/csi-hostpathplugin:dummy\"\nI0111 20:20:01.890255 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:01.919621 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.044799 1 service.go:382] Removing service port \"ephemeral-3918/csi-hostpath-provisioner:dummy\"\nI0111 20:20:02.087097 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.134634 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.283629 1 service.go:382] Removing service port \"ephemeral-3918/csi-hostpath-resizer:dummy\"\nI0111 20:20:02.321711 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.382895 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.470075 1 service.go:382] Removing service port \"ephemeral-3918/csi-snapshotter:dummy\"\nI0111 20:20:02.506298 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:02.553269 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:20:32.584246 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version 
of iptables does not support it\nI0111 20:21:02.612516 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:24.909244 1 service.go:357] Adding new service port \"kubectl-2777/redis-master:\" at 100.107.0.220:6379/TCP\nI0111 20:21:24.932962 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:24.959270 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:25.718558 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:35.082254 1 service.go:382] Removing service port \"kubectl-2777/redis-master:\"\nI0111 20:21:35.105588 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:21:35.138894 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:22:05.170696 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:22:35.200827 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:05.237015 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:13.447662 1 service.go:357] Adding new service port \"services-7215/tolerate-unready:http\" at 100.104.48.229:80/TCP\nI0111 20:23:13.485102 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:13.521304 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:14.323149 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:23.662774 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:27.223036 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:28.801124 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:29.134593 1 service.go:382] Removing service port \"services-7215/tolerate-unready:http\"\nI0111 20:23:29.168833 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:29.215342 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:23:59.244065 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:18.389642 1 service.go:357] Adding new service port \"dns-8876/test-service-2:http\" at 100.108.55.126:80/TCP\nI0111 20:24:18.414484 1 
proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:18.441486 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:19.879993 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:49.908872 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:54.816661 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:54.836475 1 service.go:382] Removing service port \"dns-8876/test-service-2:http\"\nI0111 20:24:54.871428 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:24:54.910380 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:25:24.939094 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:25:54.966137 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:24.994338 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.620497 1 service.go:357] Adding new service port \"services-6600/nodeport-collision-1:\" at 100.111.80.57:80/TCP\nI0111 20:26:25.644680 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.644896 1 proxier.go:1519] Opened local port \"nodePort for services-6600/nodeport-collision-1:\" (:30488/tcp)\nI0111 20:26:25.671210 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.807399 1 service.go:382] Removing service port \"services-6600/nodeport-collision-1:\"\nI0111 20:26:25.831814 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.858380 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.908875 1 service.go:357] Adding new service port \"services-6600/nodeport-collision-2:\" at 100.111.95.62:80/TCP\nI0111 20:26:25.932415 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.932659 1 proxier.go:1519] Opened local port \"nodePort for services-6600/nodeport-collision-2:\" (:30488/tcp)\nI0111 20:26:25.960187 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:25.999958 1 service.go:382] Removing service port \"services-6600/nodeport-collision-2:\"\nI0111 20:26:26.023060 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
20:26:26.050183 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:26:56.087455 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:27:26.170470 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:27:54.438915 1 service.go:357] Adding new service port \"webhook-7189/e2e-test-webhook:\" at 100.106.240.100:8443/TCP\nI0111 20:27:54.472550 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:27:54.513420 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:02.774571 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:02.778460 1 service.go:382] Removing service port \"webhook-7189/e2e-test-webhook:\"\nI0111 20:28:02.801294 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:32.840212 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:59.607162 1 service.go:357] Adding new service port \"services-1480/affinity-nodeport:\" at 100.108.78.161:80/TCP\nI0111 20:28:59.629819 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:28:59.630009 1 proxier.go:1519] Opened local port \"nodePort for services-1480/affinity-nodeport:\" (:31995/tcp)\nI0111 20:28:59.655971 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:01.183793 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:02.153308 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:02.196063 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:16.892656 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-s6rdv:\" at 100.110.50.165:80/TCP\nI0111 20:29:16.918972 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:16.954985 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:16.991823 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-q7krl:\" at 100.104.210.104:80/TCP\nI0111 20:29:17.017475 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.022039 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-s2b7g:\" at 100.109.34.243:80/TCP\nI0111 20:29:17.022061 1 service.go:357] Adding new service port 
\"svc-latency-7431/latency-svc-cqmxc:\" at 100.108.159.113:80/TCP\nI0111 20:29:17.047618 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.076027 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wblts:\" at 100.106.49.242:80/TCP\nI0111 20:29:17.103908 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.108727 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-ncb5r:\" at 100.109.10.194:80/TCP\nI0111 20:29:17.108746 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-2jj8w:\" at 100.108.132.249:80/TCP\nI0111 20:29:17.108758 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-mxv6b:\" at 100.105.37.245:80/TCP\nI0111 20:29:17.108769 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-7487x:\" at 100.109.158.98:80/TCP\nI0111 20:29:17.108778 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5ps2n:\" at 100.108.97.89:80/TCP\nI0111 20:29:17.108788 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-zlkgw:\" at 100.111.6.117:80/TCP\nI0111 20:29:17.108799 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-52zhd:\" at 100.111.77.195:80/TCP\nI0111 20:29:17.132468 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.137622 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bz8mw:\" at 100.110.69.72:80/TCP\nI0111 20:29:17.137642 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-mthv6:\" at 100.109.45.97:80/TCP\nI0111 20:29:17.137654 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-gl9mt:\" at 100.104.212.161:80/TCP\nI0111 20:29:17.137664 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8wwgc:\" at 100.111.85.170:80/TCP\nI0111 20:29:17.137675 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sz7fx:\" at 100.106.29.46:80/TCP\nI0111 20:29:17.137685 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fjjc4:\" at 100.110.151.105:80/TCP\nI0111 20:29:17.161209 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.166713 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-m2zvw:\" at 100.111.195.89:80/TCP\nI0111 20:29:17.191444 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.197055 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xb97n:\" at 100.105.255.28:80/TCP\nI0111 20:29:17.197074 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-x7bxp:\" at 100.106.205.191:80/TCP\nI0111 20:29:17.197086 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9xzhq:\" at 100.105.45.12:80/TCP\nI0111 20:29:17.197096 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-ts8zr:\" at 100.111.248.209:80/TCP\nI0111 20:29:17.232578 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.241503 1 service.go:357] Adding 
new service port \"svc-latency-7431/latency-svc-p5qn4:\" at 100.107.189.31:80/TCP\nI0111 20:29:17.241637 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c7c64:\" at 100.104.26.166:80/TCP\nI0111 20:29:17.241723 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6z7qf:\" at 100.107.183.82:80/TCP\nI0111 20:29:17.241803 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8dgcz:\" at 100.104.245.255:80/TCP\nI0111 20:29:17.241892 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-nqjfp:\" at 100.105.216.233:80/TCP\nI0111 20:29:17.242159 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wbrp9:\" at 100.106.100.240:80/TCP\nI0111 20:29:17.282722 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.292566 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wgjmq:\" at 100.107.16.213:80/TCP\nI0111 20:29:17.292633 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sqjqx:\" at 100.106.141.177:80/TCP\nI0111 20:29:17.292659 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-w859m:\" at 100.104.215.70:80/TCP\nI0111 20:29:17.292723 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wc6hr:\" at 100.109.222.15:80/TCP\nI0111 20:29:17.292778 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-g5jvm:\" at 100.111.115.57:80/TCP\nI0111 20:29:17.292834 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bq2lc:\" at 100.106.106.97:80/TCP\nI0111 20:29:17.292870 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xdhk5:\" at 100.106.180.87:80/TCP\nI0111 20:29:17.292922 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-h9dnx:\" at 100.110.247.203:80/TCP\nI0111 20:29:17.403057 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.410445 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-58ct8:\" at 100.105.90.93:80/TCP\nI0111 20:29:17.410466 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-lw287:\" at 100.111.143.26:80/TCP\nI0111 20:29:17.410491 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-94qtf:\" at 100.106.145.155:80/TCP\nI0111 20:29:17.410557 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-rs69c:\" at 100.105.1.118:80/TCP\nI0111 20:29:17.410618 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-pg55q:\" at 100.111.133.104:80/TCP\nI0111 20:29:17.410694 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9kzqd:\" at 100.107.201.138:80/TCP\nI0111 20:29:17.410709 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-cbkqt:\" at 100.109.228.244:80/TCP\nI0111 20:29:17.410721 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jkck5:\" at 100.107.219.32:80/TCP\nI0111 20:29:17.410766 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-kxh8b:\" at 100.104.132.231:80/TCP\nI0111 20:29:17.410804 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fbg4b:\" at 100.108.9.77:80/TCP\nI0111 20:29:17.410883 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-992f9:\" at 100.109.10.48:80/TCP\nI0111 20:29:17.410902 
1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jnplv:\" at 100.111.194.181:80/TCP\nI0111 20:29:17.410983 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c7khz:\" at 100.105.204.238:80/TCP\nI0111 20:29:17.410999 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-7zmw2:\" at 100.105.132.238:80/TCP\nI0111 20:29:17.411010 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-p4257:\" at 100.105.13.247:80/TCP\nI0111 20:29:17.436005 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.443659 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9ljvn:\" at 100.110.183.93:80/TCP\nI0111 20:29:17.469130 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.502652 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.510034 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-npx27:\" at 100.107.8.132:80/TCP\nI0111 20:29:17.535689 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.543611 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hk25w:\" at 100.111.70.131:80/TCP\nI0111 20:29:17.569869 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.618500 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.630234 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-pczsx:\" at 100.104.206.235:80/TCP\nI0111 20:29:17.667105 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.678179 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fc7z5:\" at 100.109.1.158:80/TCP\nI0111 20:29:17.713516 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.725373 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6dhzz:\" at 100.106.85.123:80/TCP\nI0111 20:29:17.762463 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.773352 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vlpcx:\" at 100.110.56.135:80/TCP\nI0111 20:29:17.811591 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.823288 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jbr4n:\" at 100.108.107.10:80/TCP\nI0111 20:29:17.865340 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.874640 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-r4d9z:\" at 100.108.124.213:80/TCP\nI0111 20:29:17.911169 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for 
iptables because the local version of iptables does not support it\nI0111 20:29:17.923256 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vwvc6:\" at 100.108.170.188:80/TCP\nI0111 20:29:17.957295 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:17.965443 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-4s2lt:\" at 100.110.234.154:80/TCP\nI0111 20:29:17.992712 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.000878 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-7dg5w:\" at 100.107.85.81:80/TCP\nI0111 20:29:18.029448 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.038622 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fdplr:\" at 100.106.82.184:80/TCP\nI0111 20:29:18.073676 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.088239 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-m2r4k:\" at 100.105.34.36:80/TCP\nI0111 20:29:18.127261 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.140736 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fhrxk:\" at 100.104.91.118:80/TCP\nI0111 20:29:18.179500 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.188391 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-kvtbx:\" at 100.110.180.178:80/TCP\nI0111 20:29:18.216428 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.257014 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.266137 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6kl97:\" at 100.106.230.235:80/TCP\nI0111 20:29:18.293896 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.309169 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-kndd7:\" at 100.108.250.72:80/TCP\nI0111 20:29:18.344986 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.353593 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-k529k:\" at 100.109.188.200:80/TCP\nI0111 20:29:18.382029 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.390935 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6xztd:\" at 100.106.15.1:80/TCP\nI0111 20:29:18.419048 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.455556 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because 
the local version of iptables does not support it\nI0111 20:29:18.464298 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hx4c9:\" at 100.108.18.30:80/TCP\nI0111 20:29:18.492165 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.506269 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xp7z4:\" at 100.106.239.169:80/TCP\nI0111 20:29:18.549306 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.565094 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-pvn95:\" at 100.111.238.203:80/TCP\nI0111 20:29:18.607113 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.619954 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xjpbr:\" at 100.104.141.186:80/TCP\nI0111 20:29:18.649805 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.659087 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-kbd7z:\" at 100.110.227.85:80/TCP\nI0111 20:29:18.687475 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.697085 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5qnq6:\" at 100.104.135.167:80/TCP\nI0111 20:29:18.725845 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.738277 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jwks9:\" at 100.109.201.181:80/TCP\nI0111 20:29:18.767156 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.805728 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.815342 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5r6k6:\" at 100.104.54.3:80/TCP\nI0111 20:29:18.844694 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.854240 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sz22t:\" at 100.111.189.31:80/TCP\nI0111 20:29:18.882884 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.892699 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hzpx5:\" at 100.107.14.246:80/TCP\nI0111 20:29:18.921792 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.967246 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:18.977041 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-txx5w:\" at 100.109.37.31:80/TCP\nI0111 20:29:19.008070 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version 
of iptables does not support it\nI0111 20:29:19.017898 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-rlg5v:\" at 100.109.119.235:80/TCP\nI0111 20:29:19.049243 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.059017 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-989p5:\" at 100.111.73.68:80/TCP\nI0111 20:29:19.094651 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.105109 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5qfmz:\" at 100.109.157.3:80/TCP\nI0111 20:29:19.135126 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.145803 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6wt25:\" at 100.107.20.220:80/TCP\nI0111 20:29:19.174929 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.189132 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-r9vdn:\" at 100.104.245.110:80/TCP\nI0111 20:29:19.217324 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.256453 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.266674 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-7565s:\" at 100.111.12.117:80/TCP\nI0111 20:29:19.296597 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.307559 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-prmvm:\" at 100.106.81.1:80/TCP\nI0111 20:29:19.337627 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.347926 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-dz8hd:\" at 100.107.78.41:80/TCP\nI0111 20:29:19.377632 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.388194 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bzdlz:\" at 100.109.126.85:80/TCP\nI0111 20:29:19.418340 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.468091 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.488080 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hkxhd:\" at 100.108.205.83:80/TCP\nI0111 20:29:19.488103 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-fxd6l:\" at 100.108.38.225:80/TCP\nI0111 20:29:19.539145 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.549898 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-s65vs:\" at 100.109.41.46:80/TCP\nI0111 20:29:19.580036 
1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.594548 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6mc9h:\" at 100.108.60.188:80/TCP\nI0111 20:29:19.642672 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.653693 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-b6448:\" at 100.110.134.144:80/TCP\nI0111 20:29:19.699597 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.716114 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-55gtf:\" at 100.107.132.2:80/TCP\nI0111 20:29:19.759579 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.776367 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wrknt:\" at 100.107.232.191:80/TCP\nI0111 20:29:19.815504 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.826554 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8md9n:\" at 100.108.157.224:80/TCP\nI0111 20:29:19.868637 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.885719 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6tkm4:\" at 100.109.47.117:80/TCP\nI0111 20:29:19.930615 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:19.945599 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5pmk8:\" at 100.104.139.0:80/TCP\nI0111 20:29:19.945621 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sxsrj:\" at 100.106.241.95:80/TCP\nI0111 20:29:19.985504 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 20:29:20.002816 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 787 failed\n)\nI0111 20:29:20.002891 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 20:29:20.002922 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-q2xhr:\" at 100.109.33.166:80/TCP\nI0111 20:29:20.042399 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.057632 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8hxbr:\" at 100.108.14.244:80/TCP\nI0111 20:29:20.112984 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.141173 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-tqpks:\" at 100.111.101.9:80/TCP\nI0111 20:29:20.141325 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-hksc7:\" at 100.107.253.69:80/TCP\nI0111 20:29:20.196958 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 
20:29:20.215419 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wvx2k:\" at 100.108.121.10:80/TCP\nI0111 20:29:20.266106 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.286688 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-rq5x4:\" at 100.104.140.34:80/TCP\nI0111 20:29:20.333056 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.349099 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-l2w82:\" at 100.106.180.55:80/TCP\nI0111 20:29:20.349121 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vxsbd:\" at 100.108.232.3:80/TCP\nI0111 20:29:20.379711 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.392089 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bw56h:\" at 100.108.110.225:80/TCP\nI0111 20:29:20.427442 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.461537 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-cqvqq:\" at 100.104.92.79:80/TCP\nI0111 20:29:20.518069 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.540014 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-t45wj:\" at 100.108.132.229:80/TCP\nI0111 20:29:20.540120 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-zq9n9:\" at 100.105.131.189:80/TCP\nI0111 20:29:20.582579 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.664449 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.691268 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6f2l9:\" at 100.108.48.179:80/TCP\nI0111 20:29:20.691290 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-mgcrt:\" at 100.111.9.64:80/TCP\nI0111 20:29:20.770841 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.790075 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-7fm9r:\" at 100.111.0.3:80/TCP\nI0111 20:29:20.790097 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jp7dq:\" at 100.105.120.67:80/TCP\nI0111 20:29:20.790108 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-nzg9v:\" at 100.108.215.174:80/TCP\nI0111 20:29:20.838829 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.855811 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c6t4p:\" at 100.109.205.206:80/TCP\nI0111 20:29:20.900276 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:20.977467 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local 
version of iptables does not support it\nI0111 20:29:20.998980 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wkxr8:\" at 100.111.22.54:80/TCP\nI0111 20:29:20.999003 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-ptj6z:\" at 100.105.16.240:80/TCP\nI0111 20:29:20.999014 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-8llmx:\" at 100.106.150.163:80/TCP\nI0111 20:29:21.042495 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.059129 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-dhr5n:\" at 100.108.20.221:80/TCP\nI0111 20:29:21.112817 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.134888 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c7vtv:\" at 100.109.120.118:80/TCP\nI0111 20:29:21.197120 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nE0111 20:29:21.218753 1 proxier.go:1418] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 948 failed\n)\nI0111 20:29:21.218830 1 proxier.go:1421] Closing local ports after iptables-restore failure\nI0111 20:29:21.218868 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-shrdl:\" at 100.108.44.155:80/TCP\nI0111 20:29:21.218886 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-rzpqh:\" at 100.106.186.206:80/TCP\nI0111 20:29:21.256336 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.270624 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-dqmd8:\" at 100.110.13.169:80/TCP\nI0111 20:29:21.311845 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.326245 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-f72wt:\" at 100.108.171.237:80/TCP\nI0111 20:29:21.357236 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.372140 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-j99bh:\" at 100.110.60.255:80/TCP\nI0111 20:29:21.404931 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.419295 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-pssrp:\" at 100.109.198.1:80/TCP\nI0111 20:29:21.450939 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.464803 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-z5cpl:\" at 100.109.2.217:80/TCP\nI0111 20:29:21.516623 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.530463 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9zslb:\" at 100.107.118.24:80/TCP\nI0111 20:29:21.561734 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables 
does not support it\nI0111 20:29:21.575385 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-2rnrs:\" at 100.106.44.212:80/TCP\nI0111 20:29:21.606049 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.619577 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9m945:\" at 100.108.238.138:80/TCP\nI0111 20:29:21.650933 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.665011 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-sr8ck:\" at 100.110.247.217:80/TCP\nI0111 20:29:21.736796 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.763640 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vvf98:\" at 100.110.252.124:80/TCP\nI0111 20:29:21.763661 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-2rxnn:\" at 100.107.60.33:80/TCP\nI0111 20:29:21.795700 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.809855 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5q62h:\" at 100.108.246.174:80/TCP\nI0111 20:29:21.842093 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.863472 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-c9vww:\" at 100.109.238.2:80/TCP\nI0111 20:29:21.895026 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.909470 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-6rzg7:\" at 100.109.162.214:80/TCP\nI0111 20:29:21.940565 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:21.954684 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-bnbzz:\" at 100.105.82.22:80/TCP\nI0111 20:29:21.999503 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.023427 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xk5p9:\" at 100.108.249.86:80/TCP\nI0111 20:29:22.089985 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.124909 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-xc6xg:\" at 100.106.244.154:80/TCP\nI0111 20:29:22.125070 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-mqmb7:\" at 100.104.140.116:80/TCP\nI0111 20:29:22.162964 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.178135 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-42ntw:\" at 100.108.223.123:80/TCP\nI0111 20:29:22.209852 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.224569 1 service.go:357] Adding new 
service port \"svc-latency-7431/latency-svc-45vjl:\" at 100.104.82.145:80/TCP\nI0111 20:29:22.257001 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.273961 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-9vf74:\" at 100.106.216.184:80/TCP\nI0111 20:29:22.309869 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.326911 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-r6bp4:\" at 100.111.63.57:80/TCP\nI0111 20:29:22.359660 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.374992 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-47l2j:\" at 100.104.41.40:80/TCP\nI0111 20:29:22.406824 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.421669 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-jz2fm:\" at 100.109.118.224:80/TCP\nI0111 20:29:22.453991 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.469200 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5vdf6:\" at 100.108.59.15:80/TCP\nI0111 20:29:22.531910 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.559431 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-h7vk6:\" at 100.109.148.180:80/TCP\nI0111 20:29:22.559468 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wf46g:\" at 100.110.53.47:80/TCP\nI0111 20:29:22.614006 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.630128 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vd6nh:\" at 100.104.8.153:80/TCP\nI0111 20:29:22.666507 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.690192 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-cljxl:\" at 100.106.195.228:80/TCP\nI0111 20:29:22.690216 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-wfmp8:\" at 100.110.81.234:80/TCP\nI0111 20:29:22.747311 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.763344 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5h4t8:\" at 100.104.242.85:80/TCP\nI0111 20:29:22.811128 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.826877 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-svsxt:\" at 100.104.253.83:80/TCP\nI0111 20:29:22.874679 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.903551 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-5j26h:\" at 
100.111.212.54:80/TCP\nI0111 20:29:22.903582 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-vchzg:\" at 100.108.89.66:80/TCP\nI0111 20:29:22.956931 1 proxier.go:793] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it\nI0111 20:29:22.986654 1 service.go:357] Adding new service port \"svc-latency-7431/latency-svc-46dd8:\" at 100.104.108.158:80/TCP\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-rq4kf ====\n==== START logs for container metrics-server of pod kube-system/metrics-server-7c797fd994-4x7v9 ====\nI0111 15:56:47.739095 1 manager.go:95] Scraping metrics from 0 sources\nI0111 15:56:47.739171 1 manager.go:150] ScrapeMetrics: time: 867ns, nodes: 0, pods: 0\nI0111 15:56:48.034764 1 secure_serving.go:116] Serving securely on [::]:8443\nI0111 15:57:47.739210 1 manager.go:95] Scraping metrics from 2 sources\nI0111 15:57:47.746352 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 15:57:47.750356 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 15:57:47.784784 1 manager.go:150] ScrapeMetrics: time: 45.544595ms, nodes: 2, pods: 20\nI0111 15:58:47.739193 1 manager.go:95] Scraping metrics from 2 sources\nI0111 15:58:47.750443 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 15:58:47.754580 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 15:58:47.772200 1 manager.go:150] ScrapeMetrics: time: 32.953796ms, nodes: 2, pods: 20\nI0111 15:59:47.739208 1 manager.go:95] Scraping metrics from 2 sources\nI0111 15:59:47.747409 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 15:59:47.754341 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 15:59:47.774870 1 manager.go:150] ScrapeMetrics: time: 35.635395ms, nodes: 2, pods: 20\nI0111 16:00:47.739231 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:00:47.745389 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:00:47.746396 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:00:47.769952 1 manager.go:150] ScrapeMetrics: time: 30.683897ms, nodes: 2, pods: 20\nI0111 16:01:47.739202 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:01:47.744337 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:01:47.747363 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:01:47.778051 1 manager.go:150] ScrapeMetrics: time: 38.816081ms, nodes: 2, pods: 20\nI0111 16:02:27.596391 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 16:02:27.598835 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 16:02:47.739228 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:02:47.744388 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:02:47.751391 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:02:47.774856 1 manager.go:150] ScrapeMetrics: time: 35.579337ms, nodes: 2, pods: 20\nI0111 16:03:47.739211 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:03:47.746488 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:03:47.748377 1 manager.go:120] Querying source: 
kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:03:47.768593 1 manager.go:150] ScrapeMetrics: time: 29.351988ms, nodes: 2, pods: 20\nI0111 16:04:47.739203 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:04:47.741349 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:04:47.752354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:04:47.772695 1 manager.go:150] ScrapeMetrics: time: 33.457285ms, nodes: 2, pods: 20\nI0111 16:05:47.739198 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:05:47.741403 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:05:47.752424 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:05:47.764464 1 manager.go:150] ScrapeMetrics: time: 25.191708ms, nodes: 2, pods: 20\nI0111 16:06:47.739208 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:06:47.742355 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:06:47.758715 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:06:47.763587 1 manager.go:150] ScrapeMetrics: time: 24.345874ms, nodes: 2, pods: 20\nI0111 16:07:47.739201 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:07:47.740340 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:07:47.742373 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:07:47.770510 1 manager.go:150] ScrapeMetrics: time: 31.281588ms, nodes: 2, pods: 20\nI0111 16:08:47.739170 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:08:47.743298 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:08:47.754317 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:08:47.767594 1 manager.go:150] ScrapeMetrics: time: 28.400714ms, nodes: 2, pods: 20\nI0111 16:09:47.739182 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:09:47.744323 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:09:47.748303 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:09:47.770349 1 manager.go:150] ScrapeMetrics: time: 31.141485ms, nodes: 2, pods: 20\nI0111 16:10:47.739202 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:10:47.744449 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:10:47.746471 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:10:47.769033 1 manager.go:150] ScrapeMetrics: time: 29.689189ms, nodes: 2, pods: 20\nI0111 16:11:47.739192 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:11:47.740452 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:11:47.747472 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:11:47.761778 1 manager.go:150] ScrapeMetrics: time: 22.426816ms, nodes: 2, pods: 21\nI0111 16:12:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:12:47.747339 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:12:47.749322 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:12:47.773696 1 manager.go:150] ScrapeMetrics: time: 34.476595ms, nodes: 2, pods: 20\nI0111 16:13:47.739197 1 manager.go:95] Scraping metrics from 2 
sources\nI0111 16:13:47.746374 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:13:47.750653 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:13:47.774013 1 manager.go:150] ScrapeMetrics: time: 34.741248ms, nodes: 2, pods: 21\nI0111 16:14:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:14:47.752517 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:14:47.753521 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:14:47.787154 1 manager.go:150] ScrapeMetrics: time: 47.7488ms, nodes: 2, pods: 20\nI0111 16:15:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:15:47.749366 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:15:47.751534 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:15:47.779900 1 manager.go:150] ScrapeMetrics: time: 40.675956ms, nodes: 2, pods: 20\nE0111 16:15:47.779927 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"querier\" in pod dns-1144/dns-test-6cf1973c-97c2-47ff-a783-5b8a9e645633 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 16:16:47.739230 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:16:47.743357 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:16:47.748479 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:16:47.772441 1 manager.go:150] ScrapeMetrics: time: 33.18461ms, nodes: 2, pods: 20\nI0111 16:17:47.739184 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:17:47.741358 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:17:47.754372 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:17:47.765600 1 manager.go:150] ScrapeMetrics: time: 26.347071ms, nodes: 2, pods: 20\nE0111 16:17:47.765618 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"projected-configmap-volume-test\" in pod projected-4220/pod-projected-configmaps-d680ff88-75f4-420c-9bec-b8f2bb5b3d5b on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 16:18:47.739191 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:18:47.747320 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:18:47.750349 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:18:47.766937 1 manager.go:150] ScrapeMetrics: time: 27.720275ms, nodes: 2, pods: 20\nI0111 16:19:47.739256 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:19:47.745459 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:19:47.749465 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:19:47.760038 1 manager.go:150] ScrapeMetrics: time: 20.710485ms, nodes: 2, pods: 20\nI0111 16:20:47.739163 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:20:47.745296 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:20:47.746296 1 manager.go:120] Querying source: 
kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:20:47.757933 1 manager.go:150] ScrapeMetrics: time: 18.745308ms, nodes: 2, pods: 20\nI0111 16:21:47.739189 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:21:47.746315 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:21:47.748338 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:21:47.755313 1 manager.go:150] ScrapeMetrics: time: 16.103179ms, nodes: 2, pods: 21\nI0111 16:22:47.739174 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:22:47.743302 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:22:47.751337 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:22:47.768216 1 manager.go:150] ScrapeMetrics: time: 29.019915ms, nodes: 2, pods: 20\nI0111 16:23:47.739199 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:23:47.748333 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:23:47.761666 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:23:47.770188 1 manager.go:150] ScrapeMetrics: time: 30.960565ms, nodes: 2, pods: 20\nI0111 16:24:47.739214 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:24:47.748375 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:24:47.752392 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:24:47.773138 1 manager.go:150] ScrapeMetrics: time: 33.898193ms, nodes: 2, pods: 20\nI0111 16:25:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:25:47.742334 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:25:47.748334 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:25:47.777110 1 manager.go:150] ScrapeMetrics: time: 37.88561ms, nodes: 2, pods: 20\nI0111 16:26:47.739181 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:26:47.741314 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:26:47.742321 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:26:47.766576 1 manager.go:150] ScrapeMetrics: time: 27.366843ms, nodes: 2, pods: 20\nI0111 16:27:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:27:47.747400 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:27:47.753435 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:27:47.771720 1 manager.go:150] ScrapeMetrics: time: 32.437708ms, nodes: 2, pods: 20\nI0111 16:28:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:28:47.741332 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:28:47.742336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:28:47.764465 1 manager.go:150] ScrapeMetrics: time: 25.241508ms, nodes: 2, pods: 20\nI0111 16:29:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:29:47.743329 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:29:47.746332 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:29:47.778201 1 manager.go:150] ScrapeMetrics: time: 38.979205ms, nodes: 2, pods: 20\nI0111 16:30:47.739202 1 manager.go:95] Scraping metrics from 2 
sources\nI0111 16:30:47.742331 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:30:47.752583 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:30:47.776304 1 manager.go:150] ScrapeMetrics: time: 37.075756ms, nodes: 2, pods: 20\nI0111 16:31:47.739189 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:31:47.740327 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:31:47.753619 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:31:47.763546 1 manager.go:150] ScrapeMetrics: time: 24.330454ms, nodes: 2, pods: 20\nI0111 16:32:47.739218 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:32:47.750367 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:32:47.752356 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:32:47.777942 1 manager.go:150] ScrapeMetrics: time: 38.688173ms, nodes: 2, pods: 20\nI0111 16:33:47.739192 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:33:47.746328 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:33:47.749329 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:33:47.776799 1 manager.go:150] ScrapeMetrics: time: 37.57874ms, nodes: 2, pods: 20\nE0111 16:33:47.776819 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-mtvm2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-58m7q on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-4h5lf on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-cztv2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-s2zwj on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-rpgq6 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-rblpb on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-w2zn8 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-mtnjx on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-4989/webserver-deployment-595b5b9587-npxl4 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage 
metric]]\nI0111 16:34:47.739198 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:34:47.741333 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:34:47.744568 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:34:47.764871 1 manager.go:150] ScrapeMetrics: time: 25.638871ms, nodes: 2, pods: 20\nI0111 16:35:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:35:47.739266 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:35:47.753344 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:35:47.777051 1 manager.go:150] ScrapeMetrics: time: 37.829044ms, nodes: 2, pods: 21\nI0111 16:36:47.739189 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:36:47.748329 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:36:47.749659 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:36:47.770121 1 manager.go:150] ScrapeMetrics: time: 30.903304ms, nodes: 2, pods: 20\nI0111 16:37:47.739191 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:37:47.745336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:37:47.754347 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:37:47.767004 1 manager.go:150] ScrapeMetrics: time: 27.776475ms, nodes: 2, pods: 20\nI0111 16:38:47.739208 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:38:47.740343 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:38:47.742358 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:38:47.769327 1 manager.go:150] ScrapeMetrics: time: 30.089219ms, nodes: 2, pods: 20\nE0111 16:38:47.769349 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"pod-handle-http-request\" in pod container-lifecycle-hook-9059/pod-handle-http-request on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 16:39:47.739203 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:39:47.746346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:39:47.748403 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:39:47.768041 1 manager.go:150] ScrapeMetrics: time: 28.80922ms, nodes: 2, pods: 20\nE0111 16:39:47.768059 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"sample-apiserver\" in pod aggregator-2165/sample-apiserver-deployment-8447597c78-zhhvt on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 16:40:47.739191 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:40:47.749344 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:40:47.753381 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:40:47.774055 1 manager.go:150] ScrapeMetrics: time: 34.833494ms, nodes: 2, pods: 20\nI0111 16:41:47.739207 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:41:47.752366 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:41:47.754565 1 
manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:41:47.778364 1 manager.go:150] ScrapeMetrics: time: 39.12916ms, nodes: 2, pods: 20\nI0111 16:42:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:42:47.741333 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:42:47.745336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:42:47.767257 1 manager.go:150] ScrapeMetrics: time: 28.031283ms, nodes: 2, pods: 21\nI0111 16:43:47.739217 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:43:47.750376 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:43:47.753405 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:43:47.778943 1 manager.go:150] ScrapeMetrics: time: 39.684021ms, nodes: 2, pods: 20\nE0111 16:43:47.778961 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"delcm-volume-test\" in pod projected-1750/pod-projected-configmaps-54e87962-499f-462a-a8dd-6ba5a363f109 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"createcm-volume-test\" in pod projected-1750/pod-projected-configmaps-54e87962-499f-462a-a8dd-6ba5a363f109 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"updcm-volume-test\" in pod projected-1750/pod-projected-configmaps-54e87962-499f-462a-a8dd-6ba5a363f109 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 16:44:47.739191 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:44:47.743328 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:44:47.744546 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:44:47.754084 1 manager.go:150] ScrapeMetrics: time: 14.86301ms, nodes: 2, pods: 21\nI0111 16:45:47.739215 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:45:47.744419 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:45:47.746359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:45:47.755719 1 manager.go:150] ScrapeMetrics: time: 16.477837ms, nodes: 2, pods: 20\nE0111 16:45:47.755738 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"sample-webhook\" in pod webhook-1264/sample-webhook-deployment-86d95b659d-tjh78 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 16:46:47.739206 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:46:47.750359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:46:47.753345 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:46:47.765242 1 manager.go:150] ScrapeMetrics: time: 26.006458ms, nodes: 2, pods: 20\nE0111 16:46:47.765263 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"sample-webhook\" in pod webhook-3181/sample-webhook-deployment-86d95b659d-4ztw9 on node \"ip-10-250-27-25.ec2.internal\", discarding 
data: missing cpu usage metric\nI0111 16:47:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:47:47.744331 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:47:47.747336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:47:47.756792 1 manager.go:150] ScrapeMetrics: time: 17.567193ms, nodes: 2, pods: 22\nI0111 16:48:47.739193 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:48:47.739248 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:48:47.741317 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:48:47.750684 1 manager.go:150] ScrapeMetrics: time: 11.461849ms, nodes: 2, pods: 20\nI0111 16:49:47.739213 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:49:47.742354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:49:47.751365 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:49:47.765205 1 manager.go:150] ScrapeMetrics: time: 25.953954ms, nodes: 2, pods: 20\nI0111 16:50:47.739204 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:50:47.739257 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:50:47.751344 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:50:47.762483 1 manager.go:150] ScrapeMetrics: time: 23.252195ms, nodes: 2, pods: 20\nI0111 16:51:47.739210 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:51:47.751363 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:51:47.754400 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:51:47.774305 1 manager.go:150] ScrapeMetrics: time: 35.067136ms, nodes: 2, pods: 20\nE0111 16:51:47.774326 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"test-container-subpath-configmap-ws98\" in pod subpath-5293/pod-subpath-test-configmap-ws98 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 16:52:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:52:47.749361 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:52:47.750342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:52:47.774580 1 manager.go:150] ScrapeMetrics: time: 35.355431ms, nodes: 2, pods: 21\nI0111 16:53:47.739205 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:53:47.747379 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:53:47.751387 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:53:47.757139 1 manager.go:150] ScrapeMetrics: time: 17.906463ms, nodes: 2, pods: 20\nI0111 16:54:47.739206 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:54:47.746502 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:54:47.751483 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:54:47.763216 1 manager.go:150] ScrapeMetrics: time: 23.984016ms, nodes: 2, pods: 20\nE0111 16:54:47.763232 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU 
for container \"sample-webhook\" in pod webhook-2228/sample-webhook-deployment-86d95b659d-hwqdj on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 16:55:47.739184 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:55:47.740336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:55:47.754353 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:55:47.769462 1 manager.go:150] ScrapeMetrics: time: 30.231334ms, nodes: 2, pods: 20\nE0111 16:55:47.769484 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"c\" in pod job-4860/foo-xj94k on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"c\" in pod job-4860/foo-pvchl on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 16:56:47.739199 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:56:47.740335 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:56:47.749361 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:56:47.763366 1 manager.go:150] ScrapeMetrics: time: 24.130597ms, nodes: 2, pods: 20\nI0111 16:57:47.739181 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:57:47.741325 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:57:47.741368 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:57:47.756023 1 manager.go:150] ScrapeMetrics: time: 16.81073ms, nodes: 2, pods: 21\nI0111 16:58:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:58:47.749337 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:58:47.754344 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:58:47.766701 1 manager.go:150] ScrapeMetrics: time: 27.476061ms, nodes: 2, pods: 20\nI0111 16:59:47.739228 1 manager.go:95] Scraping metrics from 2 sources\nI0111 16:59:47.749371 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 16:59:47.753405 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 16:59:47.790424 1 manager.go:150] ScrapeMetrics: time: 51.16915ms, nodes: 2, pods: 20\nI0111 17:00:47.739167 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:00:47.746305 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:00:47.753304 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:00:47.771502 1 manager.go:150] ScrapeMetrics: time: 32.30572ms, nodes: 2, pods: 20\nI0111 17:01:47.739198 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:01:47.739360 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:01:47.754405 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:01:47.766160 1 manager.go:150] ScrapeMetrics: time: 26.937283ms, nodes: 2, pods: 20\nE0111 17:01:47.766181 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"busybox-scheduling-1407a1a8-8ca3-4c56-a635-dfed85af34ae\" in pod 
kubelet-test-5453/busybox-scheduling-1407a1a8-8ca3-4c56-a635-dfed85af34ae on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:02:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:02:47.739243 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:02:47.753407 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:02:47.762386 1 manager.go:150] ScrapeMetrics: time: 23.164971ms, nodes: 2, pods: 20\nI0111 17:03:47.739221 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:03:47.746359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:03:47.753372 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:03:47.762282 1 manager.go:150] ScrapeMetrics: time: 23.035536ms, nodes: 2, pods: 21\nI0111 17:04:47.739216 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:04:47.747403 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:04:47.747656 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:04:47.760686 1 manager.go:150] ScrapeMetrics: time: 21.444744ms, nodes: 2, pods: 23\nI0111 17:05:47.739205 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:05:47.740357 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:05:47.754366 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:05:47.766515 1 manager.go:150] ScrapeMetrics: time: 27.281036ms, nodes: 2, pods: 20\nI0111 17:06:47.739191 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:06:47.750330 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:06:47.754351 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:06:47.763829 1 manager.go:150] ScrapeMetrics: time: 24.610226ms, nodes: 2, pods: 20\nI0111 17:07:47.739234 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:07:47.743413 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:07:47.748408 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:07:47.764865 1 manager.go:150] ScrapeMetrics: time: 25.60439ms, nodes: 2, pods: 20\nI0111 17:08:47.739207 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:08:47.739281 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:08:47.754354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:08:47.765036 1 manager.go:150] ScrapeMetrics: time: 25.791898ms, nodes: 2, pods: 20\nI0111 17:09:47.739220 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:09:47.745364 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:09:47.761704 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:09:47.768605 1 manager.go:150] ScrapeMetrics: time: 29.359509ms, nodes: 2, pods: 20\nI0111 17:10:47.739181 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:10:47.746331 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:10:47.747325 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:10:47.774909 1 manager.go:150] ScrapeMetrics: time: 35.698588ms, nodes: 2, pods: 24\nE0111 17:11:32.814143 1 reflector.go:270] 
k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Pod: Get https://100.104.0.1:443/api/v1/pods?resourceVersion=14361&timeout=7m0s&timeoutSeconds=420&watch=true: net/http: TLS handshake timeout\nE0111 17:11:32.819102 1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Node: Get https://100.104.0.1:443/api/v1/nodes?resourceVersion=14344&timeout=7m46s&timeoutSeconds=466&watch=true: net/http: TLS handshake timeout\nE0111 17:11:43.436728 1 webhook.go:196] Failed to make webhook authorizer request: Post https://100.104.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: net/http: TLS handshake timeout\nE0111 17:11:43.436845 1 errors.go:77] Post https://100.104.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: net/http: TLS handshake timeout\nI0111 17:11:43.817001 1 trace.go:81] Trace[1087694162]: \"Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch\" (started: 2020-01-11 17:11:33.814349333 +0000 UTC m=+4498.957679709) (total time: 10.002612461s):\nTrace[1087694162]: [10.002612461s] [10.002612461s] END\nE0111 17:11:43.817021 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Pod: Get https://100.104.0.1:443/api/v1/pods?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:43.822229 1 trace.go:81] Trace[362236829]: \"Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch\" (started: 2020-01-11 17:11:33.819242716 +0000 UTC m=+4498.962573139) (total time: 10.002953623s):\nTrace[362236829]: [10.002953623s] [10.002953623s] END\nE0111 17:11:43.822277 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://100.104.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:47.739206 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:11:47.739302 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:11:47.739446 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:11:47.762215 1 manager.go:150] ScrapeMetrics: time: 22.980731ms, nodes: 2, pods: 24\nI0111 17:11:54.819775 1 trace.go:81] Trace[1503856756]: \"Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch\" (started: 2020-01-11 17:11:44.817161565 +0000 UTC m=+4509.960491929) (total time: 10.002584943s):\nTrace[1503856756]: [10.002584943s] [10.002584943s] END\nE0111 17:11:54.819794 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Pod: Get https://100.104.0.1:443/api/v1/pods?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:11:54.824900 1 trace.go:81] Trace[124395695]: \"Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch\" (started: 2020-01-11 17:11:44.822482891 +0000 UTC m=+4509.965813314) (total time: 10.002396134s):\nTrace[124395695]: [10.002396134s] [10.002396134s] END\nE0111 17:11:54.824915 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://100.104.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:12:05.822894 1 trace.go:81] Trace[2145678666]: \"Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch\" (started: 2020-01-11 17:11:55.819904768 +0000 UTC m=+4520.963235140) (total time: 10.002960329s):\nTrace[2145678666]: [10.002960329s] [10.002960329s] END\nE0111 17:12:05.822914 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Pod: Get 
https://100.104.0.1:443/api/v1/pods?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:12:05.827390 1 trace.go:81] Trace[883776200]: \"Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch\" (started: 2020-01-11 17:11:55.824990129 +0000 UTC m=+4520.968320563) (total time: 10.002386285s):\nTrace[883776200]: [10.002386285s] [10.002386285s] END\nE0111 17:12:05.827404 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://100.104.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: net/http: TLS handshake timeout\nI0111 17:12:47.739192 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:12:47.743326 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:12:47.754809 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:12:47.785321 1 manager.go:150] ScrapeMetrics: time: 46.101058ms, nodes: 2, pods: 26\nE0111 17:12:47.785339 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"agnhost\" in pod pod-network-test-2190/host-test-container-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod pod-network-test-2190/test-container-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 17:13:47.739210 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:13:47.748350 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:13:47.753562 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:13:47.776803 1 manager.go:150] ScrapeMetrics: time: 37.561368ms, nodes: 2, pods: 24\nI0111 17:14:47.739203 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:14:47.746389 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:14:47.754932 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:14:47.775057 1 manager.go:150] ScrapeMetrics: time: 35.788149ms, nodes: 2, pods: 24\nE0111 17:14:47.775075 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"sample-webhook\" in pod webhook-3412/sample-webhook-deployment-86d95b659d-8hgvc on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:15:47.739202 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:15:47.746341 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:15:47.749348 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:15:47.772827 1 manager.go:150] ScrapeMetrics: time: 33.597244ms, nodes: 2, pods: 24\nI0111 17:16:47.739189 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:16:47.739330 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:16:47.746693 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:16:47.762555 1 manager.go:150] ScrapeMetrics: time: 23.3401ms, nodes: 2, pods: 22\nI0111 17:17:47.739220 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:17:47.739290 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:17:47.754365 1 manager.go:120] Querying 
source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:17:47.769530 1 manager.go:150] ScrapeMetrics: time: 30.281587ms, nodes: 2, pods: 22\nI0111 17:18:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:18:47.751336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:18:47.753338 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:18:47.836390 1 manager.go:150] ScrapeMetrics: time: 97.165706ms, nodes: 2, pods: 21\nI0111 17:19:47.739198 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:19:47.742330 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:19:47.754564 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:19:47.786068 1 manager.go:150] ScrapeMetrics: time: 46.843304ms, nodes: 2, pods: 21\nI0111 17:20:47.739206 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:20:47.751357 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:20:47.752596 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:20:47.774617 1 manager.go:150] ScrapeMetrics: time: 35.383058ms, nodes: 2, pods: 21\nI0111 17:21:47.739202 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:21:47.744332 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:21:47.752342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:21:47.794274 1 manager.go:150] ScrapeMetrics: time: 55.045711ms, nodes: 2, pods: 21\nI0111 17:22:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:22:47.740337 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:22:47.750359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:22:47.762730 1 manager.go:150] ScrapeMetrics: time: 23.510073ms, nodes: 2, pods: 21\nI0111 17:23:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:23:47.740335 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:23:47.745347 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:23:47.767989 1 manager.go:150] ScrapeMetrics: time: 28.761974ms, nodes: 2, pods: 22\nI0111 17:24:47.739220 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:24:47.743384 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:24:47.749510 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:24:47.772972 1 manager.go:150] ScrapeMetrics: time: 33.700086ms, nodes: 2, pods: 22\nI0111 17:25:47.739226 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:25:47.747367 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:25:47.747556 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:25:47.770359 1 manager.go:150] ScrapeMetrics: time: 31.103538ms, nodes: 2, pods: 22\nI0111 17:26:47.739210 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:26:47.741346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:26:47.744349 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:26:47.768110 1 manager.go:150] ScrapeMetrics: time: 28.875758ms, nodes: 2, pods: 22\nI0111 17:27:47.739195 1 manager.go:95] Scraping metrics from 2 
sources\nI0111 17:27:47.743329 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:27:47.751299 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:27:47.768410 1 manager.go:150] ScrapeMetrics: time: 29.18698ms, nodes: 2, pods: 21\nE0111 17:27:47.768431 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"busybox\" in pod container-probe-7706/busybox-862f8c2f-3df7-43e9-a21d-05967f17295c on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:28:47.739189 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:28:47.746336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:28:47.749313 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:28:47.772893 1 manager.go:150] ScrapeMetrics: time: 33.673695ms, nodes: 2, pods: 22\nI0111 17:29:47.739179 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:29:47.747445 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:29:47.751594 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:29:47.779610 1 manager.go:150] ScrapeMetrics: time: 40.4063ms, nodes: 2, pods: 22\nI0111 17:30:47.739224 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:30:47.752388 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:30:47.754700 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:30:47.775699 1 manager.go:150] ScrapeMetrics: time: 36.428516ms, nodes: 2, pods: 22\nI0111 17:31:47.739202 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:31:47.751348 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:31:47.752354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:31:47.774752 1 manager.go:150] ScrapeMetrics: time: 35.509062ms, nodes: 2, pods: 21\nI0111 17:32:47.739186 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:32:47.747438 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:32:47.754328 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:32:47.781156 1 manager.go:150] ScrapeMetrics: time: 41.942811ms, nodes: 2, pods: 22\nI0111 17:33:47.739212 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:33:47.748368 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:33:47.752362 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:33:47.775439 1 manager.go:150] ScrapeMetrics: time: 36.200015ms, nodes: 2, pods: 21\nE0111 17:33:47.775460 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"jessie-querier\" in pod dns-5967/dns-test-c23a5c0b-e0d0-4d73-a680-edd25422608a on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"querier\" in pod dns-5967/dns-test-c23a5c0b-e0d0-4d73-a680-edd25422608a on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 17:34:47.739198 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:34:47.744361 1 
manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:34:47.752356 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:34:47.774441 1 manager.go:150] ScrapeMetrics: time: 35.19817ms, nodes: 2, pods: 21\nI0111 17:35:47.739208 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:35:47.746421 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:35:47.754698 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:35:47.838754 1 manager.go:150] ScrapeMetrics: time: 99.51297ms, nodes: 2, pods: 21\nE0111 17:35:47.838795 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"sample-webhook\" in pod webhook-9616/sample-webhook-deployment-86d95b659d-tm4c9 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:36:47.739198 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:36:47.739378 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:36:47.745361 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:36:47.762476 1 manager.go:150] ScrapeMetrics: time: 23.232836ms, nodes: 2, pods: 21\nE0111 17:36:47.762497 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"app\" in pod daemonsets-9149/daemon-set-tsh6s on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:37:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:37:47.745329 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:37:47.751298 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:37:47.778036 1 manager.go:150] ScrapeMetrics: time: 38.808782ms, nodes: 2, pods: 21\nI0111 17:38:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:38:47.743369 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:38:47.744446 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:38:47.766895 1 manager.go:150] ScrapeMetrics: time: 27.672269ms, nodes: 2, pods: 21\nI0111 17:39:47.739202 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:39:47.750502 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:39:47.750508 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:39:47.774581 1 manager.go:150] ScrapeMetrics: time: 35.329639ms, nodes: 2, pods: 21\nI0111 17:40:47.739201 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:40:47.745341 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:40:47.753348 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:40:47.783351 1 manager.go:150] ScrapeMetrics: time: 44.121329ms, nodes: 2, pods: 21\nI0111 17:41:47.739214 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:41:47.744435 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:41:47.745449 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:41:47.768896 1 manager.go:150] ScrapeMetrics: time: 29.5887ms, nodes: 2, pods: 
21\nE0111 17:41:47.768918 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"e2e-test-httpd-deployment\" in pod kubectl-5771/e2e-test-httpd-deployment-594dddd44f-qvbq5 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:42:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:42:47.739244 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:42:47.749366 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:42:47.762430 1 manager.go:150] ScrapeMetrics: time: 23.208294ms, nodes: 2, pods: 22\nI0111 17:43:47.739201 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:43:47.739251 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:43:47.747492 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:43:47.775994 1 manager.go:150] ScrapeMetrics: time: 36.762167ms, nodes: 2, pods: 21\nI0111 17:44:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:44:47.744337 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:44:47.753577 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:44:47.768014 1 manager.go:150] ScrapeMetrics: time: 28.79121ms, nodes: 2, pods: 21\nI0111 17:45:47.739201 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:45:47.743343 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:45:47.752355 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:45:47.779123 1 manager.go:150] ScrapeMetrics: time: 39.893318ms, nodes: 2, pods: 21\nI0111 17:46:47.739220 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:46:47.739276 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:46:47.748018 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:46:47.835973 1 manager.go:150] ScrapeMetrics: time: 96.72289ms, nodes: 2, pods: 21\nE0111 17:46:47.836004 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"pod-handle-http-request\" in pod container-lifecycle-hook-2258/pod-handle-http-request on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:47:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:47:47.751327 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:47:47.753370 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:47:47.775000 1 manager.go:150] ScrapeMetrics: time: 35.778911ms, nodes: 2, pods: 21\nI0111 17:48:47.739217 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:48:47.742351 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:48:47.744348 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:48:47.769766 1 manager.go:150] ScrapeMetrics: time: 30.509061ms, nodes: 2, pods: 21\nI0111 17:49:47.739201 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:49:47.740344 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:49:47.746365 1 manager.go:120] 
Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:49:47.776716 1 manager.go:150] ScrapeMetrics: time: 37.477654ms, nodes: 2, pods: 21\nE0111 17:49:47.776738 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"sample-webhook\" in pod webhook-3373/sample-webhook-deployment-86d95b659d-p2wfd on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:50:47.739184 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:50:47.746308 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:50:47.749470 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:50:47.780987 1 manager.go:150] ScrapeMetrics: time: 41.780526ms, nodes: 2, pods: 21\nI0111 17:51:47.739169 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:51:47.740294 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:51:47.746314 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:51:47.775910 1 manager.go:150] ScrapeMetrics: time: 36.709239ms, nodes: 2, pods: 21\nI0111 17:52:47.739225 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:52:47.740990 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:52:47.750205 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:52:47.776961 1 manager.go:150] ScrapeMetrics: time: 37.625227ms, nodes: 2, pods: 24\nI0111 17:53:47.739208 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:53:47.754355 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:53:47.754355 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:53:47.797027 1 manager.go:150] ScrapeMetrics: time: 57.788059ms, nodes: 2, pods: 22\nE0111 17:53:47.797064 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"webserver\" in pod statefulset-1237/ss2-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod statefulset-1237/ss2-1 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 17:54:47.739201 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:54:47.752344 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:54:47.758065 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:54:47.777537 1 manager.go:150] ScrapeMetrics: time: 38.305562ms, nodes: 2, pods: 23\nE0111 17:54:47.777560 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"agnhost\" in pod pod-network-test-6970/host-test-container-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod pod-network-test-6970/test-container-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 17:55:47.739192 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:55:47.742328 1 manager.go:120] Querying source: 
kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:55:47.750560 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:55:47.774982 1 manager.go:150] ScrapeMetrics: time: 35.760207ms, nodes: 2, pods: 21\nE0111 17:55:47.775005 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"redis\" in pod deployment-3396/test-rollover-deployment-7d7dc6548c-f92lh on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:56:47.739204 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:56:47.739381 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:56:47.747415 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:56:47.767880 1 manager.go:150] ScrapeMetrics: time: 28.6494ms, nodes: 2, pods: 21\nE0111 17:56:47.767899 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"liveness\" in pod container-probe-6029/liveness-0297d234-6fd6-420b-b422-3aa945fc36b2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:57:47.739193 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:57:47.745337 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:57:47.751339 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:57:47.770923 1 manager.go:150] ScrapeMetrics: time: 31.701783ms, nodes: 2, pods: 21\nE0111 17:57:47.770942 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"liveness\" in pod container-probe-6029/liveness-0297d234-6fd6-420b-b422-3aa945fc36b2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 17:58:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:58:47.741334 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:58:47.746365 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:58:47.767108 1 manager.go:150] ScrapeMetrics: time: 27.881495ms, nodes: 2, pods: 22\nI0111 17:59:47.739212 1 manager.go:95] Scraping metrics from 2 sources\nI0111 17:59:47.748365 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 17:59:47.749432 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 17:59:47.776067 1 manager.go:150] ScrapeMetrics: time: 36.820759ms, nodes: 2, pods: 21\nI0111 18:00:47.739209 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:00:47.742359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:00:47.753379 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:00:47.769319 1 manager.go:150] ScrapeMetrics: time: 30.082586ms, nodes: 2, pods: 22\nI0111 18:01:47.739202 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:01:47.741345 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:01:47.741329 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:01:47.766006 1 manager.go:150] ScrapeMetrics: time: 26.775589ms, nodes: 2, pods: 
21\nI0111 18:02:47.739204 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:02:47.749354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:02:47.750610 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:02:47.781583 1 manager.go:150] ScrapeMetrics: time: 42.341467ms, nodes: 2, pods: 21\nE0111 18:02:47.781609 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"httpd\" in pod deployment-9127/test-cleanup-controller-s542s on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 18:03:47.739187 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:03:47.742336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:03:47.748436 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:03:47.770908 1 manager.go:150] ScrapeMetrics: time: 31.671867ms, nodes: 2, pods: 21\nI0111 18:04:47.739192 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:04:47.746462 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:04:47.753614 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:04:47.771012 1 manager.go:150] ScrapeMetrics: time: 31.784121ms, nodes: 2, pods: 21\nI0111 18:05:47.739211 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:05:47.741351 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:05:47.742352 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:05:47.774454 1 manager.go:150] ScrapeMetrics: time: 35.216022ms, nodes: 2, pods: 21\nI0111 18:06:47.739222 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:06:47.739431 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:06:47.746512 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:06:47.769620 1 manager.go:150] ScrapeMetrics: time: 30.217799ms, nodes: 2, pods: 21\nI0111 18:07:47.739184 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:07:47.751353 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:07:47.753356 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:07:47.780051 1 manager.go:150] ScrapeMetrics: time: 40.816037ms, nodes: 2, pods: 21\nI0111 18:08:47.739204 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:08:47.743484 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:08:47.744742 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:08:47.838245 1 manager.go:150] ScrapeMetrics: time: 98.989529ms, nodes: 2, pods: 21\nI0111 18:09:47.739211 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:09:47.742358 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:09:47.751403 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:09:47.766278 1 manager.go:150] ScrapeMetrics: time: 27.028736ms, nodes: 2, pods: 21\nI0111 18:10:47.739180 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:10:47.740313 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:10:47.741317 1 manager.go:120] Querying source: 
kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:10:47.768341 1 manager.go:150] ScrapeMetrics: time: 29.128629ms, nodes: 2, pods: 21\nI0111 18:11:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:11:47.745362 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:11:47.746619 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:11:47.838558 1 manager.go:150] ScrapeMetrics: time: 99.320595ms, nodes: 2, pods: 21\nE0111 18:11:47.838583 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"busybox\" in pod container-probe-4636/busybox-1b8b644c-5eb4-4dbc-8b00-818dc330d3db on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 18:12:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:12:47.747335 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:12:47.748338 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:12:47.782605 1 manager.go:150] ScrapeMetrics: time: 43.380145ms, nodes: 2, pods: 21\nE0111 18:12:47.782642 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"busybox-readonly-fsc1b953ca-5e33-402e-90ba-036cf2cf01b6\" in pod kubelet-test-8900/busybox-readonly-fsc1b953ca-5e33-402e-90ba-036cf2cf01b6 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 18:13:47.739213 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:13:47.746367 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:13:47.752365 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:13:47.783607 1 manager.go:150] ScrapeMetrics: time: 44.34628ms, nodes: 2, pods: 21\nI0111 18:14:47.739198 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:14:47.749342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:14:47.749342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:14:47.773360 1 manager.go:150] ScrapeMetrics: time: 34.136058ms, nodes: 2, pods: 21\nI0111 18:15:47.739209 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:15:47.748362 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:15:47.754581 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:15:47.781026 1 manager.go:150] ScrapeMetrics: time: 41.788569ms, nodes: 2, pods: 21\nE0111 18:15:47.781047 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"update-demo\" in pod kubectl-120/update-demo-nautilus-c2whm on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"update-demo\" in pod kubectl-120/update-demo-nautilus-wl45n on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"update-demo\" in pod kubectl-120/update-demo-kitten-t49jh on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 18:16:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 
18:16:47.739257 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:16:47.747539 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:16:47.770704 1 manager.go:150] ScrapeMetrics: time: 31.483204ms, nodes: 2, pods: 21\nE0111 18:16:47.770727 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"with-labels\" in pod sched-pred-4159/with-labels on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 18:17:47.739213 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:17:47.744364 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:17:47.748361 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:17:47.785027 1 manager.go:150] ScrapeMetrics: time: 45.785559ms, nodes: 2, pods: 24\nE0111 18:17:47.785051 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"php-redis\" in pod kubectl-3929/frontend-79ff456bff-4vx5n on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"php-redis\" in pod kubectl-3929/frontend-79ff456bff-lt8zk on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"php-redis\" in pod kubectl-3929/frontend-79ff456bff-pft59 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 18:18:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:18:47.744332 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:18:47.749338 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:18:47.776134 1 manager.go:150] ScrapeMetrics: time: 36.909933ms, nodes: 2, pods: 21\nI0111 18:19:47.739207 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:19:47.751343 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:19:47.754373 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:19:47.781680 1 manager.go:150] ScrapeMetrics: time: 42.446211ms, nodes: 2, pods: 21\nI0111 18:20:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:20:47.741356 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:20:47.743373 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:20:47.769979 1 manager.go:150] ScrapeMetrics: time: 30.754681ms, nodes: 2, pods: 24\nE0111 18:20:47.770000 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"test-container\" in pod emptydir-wrapper-8723/wrapped-volume-race-14741c3e-688a-4e12-b99c-77d4217de31e-w9nv2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"test-container\" in pod emptydir-wrapper-8723/wrapped-volume-race-14741c3e-688a-4e12-b99c-77d4217de31e-w28kv on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 18:21:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:21:47.750333 1 manager.go:120] Querying source: 
kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:21:47.752327 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:21:47.835366 1 manager.go:150] ScrapeMetrics: time: 96.142234ms, nodes: 2, pods: 26\nI0111 18:22:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:22:47.745320 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:22:47.754531 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:22:47.777268 1 manager.go:150] ScrapeMetrics: time: 38.051085ms, nodes: 2, pods: 21\nI0111 18:23:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:23:47.740325 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:23:47.743284 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:23:47.765182 1 manager.go:150] ScrapeMetrics: time: 25.965353ms, nodes: 2, pods: 21\nI0111 18:24:47.739180 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:24:47.746318 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:24:47.748323 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:24:47.784837 1 manager.go:150] ScrapeMetrics: time: 45.628641ms, nodes: 2, pods: 21\nI0111 18:25:47.739193 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:25:47.743354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:25:47.753592 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:25:47.770378 1 manager.go:150] ScrapeMetrics: time: 31.158849ms, nodes: 2, pods: 24\nI0111 18:26:47.739181 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:26:47.749326 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:26:47.754323 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:26:47.779571 1 manager.go:150] ScrapeMetrics: time: 40.363082ms, nodes: 2, pods: 21\nI0111 18:27:47.739233 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:27:47.748386 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:27:47.751385 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:27:47.834444 1 manager.go:150] ScrapeMetrics: time: 95.159874ms, nodes: 2, pods: 21\nE0111 18:27:47.834479 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"p\" in pod events-706/send-events-a1d2322f-6575-4072-873d-8a2b1a28acea on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 18:28:47.739207 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:28:47.744350 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:28:47.745351 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:28:47.782476 1 manager.go:150] ScrapeMetrics: time: 43.240439ms, nodes: 2, pods: 21\nE0111 18:28:47.782503 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"pod4\" in pod sched-pred-9397/pod4 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 18:29:47.739195 1 manager.go:95] Scraping 
metrics from 2 sources\nI0111 18:29:47.739398 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:29:47.746343 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:29:47.835792 1 manager.go:150] ScrapeMetrics: time: 96.562902ms, nodes: 2, pods: 22\nI0111 18:30:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:30:47.742356 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:30:47.748314 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:30:47.766224 1 manager.go:150] ScrapeMetrics: time: 26.99837ms, nodes: 2, pods: 22\nI0111 18:31:47.739198 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:31:47.749342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:31:47.753501 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:31:47.794494 1 manager.go:150] ScrapeMetrics: time: 55.269176ms, nodes: 2, pods: 22\nI0111 18:32:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:32:47.742334 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:32:47.751524 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:32:47.784266 1 manager.go:150] ScrapeMetrics: time: 45.042603ms, nodes: 2, pods: 22\nI0111 18:33:47.739214 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:33:47.744374 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:33:47.751381 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:33:47.767257 1 manager.go:150] ScrapeMetrics: time: 27.996141ms, nodes: 2, pods: 21\nI0111 18:34:47.739177 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:34:47.745310 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:34:47.750322 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:34:47.771838 1 manager.go:150] ScrapeMetrics: time: 32.633959ms, nodes: 2, pods: 22\nE0111 18:34:47.771856 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"webserver\" in pod statefulset-465/ss2-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 18:35:47.739212 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:35:47.749351 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:35:47.753364 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:35:47.777114 1 manager.go:150] ScrapeMetrics: time: 37.872292ms, nodes: 2, pods: 21\nI0111 18:36:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:36:47.745336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:36:47.750314 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:36:47.794598 1 manager.go:150] ScrapeMetrics: time: 55.372659ms, nodes: 2, pods: 21\nI0111 18:37:47.739200 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:37:47.751345 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:37:47.752354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:37:47.783154 1 
manager.go:150] ScrapeMetrics: time: 43.924586ms, nodes: 2, pods: 21\nI0111 18:38:47.739175 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:38:47.740311 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:38:47.753317 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:38:47.786159 1 manager.go:150] ScrapeMetrics: time: 46.951336ms, nodes: 2, pods: 21\nI0111 18:39:47.739204 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:39:47.749346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:39:47.752570 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:39:47.787941 1 manager.go:150] ScrapeMetrics: time: 48.706737ms, nodes: 2, pods: 22\nI0111 18:40:47.739227 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:40:47.745436 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:40:47.753410 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:40:47.760727 1 manager.go:150] ScrapeMetrics: time: 21.411309ms, nodes: 2, pods: 21\nI0111 18:41:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:41:47.746346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:41:47.747351 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:41:47.757875 1 manager.go:150] ScrapeMetrics: time: 18.650928ms, nodes: 2, pods: 21\nI0111 18:42:47.739186 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:42:47.751320 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:42:47.751320 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:42:47.765336 1 manager.go:150] ScrapeMetrics: time: 26.117504ms, nodes: 2, pods: 21\nI0111 18:43:47.739182 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:43:47.746335 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:43:47.754346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:43:47.761468 1 manager.go:150] ScrapeMetrics: time: 22.245302ms, nodes: 2, pods: 21\nI0111 18:44:47.739178 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:44:47.743310 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:44:47.752311 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:44:47.762009 1 manager.go:150] ScrapeMetrics: time: 22.805563ms, nodes: 2, pods: 21\nI0111 18:45:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:45:47.744342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:45:47.754689 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:45:47.764560 1 manager.go:150] ScrapeMetrics: time: 25.329045ms, nodes: 2, pods: 21\nI0111 18:46:47.739204 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:46:47.748340 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:46:47.751337 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:46:47.758656 1 manager.go:150] ScrapeMetrics: time: 19.426693ms, nodes: 2, pods: 20\nI0111 18:47:47.739202 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:47:47.740342 1 manager.go:120] Querying source: 
kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:47:47.745371 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:47:47.757213 1 manager.go:150] ScrapeMetrics: time: 17.985377ms, nodes: 2, pods: 20\nI0111 18:48:47.739200 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:48:47.742346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:48:47.752351 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:48:47.780213 1 manager.go:150] ScrapeMetrics: time: 40.973562ms, nodes: 2, pods: 20\nI0111 18:49:47.739255 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:49:47.749756 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:49:47.753528 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:49:50.836528 1 manager.go:150] ScrapeMetrics: time: 3.097222683s, nodes: 2, pods: 157\nE0111 18:49:50.836549 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"maxp-43\" in pod sched-pred-1910/maxp-43 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-66\" in pod sched-pred-1910/maxp-66 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-78\" in pod sched-pred-1910/maxp-78 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-47\" in pod sched-pred-1910/maxp-47 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-46\" in pod sched-pred-1910/maxp-46 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-67\" in pod sched-pred-1910/maxp-67 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-59\" in pod sched-pred-1910/maxp-59 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-56\" in pod sched-pred-1910/maxp-56 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-48\" in pod sched-pred-1910/maxp-48 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-45\" in pod sched-pred-1910/maxp-45 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-72\" in pod sched-pred-1910/maxp-72 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-61\" in pod sched-pred-1910/maxp-61 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"maxp-90\" in pod sched-pred-1910/maxp-90 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-27\" in pod sched-pred-1910/maxp-27 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-117\" in pod sched-pred-1910/maxp-117 on node 
\"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-87\" in pod sched-pred-1910/maxp-87 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"maxp-24\" in pod sched-pred-1910/maxp-24 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get a valid timestamp for metric point for container \"maxp-121\" in pod sched-pred-1910/maxp-121 on node \"ip-10-250-27-25.ec2.internal\", discarding data: no non-zero timestamp on either CPU or memory, unable to get CPU for container \"maxp-23\" in pod sched-pred-1910/maxp-23 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]]\nI0111 18:50:47.739221 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:50:47.750437 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:50:47.752672 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:50:47.774414 1 manager.go:150] ScrapeMetrics: time: 35.097921ms, nodes: 2, pods: 20\nI0111 18:51:47.739222 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:51:47.743488 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:51:47.754662 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:51:47.769980 1 manager.go:150] ScrapeMetrics: time: 30.706578ms, nodes: 2, pods: 20\nI0111 18:52:47.739204 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:52:47.743345 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:52:47.750323 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:52:47.772134 1 manager.go:150] ScrapeMetrics: time: 32.903438ms, nodes: 2, pods: 21\nI0111 18:53:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:53:47.752331 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:53:47.753359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:53:47.766463 1 manager.go:150] ScrapeMetrics: time: 27.238382ms, nodes: 2, pods: 20\nI0111 18:54:47.739204 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:54:47.748349 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:54:47.754348 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:54:47.767174 1 manager.go:150] ScrapeMetrics: time: 27.944265ms, nodes: 2, pods: 20\nI0111 18:55:47.739188 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:55:47.743316 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:55:47.752325 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:55:48.199394 1 manager.go:150] ScrapeMetrics: time: 460.177414ms, nodes: 2, pods: 24\nE0111 18:55:48.199417 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"git-repo\" in pod emptydir-wrapper-2029/git-server-63bf8a0f-3eee-4403-8ee3-4accbb8590db on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get a valid timestamp for metric point for container \"test-container\" in pod emptydir-wrapper-2029/wrapped-volume-race-3fe6230d-e2fe-4c11-8208-a41cb010e463-k6cmh on node 
\"ip-10-250-27-25.ec2.internal\", discarding data: no non-zero timestamp on either CPU or memory]\nI0111 18:56:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:56:47.749340 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:56:47.753621 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:56:47.769695 1 manager.go:150] ScrapeMetrics: time: 30.467241ms, nodes: 2, pods: 21\nE0111 18:56:47.769714 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"test-container\" in pod emptydir-wrapper-2029/wrapped-volume-race-91bd5a23-081f-48fe-bb96-5dfcc9890a4d-qhz4k on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"test-container\" in pod emptydir-wrapper-2029/wrapped-volume-race-91bd5a23-081f-48fe-bb96-5dfcc9890a4d-x7wpv on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"test-container\" in pod emptydir-wrapper-2029/wrapped-volume-race-91bd5a23-081f-48fe-bb96-5dfcc9890a4d-phdq8 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"test-container\" in pod emptydir-wrapper-2029/wrapped-volume-race-91bd5a23-081f-48fe-bb96-5dfcc9890a4d-9f92p on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"test-container\" in pod emptydir-wrapper-2029/wrapped-volume-race-91bd5a23-081f-48fe-bb96-5dfcc9890a4d-2kshq on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 18:57:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:57:47.747346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:57:47.750353 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:57:47.760388 1 manager.go:150] ScrapeMetrics: time: 21.16427ms, nodes: 2, pods: 26\nI0111 18:58:47.745120 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:58:47.747279 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:58:47.834536 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:58:47.848079 1 manager.go:150] ScrapeMetrics: time: 102.930056ms, nodes: 2, pods: 20\nI0111 18:59:47.739212 1 manager.go:95] Scraping metrics from 2 sources\nI0111 18:59:47.740371 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 18:59:47.745488 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 18:59:47.758194 1 manager.go:150] ScrapeMetrics: time: 18.940107ms, nodes: 2, pods: 21\nI0111 19:00:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:00:47.741318 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:00:47.753571 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:00:47.766337 1 manager.go:150] ScrapeMetrics: time: 27.120404ms, nodes: 2, pods: 20\nI0111 19:01:47.739199 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:01:47.742336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:01:47.749339 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 
19:01:52.553605 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 19:01:52.553939 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 19:01:56.035990 1 manager.go:150] ScrapeMetrics: time: 8.296761926s, nodes: 2, pods: 20\nW0111 19:02:01.496053 1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Pod ended with: too old resource version: 36609 (36628)\nI0111 19:02:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:02:47.752334 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:02:47.752335 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:02:47.837059 1 manager.go:150] ScrapeMetrics: time: 97.837785ms, nodes: 2, pods: 20\nI0111 19:03:47.739206 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:03:47.747346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:03:47.754701 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:03:47.770086 1 manager.go:150] ScrapeMetrics: time: 30.854735ms, nodes: 2, pods: 20\nE0111 19:03:47.770104 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"pod-with-label-security-s1\" in pod sched-priority-8440/pod-with-label-security-s1 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 19:04:47.739198 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:04:47.745330 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:04:47.753556 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:04:47.778801 1 manager.go:150] ScrapeMetrics: time: 39.576553ms, nodes: 2, pods: 21\nI0111 19:05:47.739216 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:05:47.749373 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:05:47.754409 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:05:47.772336 1 manager.go:150] ScrapeMetrics: time: 33.093647ms, nodes: 2, pods: 21\nI0111 19:06:26.671426 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 19:06:26.671505 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0111 19:06:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:06:47.749356 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:06:47.753352 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:06:47.779440 1 manager.go:150] ScrapeMetrics: time: 40.199902ms, nodes: 2, pods: 20\nI0111 19:07:47.739199 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:07:47.740351 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:07:47.742357 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:07:47.769853 1 manager.go:150] ScrapeMetrics: time: 30.627138ms, nodes: 2, pods: 21\nE0111 19:07:47.769873 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"agnhost\" in pod 
persistent-local-volumes-test-9854/hostexec-ip-10-250-7-77.ec2.internal on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric\nI0111 19:08:47.739237 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:08:47.740392 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:08:47.744419 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:08:47.770746 1 manager.go:150] ScrapeMetrics: time: 31.477346ms, nodes: 2, pods: 22\nI0111 19:09:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:09:47.739263 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:09:47.739421 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:09:47.762812 1 manager.go:150] ScrapeMetrics: time: 23.588079ms, nodes: 2, pods: 22\nI0111 19:10:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:10:47.740345 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:10:47.745342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:10:47.771217 1 manager.go:150] ScrapeMetrics: time: 31.991986ms, nodes: 2, pods: 20\nI0111 19:11:47.739200 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:11:47.744341 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:11:47.751550 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:11:47.775269 1 manager.go:150] ScrapeMetrics: time: 36.041993ms, nodes: 2, pods: 22\nE0111 19:11:47.775292 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"test-container\" in pod emptydir-wrapper-6238/wrapped-volume-race-11efadd1-4d0b-4f26-a129-47c44e55335c-zg6q6 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"test-container\" in pod emptydir-wrapper-6238/wrapped-volume-race-11efadd1-4d0b-4f26-a129-47c44e55335c-cvls7 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"test-container\" in pod emptydir-wrapper-6238/wrapped-volume-race-11efadd1-4d0b-4f26-a129-47c44e55335c-5x9kf on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:12:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:12:47.749329 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:12:47.750331 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:12:47.779309 1 manager.go:150] ScrapeMetrics: time: 40.086626ms, nodes: 2, pods: 25\nI0111 19:13:47.739214 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:13:47.747367 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:13:47.753533 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:13:47.771003 1 manager.go:150] ScrapeMetrics: time: 31.745762ms, nodes: 2, pods: 20\nI0111 19:14:47.739206 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:14:47.748348 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:14:47.753346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:14:47.778599 1 manager.go:150] 
ScrapeMetrics: time: 39.364489ms, nodes: 2, pods: 20\nE0111 19:14:47.778618 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"preemptor-pod\" in pod sched-preemption-7146/preemptor-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"pod1-sched-preemption-medium-priority\" in pod sched-preemption-7146/pod1-sched-preemption-medium-priority on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:15:47.739189 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:15:47.744320 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:15:47.745592 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:15:47.835869 1 manager.go:150] ScrapeMetrics: time: 96.649926ms, nodes: 2, pods: 20\nI0111 19:16:47.739219 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:16:47.740352 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:16:47.743648 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:16:47.767251 1 manager.go:150] ScrapeMetrics: time: 28.002831ms, nodes: 2, pods: 20\nI0111 19:17:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:17:47.746354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:17:47.751361 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:17:47.772143 1 manager.go:150] ScrapeMetrics: time: 32.901841ms, nodes: 2, pods: 20\nE0111 19:17:47.772163 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"pause\" in pod taint-single-pod-2451/taint-eviction-2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 19:18:47.739186 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:18:47.739238 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:18:47.754336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:18:47.780021 1 manager.go:150] ScrapeMetrics: time: 40.807893ms, nodes: 2, pods: 21\nI0111 19:19:47.739211 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:19:47.741348 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:19:47.744334 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:19:47.764866 1 manager.go:150] ScrapeMetrics: time: 25.625669ms, nodes: 2, pods: 20\nE0111 19:19:47.764885 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"with-tolerations\" in pod sched-pred-7636/with-tolerations on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 19:20:47.739189 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:20:47.739385 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:20:47.749390 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:20:47.774294 1 
manager.go:150] ScrapeMetrics: time: 35.073338ms, nodes: 2, pods: 20\nI0111 19:21:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:21:47.747418 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:21:47.750705 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:21:47.771049 1 manager.go:150] ScrapeMetrics: time: 31.74642ms, nodes: 2, pods: 20\nI0111 19:22:47.739194 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:22:47.742327 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:22:47.743587 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:22:47.770313 1 manager.go:150] ScrapeMetrics: time: 31.089977ms, nodes: 2, pods: 20\nE0111 19:22:47.770330 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"754cea51-cd7b-4b59-9d44-4b48d72d114b-0\" in pod sched-priority-741/754cea51-cd7b-4b59-9d44-4b48d72d114b-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"with-tolerations\" in pod sched-priority-741/with-tolerations on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"22cbc05c-3bfb-4768-9eef-b2f836cae1b8-0\" in pod sched-priority-741/22cbc05c-3bfb-4768-9eef-b2f836cae1b8-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:23:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:23:47.741358 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:23:47.750486 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:23:47.768287 1 manager.go:150] ScrapeMetrics: time: 29.063423ms, nodes: 2, pods: 20\nI0111 19:24:47.739209 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:24:47.740477 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:24:47.744378 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:24:47.770278 1 manager.go:150] ScrapeMetrics: time: 31.02408ms, nodes: 2, pods: 20\nI0111 19:25:47.739189 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:25:47.749327 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:25:47.752329 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:25:47.778225 1 manager.go:150] ScrapeMetrics: time: 39.009234ms, nodes: 2, pods: 21\nI0111 19:26:47.739209 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:26:47.753354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:26:47.754544 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:26:47.777318 1 manager.go:150] ScrapeMetrics: time: 38.082451ms, nodes: 2, pods: 20\nI0111 19:27:47.739178 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:27:47.744315 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:27:47.834403 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:27:47.860117 1 manager.go:150] ScrapeMetrics: time: 120.907618ms, nodes: 2, pods: 
20\nI0111 19:28:47.739203 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:28:47.745343 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:28:47.752388 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:28:47.768323 1 manager.go:150] ScrapeMetrics: time: 29.091363ms, nodes: 2, pods: 21\nI0111 19:29:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:29:47.739257 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:29:47.747373 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:29:47.775286 1 manager.go:150] ScrapeMetrics: time: 36.070931ms, nodes: 2, pods: 20\nI0111 19:30:47.739185 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:30:47.739251 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:30:47.747492 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:30:47.776094 1 manager.go:150] ScrapeMetrics: time: 36.882533ms, nodes: 2, pods: 20\nI0111 19:31:47.739186 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:31:47.746335 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:31:47.753333 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:31:48.634712 1 manager.go:150] ScrapeMetrics: time: 895.495646ms, nodes: 2, pods: 39\nE0111 19:31:48.634742 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-53115992-892c-4654-8a4d-6a036ebea26f on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-3a42835c-57e1-496e-9c8c-254dca8a6f4e on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-05ac1d28-ee27-457b-b882-b93cb61e226c on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-b6e321fd-7763-417c-b333-24e5058e40e0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-1d0285f4-73c0-460a-9d1f-be5e554388f6 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-3a169a6a-46d3-4239-85ee-4fdefb17bc02 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-b0d1a6f1-9825-4333-9da8-b46ab60f99c2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-a6181202-a1db-4a6d-b960-cfc7f5f9cc63 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod 
persistent-local-volumes-test-9333/security-context-9c0c00d8-17dd-4c92-ae57-6acfd659a458 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-c48ba6ba-b4b3-4751-95ff-904f2dd31aa9 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-a01586d7-13e5-41c7-b0ae-f5552a180272 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get a valid timestamp for metric point for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-17cf9e12-63bf-4b7d-8ce8-c1afa3557538 on node \"ip-10-250-27-25.ec2.internal\", discarding data: no non-zero timestamp on either CPU or memory, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-05a4192e-e8f0-4dbf-abe5-68c88bddabcf on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-c90655d7-bcf0-4372-a7b5-6d624b230c16 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-9c01030c-91e8-4a24-b7b7-1bd4b529aad6 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-9f9b2c39-3126-424f-89af-3216b9102ff8 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-2f7bcbfb-b2ed-4185-8d0a-b8230a18cd2d on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-ed893798-2c43-48c8-947b-654cfccd7559 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-03ed9297-6cc9-40bc-88ce-1370dd3001aa on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-c60615b7-09fa-4649-8ecb-8f728c2c22f8 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-128dffa9-5fa5-4a0b-ac2e-e1149978ddab on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-0bd0bba3-6e93-4d4a-b38c-75ec0e6b1bfe on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-6087502c-e77a-4314-854e-4e275aabf2a2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-c097bf5b-6ef6-44a8-b9cc-068ccb48abe6 on node 
\"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9333/security-context-e5ba6c97-87db-4321-851e-28a64fcb9b25 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:32:47.739175 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:32:47.743310 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:32:47.752667 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:32:47.767406 1 manager.go:150] ScrapeMetrics: time: 28.204891ms, nodes: 2, pods: 20\nE0111 19:32:47.767425 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"filler-pod-929c2015-3bbd-4225-bf56-7b0601fba7e8\" in pod sched-pred-6831/filler-pod-929c2015-3bbd-4225-bf56-7b0601fba7e8 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"filler-pod-be9644c7-da9e-4eb0-8c37-1feda19f12bb\" in pod sched-pred-6831/filler-pod-be9644c7-da9e-4eb0-8c37-1feda19f12bb on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:33:47.739206 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:33:47.749355 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:33:47.752365 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:33:47.776290 1 manager.go:150] ScrapeMetrics: time: 37.038239ms, nodes: 2, pods: 20\nE0111 19:33:47.776308 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"app\" in pod daemonsets-5878/daemon-set-tjzcc on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 19:34:47.739226 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:34:47.742434 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:34:47.752613 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:34:47.852414 1 manager.go:150] ScrapeMetrics: time: 113.160285ms, nodes: 2, pods: 20\nE0111 19:34:47.852433 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"test-webserver\" in pod container-probe-5574/test-webserver-2aad54db-db36-40fb-bb83-deb356b00ebb on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-6552/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-wmsfz on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-6552/security-context-cc48bf40-13ac-4c04-a6ba-9091c2e61ee7 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"externalname-service\" in pod 
services-8498/externalname-service-65dxv on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-pclhg on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-qz865 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost-pause\" in pod services-8498/execpod82kct on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-2649/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-w92wg on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-9fpjv on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-npccf on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-657cp on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-zfqbc on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"csi-provisioner\" in pod provisioning-888/csi-hostpath-provisioner-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-resizer\" in pod provisioning-888/csi-hostpath-resizer-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-zsg8s on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-b9z8c on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"hostpath\" in pod provisioning-888/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"liveness-probe\" in pod provisioning-888/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"node-driver-registrar\" in pod provisioning-888/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-snapshotter\" in pod provisioning-888/csi-snapshotter-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-8708/rs-g98k5 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"externalname-service\" in pod services-8498/externalname-service-kjtxc on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-attacher\" in pod 
provisioning-888/csi-hostpath-attacher-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]]\nI0111 19:35:47.739183 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:35:47.745417 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:35:47.748715 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:35:47.771253 1 manager.go:150] ScrapeMetrics: time: 31.983699ms, nodes: 2, pods: 22\nE0111 19:35:47.771270 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"nginx\" in pod gc-4317/simpletest.deployment-fb5f5c75d-r4ddg on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"run-log-test\" in pod kubectl-230/run-log-test on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"client-container\" in pod projected-8100/annotationupdate738607b6-7ce5-40e2-a724-b47c635e3a2c on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"sample-webhook\" in pod webhook-9730/sample-webhook-deployment-86d95b659d-8qz45 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"nginx\" in pod gc-4317/simpletest.deployment-fb5f5c75d-xd8nh on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:36:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:36:47.747338 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:36:47.750348 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:36:47.792525 1 manager.go:150] ScrapeMetrics: time: 53.298487ms, nodes: 2, pods: 23\nE0111 19:36:47.792546 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-8319/hostexec-ip-10-250-7-77.ec2.internal on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric\nI0111 19:37:47.739186 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:37:47.742315 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:37:47.744599 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:37:47.792318 1 manager.go:150] ScrapeMetrics: time: 53.105812ms, nodes: 2, pods: 30\nE0111 19:37:47.792338 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"test-container-subpath-csi-hostpath-dynamicpv-wpkq\" in pod provisioning-9667/pod-subpath-test-csi-hostpath-dynamicpv-wpkq on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod provisioning-3332/csi-hostpath-provisioner-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:38:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:38:47.750373 1 manager.go:120] Querying source: 
kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:38:47.751360 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:38:47.835936 1 manager.go:150] ScrapeMetrics: time: 96.689785ms, nodes: 2, pods: 29\nE0111 19:38:47.835975 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"createcm-volume-test\" in pod projected-9280/pod-projected-configmaps-2ae0c067-6d0e-4ca8-9fd3-3bdd0ca22eef on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"updcm-volume-test\" in pod projected-9280/pod-projected-configmaps-2ae0c067-6d0e-4ca8-9fd3-3bdd0ca22eef on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"delcm-volume-test\" in pod projected-9280/pod-projected-configmaps-2ae0c067-6d0e-4ca8-9fd3-3bdd0ca22eef on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:39:47.739221 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:39:47.739386 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:39:47.745492 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:39:47.779579 1 manager.go:150] ScrapeMetrics: time: 40.219202ms, nodes: 2, pods: 28\nE0111 19:39:47.779602 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"update-demo\" in pod kubectl-2618/update-demo-nautilus-rqkwn on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 19:40:47.739181 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:40:47.746326 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:40:47.752342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:40:47.794494 1 manager.go:150] ScrapeMetrics: time: 55.283939ms, nodes: 2, pods: 25\nE0111 19:40:47.794511 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"driver-registrar\" in pod csi-mock-volumes-1062/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod csi-mock-volumes-1062/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"mock\" in pod csi-mock-volumes-1062/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"volume-tester\" in pod csi-mock-volumes-1062/pvc-volume-tester-6rjqv on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-attacher\" in pod csi-mock-volumes-1062/csi-mockplugin-attacher-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:41:47.739208 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:41:47.740349 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:41:47.747346 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:41:47.775689 1 
manager.go:150] ScrapeMetrics: time: 36.447146ms, nodes: 2, pods: 22\nE0111 19:41:47.775723 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-3766/security-context-ef892f2c-64e9-44f9-be4f-d7e90c0481db on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod nettest-7326/netserver-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-3766/security-context-a1ad5e08-87d1-417f-9c4b-6395135e2d12 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-657/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-3766/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:42:47.739178 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:42:47.740317 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:42:47.749319 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:42:47.783073 1 manager.go:150] ScrapeMetrics: time: 43.863075ms, nodes: 2, pods: 23\nE0111 19:42:47.783093 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"csi-snapshotter\" in pod volume-expand-8983/csi-snapshotter-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-attacher\" in pod volume-expand-8983/csi-hostpath-attacher-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"node-driver-registrar\" in pod volume-expand-8983/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"hostpath\" in pod volume-expand-8983/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"liveness-probe\" in pod volume-expand-8983/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-resizer\" in pod volume-expand-8983/csi-hostpath-resizer-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod volume-expand-8983/csi-hostpath-provisioner-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:43:47.739190 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:43:47.747358 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:43:47.751345 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:43:47.784395 1 manager.go:150] ScrapeMetrics: time: 45.159946ms, nodes: 2, pods: 29\nE0111 19:43:47.784416 1 manager.go:111] unable to fully collect metrics: unable to fully scrape 
metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"webserver\" in pod statefulset-6005/ss2-1 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-snapshotter\" in pod provisioning-6240/csi-snapshotter-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod provisioning-6240/csi-hostpath-provisioner-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-resizer\" in pod provisioning-6240/csi-hostpath-resizer-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"liveness-probe\" in pod provisioning-6240/csi-hostpathplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"hostpath\" in pod provisioning-6240/csi-hostpathplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"node-driver-registrar\" in pod provisioning-6240/csi-hostpathplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-attacher\" in pod provisioning-6240/csi-hostpath-attacher-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:44:47.739199 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:44:47.744385 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:44:47.749343 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:44:47.780265 1 manager.go:150] ScrapeMetrics: time: 41.021793ms, nodes: 2, pods: 25\nE0111 19:44:47.780284 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-2002/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod nettest-2130/netserver-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-2002/security-context-5d0ec471-88f7-4180-b83d-ff3208187c00 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"webserver\" in pod nettest-2130/netserver-1 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:45:47.739209 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:45:47.743574 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:45:47.750805 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:45:47.834401 1 manager.go:150] ScrapeMetrics: time: 94.929984ms, nodes: 2, pods: 25\nE0111 19:45:47.835427 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-7579/hostexec-ip-10-250-27-25.ec2.internal 
on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-7579/security-context-32c59899-0cdc-466a-adb1-7ff50e25f0d7 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:46:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:46:47.741334 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:46:47.742336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:46:47.776729 1 manager.go:150] ScrapeMetrics: time: 37.500952ms, nodes: 2, pods: 27\nE0111 19:46:47.776748 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"agnhost-pause\" in pod services-1603/execpod-affinityvk9gf on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"affinity-clusterip\" in pod services-1603/affinity-clusterip-tzllm on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"affinity-clusterip\" in pod services-1603/affinity-clusterip-s594x on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-498/webserver-64dbff79df-sdft9 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"affinity-clusterip\" in pod services-1603/affinity-clusterip-gfdws on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-498/webserver-64dbff79df-fhrkm on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"httpd\" in pod deployment-498/webserver-64dbff79df-qt6s6 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"httpd\" in pod deployment-498/webserver-64dbff79df-2p9z2 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:47:47.739994 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:47:47.740131 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:47:47.741618 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:47:47.781041 1 manager.go:150] ScrapeMetrics: time: 40.946862ms, nodes: 2, pods: 22\nE0111 19:47:47.781062 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"agnhost-pause\" in pod services-9378/execpodtmgk8 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 19:48:47.739193 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:48:47.740356 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:48:47.753369 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:48:47.838235 1 manager.go:150] ScrapeMetrics: time: 98.992683ms, nodes: 2, pods: 26\nE0111 19:48:47.838258 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics 
from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"csi-attacher\" in pod csi-mock-volumes-2239/csi-mockplugin-attacher-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"driver-registrar\" in pod csi-mock-volumes-2239/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod csi-mock-volumes-2239/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"mock\" in pod csi-mock-volumes-2239/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod csi-mock-volumes-795/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"mock\" in pod csi-mock-volumes-795/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"driver-registrar\" in pod csi-mock-volumes-795/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:49:47.739261 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:49:47.744452 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:49:47.751984 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:49:47.788563 1 manager.go:150] ScrapeMetrics: time: 49.270788ms, nodes: 2, pods: 27\nE0111 19:49:47.788582 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"agnhost\" in pod nettest-941/host-test-container-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod statefulset-8024/ss2-1 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod statefulset-8024/ss2-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"sample-webhook\" in pod webhook-5741/sample-webhook-deployment-86d95b659d-swl8z on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod nettest-941/test-container-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:50:47.739183 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:50:47.739236 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:50:47.748323 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:50:48.136280 1 manager.go:150] ScrapeMetrics: time: 397.065975ms, nodes: 2, pods: 31\nE0111 19:50:48.136322 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"liveness\" in pod container-probe-8415/liveness-d9c04d87-22d3-4723-91d9-3bcb6c488d03 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: 
[unable to get CPU for container \"test-container-volume-hostpath-67j8\" in pod provisioning-8361/pod-subpath-test-hostpath-67j8 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod volumemode-2792/csi-hostpath-provisioner-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]]\nI0111 19:51:47.739197 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:51:47.745359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:51:47.747359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:51:47.781178 1 manager.go:150] ScrapeMetrics: time: 41.932558ms, nodes: 2, pods: 30\nI0111 19:52:47.739179 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:52:47.740299 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:52:47.750324 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:52:47.779506 1 manager.go:150] ScrapeMetrics: time: 40.303742ms, nodes: 2, pods: 26\nE0111 19:52:47.779525 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"test-container-1\" in pod hostpath-1462/pod-host-path-test on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 19:53:47.739176 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:53:47.742321 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:53:47.745393 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:53:47.770975 1 manager.go:150] ScrapeMetrics: time: 31.755667ms, nodes: 2, pods: 25\nE0111 19:53:47.770995 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"webserver\" in pod nettest-5543/netserver-1 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"externalname-service\" in pod services-3432/externalname-service-46v5d on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"client-container\" in pod projected-1847/labelsupdate408b13c6-acd6-40e1-b582-70203e36ef0f on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"externalname-service\" in pod services-3432/externalname-service-cjz2m on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:54:47.739207 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:54:47.745386 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:54:47.751429 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:54:47.785251 1 manager.go:150] ScrapeMetrics: time: 46.018576ms, nodes: 2, pods: 26\nE0111 19:54:47.785268 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"image-pull-test\" in pod container-runtime-5766/image-pull-test7833ca22-5d91-4616-be18-82dfcf6d67b7 on node 
\"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"csi-attacher\" in pod csi-mock-volumes-8663/csi-mockplugin-attacher-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"nginx\" in pod prestop-8721/pod-prestop-hook-38cef27e-a245-4d51-9f9c-605eddd37573 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod csi-mock-volumes-8663/csi-mockplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"driver-registrar\" in pod csi-mock-volumes-8663/csi-mockplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"mock\" in pod csi-mock-volumes-8663/csi-mockplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-resizer\" in pod csi-mock-volumes-8663/csi-mockplugin-resizer-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]]\nI0111 19:55:47.739214 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:55:47.749354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:55:47.754374 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:55:47.810330 1 manager.go:150] ScrapeMetrics: time: 71.075033ms, nodes: 2, pods: 25\nE0111 19:55:47.810355 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"csi-resizer\" in pod csi-mock-volumes-4249/csi-mockplugin-resizer-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"driver-registrar\" in pod csi-mock-volumes-4249/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"mock\" in pod csi-mock-volumes-4249/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod csi-mock-volumes-4249/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"volume-tester\" in pod csi-mock-volumes-4249/pvc-volume-tester-49x8p on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:56:47.739189 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:56:47.748336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:56:47.750331 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:56:47.776657 1 manager.go:150] ScrapeMetrics: time: 37.439932ms, nodes: 2, pods: 24\nE0111 19:56:47.776675 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-1290/security-context-93fead58-b5f8-48ee-9d8d-197211e02c78 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost\" in pod 
persistent-local-volumes-test-1290/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:57:47.739199 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:57:47.747352 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:57:47.754351 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:57:47.772859 1 manager.go:150] ScrapeMetrics: time: 33.62649ms, nodes: 2, pods: 26\nE0111 19:57:47.772879 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"liveness-probe\" in pod provisioning-2263/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"node-driver-registrar\" in pod provisioning-2263/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"hostpath\" in pod provisioning-2263/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod provisioning-2263/csi-hostpath-provisioner-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"jessie-querier\" in pod dns-1736/dns-test-db35f479-a1b0-429c-8955-dc9ef9b39129 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod dns-1736/dns-test-db35f479-a1b0-429c-8955-dc9ef9b39129 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"querier\" in pod dns-1736/dns-test-db35f479-a1b0-429c-8955-dc9ef9b39129 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-snapshotter\" in pod provisioning-2263/csi-snapshotter-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-attacher\" in pod provisioning-2263/csi-hostpath-attacher-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-resizer\" in pod provisioning-2263/csi-hostpath-resizer-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:58:47.739241 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:58:47.743383 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:58:47.750421 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:58:47.781865 1 manager.go:150] ScrapeMetrics: time: 42.574442ms, nodes: 2, pods: 25\nE0111 19:58:47.781887 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"webserver\" in pod statefulset-5504/ss2-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod statefulset-5504/ss2-2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"webserver\" in pod 
statefulset-5504/ss2-1 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 19:59:47.739192 1 manager.go:95] Scraping metrics from 2 sources\nI0111 19:59:47.740333 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 19:59:47.750352 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 19:59:47.770887 1 manager.go:150] ScrapeMetrics: time: 31.66707ms, nodes: 2, pods: 28\nE0111 19:59:47.770907 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"dapi-container\" in pod var-expansion-2244/var-expansion-f9086647-2328-41bd-b14f-7fe95b086b20 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod statefulset-5504/ss2-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:00:47.739180 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:00:47.743312 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:00:47.753574 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:00:47.772718 1 manager.go:150] ScrapeMetrics: time: 33.505889ms, nodes: 2, pods: 22\nE0111 20:00:47.772741 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"httpd\" in pod kubectl-5656/httpd on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:01:47.739196 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:01:47.741344 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:01:47.745354 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:01:47.768813 1 manager.go:150] ScrapeMetrics: time: 29.588531ms, nodes: 2, pods: 21\nE0111 20:01:47.768833 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"affinity-nodeport-transition\" in pod services-3570/affinity-nodeport-transition-cj2fd on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod nettest-6140/netserver-1 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost-pause\" in pod services-3570/execpod-affinity2hdwk on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"affinity-nodeport-transition\" in pod services-3570/affinity-nodeport-transition-lbqg2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"affinity-nodeport-transition\" in pod services-3570/affinity-nodeport-transition-ksppn on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:02:47.739199 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:02:47.745336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:02:47.750863 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:02:47.834847 1 manager.go:150] ScrapeMetrics: time: 95.194276ms, nodes: 
2, pods: 22\nE0111 20:02:47.834897 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"test-container-subpath-emptydir-nx47\" in pod provisioning-702/pod-subpath-test-emptydir-nx47 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"csi-resizer\" in pod provisioning-5877/csi-hostpath-resizer-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod provisioning-5877/csi-hostpath-provisioner-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-attacher\" in pod provisioning-5877/csi-hostpath-attacher-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"hostpath\" in pod provisioning-5877/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"node-driver-registrar\" in pod provisioning-5877/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"liveness-probe\" in pod provisioning-5877/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-snapshotter\" in pod provisioning-5877/csi-snapshotter-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]]\nI0111 20:03:47.739193 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:03:47.751335 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:03:47.753611 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:03:47.834549 1 manager.go:150] ScrapeMetrics: time: 95.32255ms, nodes: 2, pods: 36\nI0111 20:04:47.739187 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:04:47.745404 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:04:47.751332 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:04:47.779162 1 manager.go:150] ScrapeMetrics: time: 39.948431ms, nodes: 2, pods: 25\nI0111 20:05:47.739216 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:05:47.743363 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:05:47.748370 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:05:47.776562 1 manager.go:150] ScrapeMetrics: time: 37.301141ms, nodes: 2, pods: 22\nE0111 20:05:47.776580 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"busybox\" in pod disruption-7899/pod-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"busybox\" in pod disruption-7899/pod-2 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:06:47.739210 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:06:47.747350 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:06:47.749562 1 
manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:06:47.791035 1 manager.go:150] ScrapeMetrics: time: 51.793249ms, nodes: 2, pods: 22\nE0111 20:06:47.791057 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"csi-attacher\" in pod volume-expand-8205/csi-hostpath-attacher-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-resizer\" in pod volume-expand-8205/csi-hostpath-resizer-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"node-driver-registrar\" in pod volume-expand-8205/csi-hostpathplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"hostpath\" in pod volume-expand-8205/csi-hostpathplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"liveness-probe\" in pod volume-expand-8205/csi-hostpathplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-snapshotter\" in pod volume-expand-8205/csi-snapshotter-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-provisioner\" in pod volume-expand-8205/csi-hostpath-provisioner-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: [unable to get CPU for container \"liveness-probe\" in pod provisioning-1947/csi-hostpathplugin-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"nfs-server\" in pod pv-6847/nfs-server on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"test-container-volume-csi-hostpath-dynamicpv-vhcm\" in pod provisioning-1947/pod-subpath-test-csi-hostpath-dynamicpv-vhcm on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-snapshotter\" in pod provisioning-1947/csi-snapshotter-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-attacher\" in pod provisioning-1947/csi-hostpath-attacher-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]]\nI0111 20:07:47.739359 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:07:47.742545 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:07:47.742585 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:07:47.804020 1 manager.go:150] ScrapeMetrics: time: 64.630876ms, nodes: 2, pods: 29\nE0111 20:07:47.804040 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"test-webserver\" in pod container-probe-9477/test-webserver-6eaad1be-adb4-4751-973d-fa5a153aee9a on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:08:47.739188 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:08:47.748319 1 manager.go:120] Querying source: 
kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:08:47.752352 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:08:47.798107 1 manager.go:150] ScrapeMetrics: time: 58.877705ms, nodes: 2, pods: 26\nE0111 20:08:47.798129 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"webserver\" in pod pod-network-test-5211/netserver-1 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:09:47.739186 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:09:47.744335 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:09:47.745342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:09:47.775757 1 manager.go:150] ScrapeMetrics: time: 36.542138ms, nodes: 2, pods: 23\nE0111 20:09:47.775776 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"test-container-subpath-hostpathsymlink-l2f6\" in pod provisioning-2677/pod-subpath-test-hostpathsymlink-l2f6 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"sample-webhook\" in pod webhook-9977/sample-webhook-deployment-86d95b659d-mkmwj on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"nodeport-test\" in pod services-8930/nodeport-test-7rwdn on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod statefulset-1265/ss-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"nodeport-test\" in pod services-8930/nodeport-test-p25l6 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-9335/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost-pause\" in pod services-8930/execpod9xn5d on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-9335/security-context-69511bb5-22a3-4548-8a50-223fd7683e7f on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]]\nI0111 20:10:47.739243 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:10:47.748421 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:10:47.753427 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:10:47.790053 1 manager.go:150] ScrapeMetrics: time: 50.754611ms, nodes: 2, pods: 26\nE0111 20:10:47.790076 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"webserver\" in pod statefulset-1265/ss-2 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"hostpath-client\" in pod volume-5889/hostpath-client on node 
\"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:11:47.739195 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:11:47.740348 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:11:47.749356 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:11:47.779471 1 manager.go:150] ScrapeMetrics: time: 40.228346ms, nodes: 2, pods: 25\nE0111 20:11:47.779490 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"c\" in pod job-3861/all-pods-removed-kw2zc on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod statefulset-1265/ss-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:12:47.739205 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:12:47.740338 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:12:47.748347 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:12:47.783853 1 manager.go:150] ScrapeMetrics: time: 44.614759ms, nodes: 2, pods: 26\nE0111 20:12:47.783876 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"httpd\" in pod kubectl-9458/httpd on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:13:47.739180 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:13:47.741327 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:13:47.753336 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:13:47.803505 1 manager.go:150] ScrapeMetrics: time: 64.29015ms, nodes: 2, pods: 38\nE0111 20:13:47.803524 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"csi-provisioner\" in pod provisioning-5271/csi-hostpath-provisioner-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost\" in pod pod-network-test-7598/host-test-container-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-resizer\" in pod provisioning-5271/csi-hostpath-resizer-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-7832/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"hostpath\" in pod provisioning-5271/csi-hostpathplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"liveness-probe\" in pod provisioning-5271/csi-hostpathplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"node-driver-registrar\" in pod provisioning-5271/csi-hostpathplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: 
[unable to get CPU for container \"pod3\" in pod sched-preemption-path-4134/rs-pod3-8rjzv on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"pod3\" in pod sched-preemption-path-4134/rs-pod3-82s8b on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod pod-network-test-7598/test-container-pod on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"pod3\" in pod sched-preemption-path-4134/rs-pod3-lrvrm on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"pod3\" in pod sched-preemption-path-4134/rs-pod3-q6mt6 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]]\nI0111 20:14:47.739188 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:14:47.739239 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:14:47.754389 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:14:47.781723 1 manager.go:150] ScrapeMetrics: time: 42.501115ms, nodes: 2, pods: 25\nE0111 20:14:47.781748 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"sample-webhook\" in pod webhook-1622/sample-webhook-deployment-86d95b659d-rd87t on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:15:47.739229 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:15:47.740434 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:15:47.746621 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:15:47.838402 1 manager.go:150] ScrapeMetrics: time: 99.114423ms, nodes: 2, pods: 29\nE0111 20:15:47.838433 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"csi-provisioner\" in pod volumemode-2239/csi-hostpath-provisioner-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-snapshotter\" in pod volumemode-2239/csi-snapshotter-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod nettest-3621/netserver-1 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-resizer\" in pod volumemode-2239/csi-hostpath-resizer-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:16:47.739205 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:16:47.739270 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:16:47.743518 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:16:47.775734 1 manager.go:150] ScrapeMetrics: time: 36.500152ms, nodes: 2, pods: 22\nE0111 20:16:47.775754 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"webserver\" in pod nettest-1791/netserver-1 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to 
fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"webserver\" in pod nettest-1791/netserver-0 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:17:47.739188 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:17:47.741329 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:17:47.750335 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:17:47.783900 1 manager.go:150] ScrapeMetrics: time: 44.685216ms, nodes: 2, pods: 28\nE0111 20:17:47.783923 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"pod-handle-http-request\" in pod container-lifecycle-hook-8897/pod-handle-http-request on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:18:47.739182 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:18:47.750332 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:18:47.752564 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:18:47.792393 1 manager.go:150] ScrapeMetrics: time: 53.178339ms, nodes: 2, pods: 26\nE0111 20:18:47.792416 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"hostpath-io-client\" in pod volumeio-8041/hostpath-io-client on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:19:47.739192 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:19:47.743468 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:19:47.751685 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:19:47.935122 1 manager.go:150] ScrapeMetrics: time: 195.898762ms, nodes: 2, pods: 32\nE0111 20:19:47.935144 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"c\" in pod job-3813/adopt-release-5nmxj on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:20:47.739180 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:20:47.745358 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:20:47.751359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:20:47.771960 1 manager.go:150] ScrapeMetrics: time: 32.750043ms, nodes: 2, pods: 23\nE0111 20:20:47.771983 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"liveness\" in pod container-probe-8193/liveness-81af33dc-8925-4583-8828-cf006b5db52e on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:21:47.739216 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:21:47.745359 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:21:47.746366 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:21:47.836793 1 manager.go:150] ScrapeMetrics: time: 97.547635ms, nodes: 2, pods: 24\nE0111 20:21:47.836817 1 manager.go:111] unable to fully 
collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"busybox\" in pod disruption-8214/pod-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:22:47.739213 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:22:47.746352 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:22:47.751372 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:22:47.781151 1 manager.go:150] ScrapeMetrics: time: 41.909986ms, nodes: 2, pods: 26\nE0111 20:22:47.781186 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-746/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\nI0111 20:23:47.739150 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:23:47.739200 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:23:47.745294 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:23:47.775243 1 manager.go:150] ScrapeMetrics: time: 36.06577ms, nodes: 2, pods: 26\nE0111 20:23:47.775263 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"webserver\" in pod statefulset-4/ss-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"failure-2\" in pod kubectl-8905/failure-2-d7msk on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:24:47.739204 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:24:47.743332 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:24:47.753342 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:24:47.786472 1 manager.go:150] ScrapeMetrics: time: 47.244004ms, nodes: 2, pods: 26\nE0111 20:24:47.786491 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"redis\" in pod deployment-5905/test-rolling-update-deployment-55d946486-mzh95 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"webserver\" in pod statefulset-4/ss-1 on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:25:47.739183 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:25:47.746327 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:25:47.747556 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:25:47.776027 1 manager.go:150] ScrapeMetrics: time: 36.815489ms, nodes: 2, pods: 24\nE0111 20:25:47.776046 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"test-container-subpath-hostpath-dzdn\" in pod provisioning-4456/pod-subpath-test-hostpath-dzdn on node \"ip-10-250-7-77.ec2.internal\", discarding 
data: missing cpu usage metric\nI0111 20:26:47.739182 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:26:47.745317 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:26:47.754576 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:26:47.782966 1 manager.go:150] ScrapeMetrics: time: 43.754233ms, nodes: 2, pods: 26\nE0111 20:26:47.782988 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"agnhost\" in pod persistent-local-volumes-test-1786/hostexec-ip-10-250-27-25.ec2.internal on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"write-pod\" in pod persistent-local-volumes-test-1786/security-context-7e234d2b-4162-4b36-a12b-a5c764a2df06 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"webserver\" in pod pod-network-test-7882/test-container-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"agnhost\" in pod pod-network-test-7882/host-test-container-pod on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]\nI0111 20:27:47.739185 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:27:47.741334 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:27:47.748339 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:27:47.783190 1 manager.go:150] ScrapeMetrics: time: 43.976626ms, nodes: 2, pods: 23\nE0111 20:27:47.783209 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-250-7-77.ec2.internal: unable to get CPU for container \"test-container-subpath-emptydir-svsj\" in pod provisioning-1360/pod-subpath-test-emptydir-svsj on node \"ip-10-250-7-77.ec2.internal\", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: [unable to get CPU for container \"csi-provisioner\" in pod csi-mock-volumes-104/csi-mockplugin-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-attacher\" in pod csi-mock-volumes-104/csi-mockplugin-attacher-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric, unable to get CPU for container \"csi-resizer\" in pod csi-mock-volumes-104/csi-mockplugin-resizer-0 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric]]\nI0111 20:28:47.739188 1 manager.go:95] Scraping metrics from 2 sources\nI0111 20:28:47.741350 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-27-25.ec2.internal\nI0111 20:28:47.743600 1 manager.go:120] Querying source: kubelet_summary:ip-10-250-7-77.ec2.internal\nI0111 20:28:47.771034 1 manager.go:150] ScrapeMetrics: time: 31.787868ms, nodes: 2, pods: 21\nE0111 20:28:47.771054 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-250-27-25.ec2.internal: unable to get CPU for container \"nginx\" in pod pods-9564/pod-update-2b9cb251-cfb3-4f33-8b4a-3148ff4b0d89 on node \"ip-10-250-27-25.ec2.internal\", discarding data: missing cpu usage metric\n==== END logs for container metrics-server of pod 
kube-system/metrics-server-7c797fd994-4x7v9 ====\n==== START logs for container node-exporter of pod kube-system/node-exporter-gp57h ====\n==== END logs for container node-exporter of pod kube-system/node-exporter-gp57h ====\n==== START logs for container node-exporter of pod kube-system/node-exporter-l6q84 ====\n==== END logs for container node-exporter of pod kube-system/node-exporter-l6q84 ====\n==== START logs for container node-problem-detector of pod kube-system/node-problem-detector-9z5sq ====\nI0111 15:56:23.373195 1 custom_plugin_monitor.go:81] Finish parsing custom plugin monitor config file /config/kernel-monitor-counter.json: {Plugin:custom PluginGlobalConfig:{InvokeIntervalString:0xc000329330 TimeoutString:0xc000329340 InvokeInterval:5m0s Timeout:1m0s MaxOutputLength:0xc00022d5e0 Concurrency:0xc00022d5f0 EnableMessageChangeBasedConditionUpdate:0x1e125a4} Source:kernel-monitor DefaultConditions:[{Type:FrequentUnregisterNetDevice Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}] Rules:[0xc0001473b0] EnableMetricsReporting:0xc00022d5f8}\nI0111 15:56:23.373387 1 custom_plugin_monitor.go:81] Finish parsing custom plugin monitor config file /config/systemd-monitor-counter.json: {Plugin:custom PluginGlobalConfig:{InvokeIntervalString:0xc0003293f0 TimeoutString:0xc000329400 InvokeInterval:5m0s Timeout:1m0s MaxOutputLength:0xc00022d7b0 Concurrency:0xc00022d7c0 EnableMessageChangeBasedConditionUpdate:0x1e125a4} Source:systemd-monitor DefaultConditions:[{Type:FrequentKubeletRestart Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}] Rules:[0xc000147490 0xc000147500 0xc000147570] EnableMetricsReporting:0xc00022d7c8}\nI0111 15:56:23.373702 1 log_monitor.go:79] Finish parsing log monitor config file /config/kernel-monitor.json: {WatcherConfig:{Plugin:kmsg PluginConfig:map[] LogPath:/dev/kmsg Lookback:5m Delay:} BufferSize:10 Source:kernel-monitor DefaultConditions:[{Type:KernelDeadlock Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:KernelHasNoDeadlock Message:kernel has no deadlock} {Type:ReadonlyFilesystem Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:FilesystemIsNotReadOnly Message:Filesystem is not read-only}] Rules:[{Type:temporary Condition: Reason:OOMKilling Pattern:Kill process \\d+ (.+) score \\d+ or sacrifice child\\nKilled process \\d+ (.+) total-vm:\\d+kB, anon-rss:\\d+kB, file-rss:\\d+kB.*} {Type:temporary Condition: Reason:TaskHung Pattern:task \\S+:\\w+ blocked for more than \\w+ seconds\\.} {Type:temporary Condition: Reason:UnregisterNetDevice Pattern:unregister_netdevice: waiting for \\w+ to become free. 
Usage count = \\d+} {Type:temporary Condition: Reason:KernelOops Pattern:BUG: unable to handle kernel NULL pointer dereference at .*} {Type:temporary Condition: Reason:KernelOops Pattern:divide error: 0000 \\[#\\d+\\] SMP} {Type:permanent Condition:KernelDeadlock Reason:AUFSUmountHung Pattern:task umount\\.aufs:\\w+ blocked for more than \\w+ seconds\\.} {Type:permanent Condition:KernelDeadlock Reason:DockerHung Pattern:task docker:\\w+ blocked for more than \\w+ seconds\\.} {Type:permanent Condition:ReadonlyFilesystem Reason:FilesystemIsReadOnly Pattern:Remounting filesystem read-only}] EnableMetricsReporting:0xc00022deb0}\nI0111 15:56:23.373744 1 log_watchers.go:40] Use log watcher of plugin \"kmsg\"\nI0111 15:56:23.373906 1 log_monitor.go:79] Finish parsing log monitor config file /config/docker-monitor.json: {WatcherConfig:{Plugin:journald PluginConfig:map[source:dockerd] LogPath:/var/log/journal Lookback:5m Delay:} BufferSize:10 Source:docker-monitor DefaultConditions:[{Type:CorruptDockerOverlay2 Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoCorruptDockerOverlay2 Message:docker overlay2 is functioning properly}] Rules:[{Type:temporary Condition: Reason:CorruptDockerImage Pattern:Error trying v2 registry: failed to register layer: rename /var/lib/docker/image/(.+) /var/lib/docker/image/(.+): directory not empty.*} {Type:permanent Condition:CorruptDockerOverlay2 Reason:CorruptDockerOverlay2 Pattern:returned error: readlink /var/lib/docker/overlay2.*: invalid argument.*}] EnableMetricsReporting:0xc00034c740}\nI0111 15:56:23.373936 1 log_watchers.go:40] Use log watcher of plugin \"journald\"\nI0111 15:56:23.374129 1 log_monitor.go:79] Finish parsing log monitor config file /config/systemd-monitor.json: {WatcherConfig:{Plugin:journald PluginConfig:map[source:systemd] LogPath:/var/log/journal Lookback:5m Delay:} BufferSize:10 Source:systemd-monitor DefaultConditions:[] Rules:[{Type:temporary Condition: Reason:KubeletStart Pattern:Started Kubernetes kubelet.} {Type:temporary Condition: Reason:DockerStart Pattern:Starting Docker Application Container Engine...} {Type:temporary Condition: Reason:ContainerdStart Pattern:Starting containerd container runtime...}] EnableMetricsReporting:0xc00034c9aa}\nI0111 15:56:23.374152 1 log_watchers.go:40] Use log watcher of plugin \"journald\"\nI0111 15:56:23.375182 1 k8s_exporter.go:54] Waiting for kube-apiserver to be ready (timeout 5m0s)...\nI0111 15:56:58.389200 1 node_problem_detector.go:60] K8s exporter started.\nI0111 15:56:58.389329 1 node_problem_detector.go:64] Prometheus exporter started.\nI0111 15:56:58.389337 1 custom_plugin_monitor.go:112] Start custom plugin monitor /config/kernel-monitor-counter.json\nI0111 15:56:58.389345 1 custom_plugin_monitor.go:112] Start custom plugin monitor /config/systemd-monitor-counter.json\nI0111 15:56:58.389352 1 log_monitor.go:111] Start log monitor /config/kernel-monitor.json\nI0111 15:56:58.389412 1 log_monitor.go:111] Start log monitor /config/docker-monitor.json\nI0111 15:56:58.389568 1 plugin.go:65] Start to run custom plugins\nI0111 15:56:58.390327 1 log_monitor.go:228] Initialize condition generated: [{Type:KernelDeadlock Status:False Transition:2020-01-11 15:56:58.390310947 +0000 UTC m=+35.204458604 Reason:KernelHasNoDeadlock Message:kernel has no deadlock} {Type:ReadonlyFilesystem Status:False Transition:2020-01-11 15:56:58.390311132 +0000 UTC m=+35.204458751 Reason:FilesystemIsNotReadOnly Message:Filesystem is not read-only}]\nI0111 15:56:58.390399 1 custom_plugin_monitor.go:285] 
Initialize condition generated: [{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]\nI0111 15:56:58.390440 1 plugin.go:65] Start to run custom plugins\nI0111 15:56:58.390581 1 custom_plugin_monitor.go:285] Initialize condition generated: [{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]\nI0111 15:56:58.394949 1 log_watcher.go:80] Start watching journald\nI0111 15:56:58.395094 1 log_monitor.go:111] Start log monitor /config/systemd-monitor.json\nI0111 15:56:58.395317 1 log_watcher.go:80] Start watching journald\nI0111 15:56:58.395400 1 system_stats_monitor.go:85] Start system stats monitor /config/system-stats-monitor.json\nI0111 15:56:58.395492 1 problem_detector.go:67] Problem detector started\nI0111 15:56:58.468223 1 log_monitor.go:228] Initialize condition generated: []\nE0111 15:56:58.566290 1 disk_collector.go:145] Error calling lsblk\nI0111 15:56:58.564871 1 log_monitor.go:228] Initialize condition generated: [{Type:CorruptDockerOverlay2 Status:False Transition:2020-01-11 15:56:58.468124243 +0000 UTC m=+35.282271948 Reason:NoCorruptDockerOverlay2 Message:docker overlay2 is functioning properly}]\nI0111 15:56:59.671018 1 log_monitor.go:153] New status generated: &{Source:systemd-monitor Events:[{Severity:warn Timestamp:2020-01-11 15:55:30.913345 +0000 UTC Reason:DockerStart Message:Starting Docker Application Container Engine...}] Conditions:[]}\nI0111 15:56:59.765946 1 log_monitor.go:153] New status generated: &{Source:systemd-monitor Events:[{Severity:warn Timestamp:2020-01-11 15:55:46.677618 +0000 UTC Reason:DockerStart Message:Starting Docker Application Container Engine...}] Conditions:[]}\nI0111 15:57:00.171425 1 log_monitor.go:153] New status generated: &{Source:systemd-monitor Events:[{Severity:warn Timestamp:2020-01-11 15:56:00.645176 +0000 UTC Reason:DockerStart Message:Starting Docker Application Container Engine...}] Conditions:[]}\nI0111 15:57:01.069855 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 15:57:01.070137 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 15:57:01.468358 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 15:57:01.468605 1 plugin.go:96] Finish running custom plugins\nI0111 15:57:01.468518 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 15:57:02.764325 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 15:57:02.764623 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 15:57:04.277813 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 15:57:04.277877 1 plugin.go:96] Finish running custom plugins\nI0111 15:57:04.277925 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 15:57:58.468429 1 disk_collector.go:145] Error calling lsblk\nE0111 15:58:58.468399 1 disk_collector.go:145] Error calling lsblk\nE0111 15:59:58.468400 1 disk_collector.go:145] Error calling lsblk\nE0111 16:00:58.468391 1 disk_collector.go:145] Error calling lsblk\nI0111 16:01:58.389644 1 plugin.go:65] Start to run custom plugins\nI0111 16:01:58.390559 1 plugin.go:65] Start to run custom plugins\nE0111 16:01:58.468434 1 disk_collector.go:145] Error calling lsblk\nI0111 16:02:00.173039 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:02:00.173219 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:02:00.469711 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:02:00.469990 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:02:00.470074 1 plugin.go:96] Finish running custom plugins\nI0111 16:02:01.971825 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:02:01.972110 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:02:03.774844 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:02:03.774900 1 plugin.go:96] Finish running custom plugins\nI0111 16:02:03.774940 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:02:58.468397 1 disk_collector.go:145] Error calling lsblk\nE0111 16:03:58.468404 1 disk_collector.go:145] Error calling lsblk\nE0111 16:04:58.468396 1 disk_collector.go:145] Error calling lsblk\nE0111 16:05:58.468431 1 disk_collector.go:145] Error calling lsblk\nI0111 16:06:58.389669 1 plugin.go:65] Start to run custom plugins\nI0111 16:06:58.390558 1 plugin.go:65] Start to run custom plugins\nE0111 16:06:58.468414 1 disk_collector.go:145] Error calling lsblk\nI0111 16:07:00.067997 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:07:00.068150 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:07:00.463738 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:07:00.464009 1 plugin.go:96] Finish running custom plugins\nI0111 16:07:00.463913 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:07:01.868835 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:07:01.869070 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:07:03.669715 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:07:03.669994 1 plugin.go:96] Finish running custom plugins\nI0111 16:07:03.669815 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:07:58.468402 1 disk_collector.go:145] Error calling lsblk\nE0111 16:08:58.468393 1 disk_collector.go:145] Error calling lsblk\nE0111 16:09:58.468396 1 disk_collector.go:145] Error calling lsblk\nE0111 16:10:58.468409 1 disk_collector.go:145] Error calling lsblk\nI0111 16:11:58.389649 1 plugin.go:65] Start to run custom plugins\nI0111 16:11:58.390486 1 plugin.go:65] Start to run custom plugins\nE0111 16:11:58.468401 1 disk_collector.go:145] Error calling lsblk\nI0111 16:12:00.464779 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:12:00.465038 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:12:00.575853 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:12:00.575910 1 plugin.go:96] Finish running custom plugins\nI0111 16:12:00.575941 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:12:02.369247 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:12:02.369398 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:12:04.268798 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:12:04.268875 1 plugin.go:96] Finish running custom plugins\nI0111 16:12:04.268929 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:12:58.468420 1 disk_collector.go:145] Error calling lsblk\nE0111 16:13:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 16:14:58.468412 1 disk_collector.go:145] Error calling lsblk\nE0111 16:15:58.468411 1 disk_collector.go:145] Error calling lsblk\nI0111 16:16:58.389644 1 plugin.go:65] Start to run custom plugins\nI0111 16:16:58.390559 1 plugin.go:65] Start to run custom plugins\nE0111 16:16:58.468417 1 disk_collector.go:145] Error calling lsblk\nI0111 16:17:00.065859 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:17:00.065934 1 plugin.go:96] Finish running custom plugins\nI0111 16:17:00.065969 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:17:00.467354 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:17:00.467583 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:17:02.069490 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:17:02.069666 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:17:03.673129 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:17:03.673189 1 plugin.go:96] Finish running custom plugins\nI0111 16:17:03.673316 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:17:58.472570 1 disk_collector.go:145] Error calling lsblk\nE0111 16:18:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 16:19:58.468425 1 disk_collector.go:145] Error calling lsblk\nE0111 16:20:58.468401 1 disk_collector.go:145] Error calling lsblk\nI0111 16:21:58.389647 1 plugin.go:65] Start to run custom plugins\nI0111 16:21:58.390551 1 plugin.go:65] Start to run custom plugins\nE0111 16:21:58.468611 1 disk_collector.go:145] Error calling lsblk\nI0111 16:21:59.964489 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:21:59.964576 1 plugin.go:96] Finish running custom plugins\nI0111 16:21:59.964590 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:22:00.374869 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:22:00.374963 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:22:01.972932 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:22:01.973214 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:22:03.765314 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:22:03.765375 1 plugin.go:96] Finish running custom plugins\nI0111 16:22:03.765415 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:22:58.468396 1 disk_collector.go:145] Error calling lsblk\nE0111 16:23:58.468399 1 disk_collector.go:145] Error calling lsblk\nE0111 16:24:58.468415 1 disk_collector.go:145] Error calling lsblk\nE0111 16:25:58.468400 1 disk_collector.go:145] Error calling lsblk\nI0111 16:26:58.389657 1 plugin.go:65] Start to run custom plugins\nI0111 16:26:58.390551 1 plugin.go:65] Start to run custom plugins\nE0111 16:26:58.468393 1 disk_collector.go:145] Error calling lsblk\nI0111 16:27:00.074611 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:27:00.074688 1 plugin.go:96] Finish running custom plugins\nI0111 16:27:00.074763 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:27:00.765244 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:27:00.765339 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:27:02.570361 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:27:02.570555 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:27:04.369752 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:27:04.369848 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:27:04.369955 1 plugin.go:96] Finish running custom plugins\nE0111 16:27:58.468404 1 disk_collector.go:145] Error calling lsblk\nE0111 16:28:58.468394 1 disk_collector.go:145] Error calling lsblk\nE0111 16:29:58.468398 1 disk_collector.go:145] Error calling lsblk\nE0111 16:30:58.468407 1 disk_collector.go:145] Error calling lsblk\nI0111 16:31:58.389653 1 plugin.go:65] Start to run custom plugins\nI0111 16:31:58.390561 1 plugin.go:65] Start to run custom plugins\nE0111 16:31:58.468400 1 disk_collector.go:145] Error calling lsblk\nI0111 16:32:00.072549 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:32:00.072609 1 plugin.go:96] Finish running custom plugins\nI0111 16:32:00.072645 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:32:00.570048 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:32:00.570113 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:32:02.272708 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:32:02.272832 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:32:04.066234 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:32:04.066300 1 plugin.go:96] Finish running custom plugins\nI0111 16:32:04.066481 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:32:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 16:33:58.468429 1 disk_collector.go:145] Error calling lsblk\nE0111 16:34:58.468428 1 disk_collector.go:145] Error calling lsblk\nE0111 16:35:58.468433 1 disk_collector.go:145] Error calling lsblk\nI0111 16:36:58.389644 1 plugin.go:65] Start to run custom plugins\nI0111 16:36:58.390482 1 plugin.go:65] Start to run custom plugins\nE0111 16:36:58.468388 1 disk_collector.go:145] Error calling lsblk\nI0111 16:37:00.072052 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:37:00.072114 1 plugin.go:96] Finish running custom plugins\nI0111 16:37:00.072150 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:37:00.769579 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:37:00.769876 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:37:02.672891 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:37:02.673084 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:37:04.571562 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:37:04.571620 1 plugin.go:96] Finish running custom plugins\nI0111 16:37:04.571654 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:37:58.468409 1 disk_collector.go:145] Error calling lsblk\nE0111 16:38:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 16:39:58.468397 1 disk_collector.go:145] Error calling lsblk\nE0111 16:40:58.468430 1 disk_collector.go:145] Error calling lsblk\nI0111 16:41:58.389651 1 plugin.go:65] Start to run custom plugins\nI0111 16:41:58.390549 1 plugin.go:65] Start to run custom plugins\nE0111 16:41:58.468415 1 disk_collector.go:145] Error calling lsblk\nI0111 16:41:59.965081 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:41:59.965271 1 plugin.go:96] Finish running custom plugins\nI0111 16:41:59.965128 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:42:00.775837 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:42:00.775931 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:42:01.870312 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:42:01.870593 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:42:03.976118 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:42:03.976177 1 plugin.go:96] Finish running custom plugins\nI0111 16:42:03.976356 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:42:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 16:43:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 16:44:58.468398 1 disk_collector.go:145] Error calling lsblk\nE0111 16:45:58.468400 1 disk_collector.go:145] Error calling lsblk\nI0111 16:46:58.389648 1 plugin.go:65] Start to run custom plugins\nI0111 16:46:58.390549 1 plugin.go:65] Start to run custom plugins\nE0111 16:46:58.468396 1 disk_collector.go:145] Error calling lsblk\nI0111 16:46:59.874436 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:46:59.874597 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:46:59.874680 1 plugin.go:96] Finish running custom plugins\nI0111 16:47:00.769132 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:47:00.769319 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:47:02.770626 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:47:02.770857 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:47:04.772696 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:47:04.772757 1 plugin.go:96] Finish running custom plugins\nI0111 16:47:04.772800 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:47:58.468408 1 disk_collector.go:145] Error calling lsblk\nE0111 16:48:58.468400 1 disk_collector.go:145] Error calling lsblk\nE0111 16:49:58.468400 1 disk_collector.go:145] Error calling lsblk\nE0111 16:50:58.468409 1 disk_collector.go:145] Error calling lsblk\nI0111 16:51:58.389644 1 plugin.go:65] Start to run custom plugins\nI0111 16:51:58.390579 1 plugin.go:65] Start to run custom plugins\nE0111 16:51:58.468441 1 disk_collector.go:145] Error calling lsblk\nI0111 16:52:00.067900 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:52:00.068269 1 plugin.go:96] Finish running custom plugins\nI0111 16:52:00.068280 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:52:00.874423 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:52:00.874563 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:52:02.875413 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:52:02.875588 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:52:04.806173 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:52:04.806275 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:52:04.806408 1 plugin.go:96] Finish running custom plugins\nE0111 16:52:58.468418 1 disk_collector.go:145] Error calling lsblk\nE0111 16:53:58.468395 1 disk_collector.go:145] Error calling lsblk\nE0111 16:54:58.468409 1 disk_collector.go:145] Error calling lsblk\nE0111 16:55:58.468411 1 disk_collector.go:145] Error calling lsblk\nI0111 16:56:58.389655 1 plugin.go:65] Start to run custom plugins\nI0111 16:56:58.390587 1 plugin.go:65] Start to run custom plugins\nE0111 16:56:58.468462 1 disk_collector.go:145] Error calling lsblk\nI0111 16:56:59.664715 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 16:56:59.664958 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:56:59.971765 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 16:56:59.972083 1 plugin.go:96] Finish running custom plugins\nI0111 16:56:59.971944 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:57:01.470481 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 16:57:01.470593 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:57:03.274815 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 16:57:03.274884 1 plugin.go:96] Finish running custom plugins\nI0111 16:57:03.274936 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:57:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 16:58:58.468413 1 disk_collector.go:145] Error calling lsblk\nE0111 16:59:58.468402 1 disk_collector.go:145] Error calling lsblk\nE0111 17:00:58.468407 1 disk_collector.go:145] Error calling lsblk\nI0111 17:01:58.389650 1 plugin.go:65] Start to run custom plugins\nI0111 17:01:58.390547 1 plugin.go:65] Start to run custom plugins\nE0111 17:01:58.468391 1 disk_collector.go:145] Error calling lsblk\nI0111 17:02:00.266625 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:02:00.266706 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:02:00.266851 1 plugin.go:96] Finish running custom plugins\nI0111 17:02:00.573252 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:02:00.573476 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:02:02.306301 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:02:02.306585 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:02:04.070842 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:02:04.070901 1 plugin.go:96] Finish running custom plugins\nI0111 17:02:04.070948 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:02:58.468402 1 disk_collector.go:145] Error calling lsblk\nE0111 17:03:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 17:04:58.468410 1 disk_collector.go:145] Error calling lsblk\nE0111 17:05:58.468409 1 disk_collector.go:145] Error calling lsblk\nI0111 17:06:58.389650 1 plugin.go:65] Start to run custom plugins\nI0111 17:06:58.390539 1 plugin.go:65] Start to run custom plugins\nE0111 17:06:58.468395 1 disk_collector.go:145] Error calling lsblk\nI0111 17:07:00.072904 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:07:00.072972 1 plugin.go:96] Finish running custom plugins\nI0111 17:07:00.073071 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:07:00.671612 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:07:00.671788 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:07:02.375080 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:07:02.375346 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:07:04.168241 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:07:04.168308 1 plugin.go:96] Finish running custom plugins\nI0111 17:07:04.168357 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:07:58.468414 1 disk_collector.go:145] Error calling lsblk\nE0111 17:08:58.468406 1 disk_collector.go:145] Error calling lsblk\nE0111 17:09:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 17:10:58.468414 1 disk_collector.go:145] Error calling lsblk\nE0111 17:11:08.409880 1 manager.go:160] failed to update node conditions: Timeout: request did not complete within requested timeout 30s\nE0111 17:11:23.411662 1 manager.go:160] failed to update node conditions: Patch https://100.104.0.1:443/api/v1/nodes/ip-10-250-27-25.ec2.internal/status: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=\"\"\nE0111 17:11:33.414588 1 manager.go:160] failed to update node conditions: Patch https://100.104.0.1:443/api/v1/nodes/ip-10-250-27-25.ec2.internal/status: net/http: TLS handshake timeout\nE0111 17:11:43.417184 1 manager.go:160] failed to update node conditions: Patch https://100.104.0.1:443/api/v1/nodes/ip-10-250-27-25.ec2.internal/status: net/http: TLS handshake timeout\nI0111 17:11:58.389649 1 plugin.go:65] Start to run custom plugins\nI0111 17:11:58.390571 1 plugin.go:65] Start to run custom plugins\nE0111 17:11:58.468423 1 disk_collector.go:145] Error calling lsblk\nI0111 17:11:59.966452 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:11:59.966514 1 plugin.go:96] Finish running custom plugins\nI0111 17:11:59.966616 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:12:00.470262 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:12:00.470501 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:12:02.164457 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:12:02.164596 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:12:03.865706 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:12:03.865763 1 plugin.go:96] Finish running custom plugins\nI0111 17:12:03.865816 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:12:58.468404 1 disk_collector.go:145] Error calling lsblk\nE0111 17:13:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 17:14:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 17:15:58.468406 1 disk_collector.go:145] Error calling lsblk\nI0111 17:16:58.389655 1 plugin.go:65] Start to run custom plugins\nI0111 17:16:58.390513 1 plugin.go:65] Start to run custom plugins\nE0111 17:16:58.468389 1 disk_collector.go:145] Error calling lsblk\nI0111 17:16:59.970711 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:16:59.970781 1 plugin.go:96] Finish running custom plugins\nI0111 17:16:59.970814 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:17:00.468796 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:17:00.468938 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:17:02.075699 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:17:02.075991 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:17:03.770885 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:17:03.770961 1 plugin.go:96] Finish running custom plugins\nI0111 17:17:03.771010 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:17:58.468408 1 disk_collector.go:145] Error calling lsblk\nE0111 17:18:58.468428 1 disk_collector.go:145] Error calling lsblk\nE0111 17:19:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 17:20:58.468425 1 disk_collector.go:145] Error calling lsblk\nI0111 17:21:58.389656 1 plugin.go:65] Start to run custom plugins\nI0111 17:21:58.390549 1 plugin.go:65] Start to run custom plugins\nE0111 17:21:58.468415 1 disk_collector.go:145] Error calling lsblk\nI0111 17:22:00.268862 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:22:00.268937 1 plugin.go:96] Finish running custom plugins\nI0111 17:22:00.268972 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:22:00.465681 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:22:00.465906 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:22:01.168066 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:22:01.168420 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:22:02.773347 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:22:02.773410 1 plugin.go:96] Finish running custom plugins\nI0111 17:22:02.773462 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:22:58.468410 1 disk_collector.go:145] Error calling lsblk\nE0111 17:23:58.468415 1 disk_collector.go:145] Error calling lsblk\nE0111 17:24:58.468426 1 disk_collector.go:145] Error calling lsblk\nE0111 17:25:58.468405 1 disk_collector.go:145] Error calling lsblk\nI0111 17:26:58.389652 1 plugin.go:65] Start to run custom plugins\nI0111 17:26:58.390548 1 plugin.go:65] Start to run custom plugins\nE0111 17:26:58.468353 1 disk_collector.go:145] Error calling lsblk\nI0111 17:26:59.966902 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:26:59.966964 1 plugin.go:96] Finish running custom plugins\nI0111 17:26:59.966981 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:27:00.370139 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:27:00.370405 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:27:01.964043 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:27:01.964217 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:27:03.664574 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:27:03.664631 1 plugin.go:96] Finish running custom plugins\nI0111 17:27:03.664679 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:27:58.468400 1 disk_collector.go:145] Error calling lsblk\nE0111 17:28:58.468399 1 disk_collector.go:145] Error calling lsblk\nE0111 17:29:58.468395 1 disk_collector.go:145] Error calling lsblk\nE0111 17:30:58.468421 1 disk_collector.go:145] Error calling lsblk\nI0111 17:31:58.389648 1 plugin.go:65] Start to run custom plugins\nI0111 17:31:58.390552 1 plugin.go:65] Start to run custom plugins\nE0111 17:31:58.468395 1 disk_collector.go:145] Error calling lsblk\nI0111 17:32:00.063808 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:32:00.063936 1 plugin.go:96] Finish running custom plugins\nI0111 17:32:00.063888 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:32:00.375362 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:32:00.375592 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:32:01.420221 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:32:01.420516 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:32:01.973412 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:32:01.973478 1 plugin.go:96] Finish running custom plugins\nI0111 17:32:01.973553 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:32:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 17:33:58.468415 1 disk_collector.go:145] Error calling lsblk\nE0111 17:34:58.468431 1 disk_collector.go:145] Error calling lsblk\nE0111 17:35:58.468402 1 disk_collector.go:145] Error calling lsblk\nI0111 17:36:58.389648 1 plugin.go:65] Start to run custom plugins\nI0111 17:36:58.390547 1 plugin.go:65] Start to run custom plugins\nE0111 17:36:58.468401 1 disk_collector.go:145] Error calling lsblk\nI0111 17:37:00.071677 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:37:00.071757 1 plugin.go:96] Finish running custom plugins\nI0111 17:37:00.071794 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:37:00.569896 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:37:00.570162 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:37:02.175489 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:37:02.175810 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:37:02.964289 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:37:02.964688 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:37:02.965939 1 plugin.go:96] Finish running custom plugins\nE0111 17:37:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 17:38:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 17:39:58.468411 1 disk_collector.go:145] Error calling lsblk\nE0111 17:40:58.468408 1 disk_collector.go:145] Error calling lsblk\nI0111 17:41:58.389656 1 plugin.go:65] Start to run custom plugins\nI0111 17:41:58.390552 1 plugin.go:65] Start to run custom plugins\nE0111 17:41:58.468384 1 disk_collector.go:145] Error calling lsblk\nI0111 17:42:00.068981 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:42:00.069044 1 plugin.go:96] Finish running custom plugins\nI0111 17:42:00.069076 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:42:00.474455 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:42:00.474574 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:42:02.168713 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:42:02.169021 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:42:03.870081 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:42:03.870144 1 plugin.go:96] Finish running custom plugins\nI0111 17:42:03.870198 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:42:58.468413 1 disk_collector.go:145] Error calling lsblk\nE0111 17:43:58.468404 1 disk_collector.go:145] Error calling lsblk\nE0111 17:44:58.468410 1 disk_collector.go:145] Error calling lsblk\nE0111 17:45:58.468409 1 disk_collector.go:145] Error calling lsblk\nI0111 17:46:58.389659 1 plugin.go:65] Start to run custom plugins\nI0111 17:46:58.390553 1 plugin.go:65] Start to run custom plugins\nE0111 17:46:58.468423 1 disk_collector.go:145] Error calling lsblk\nI0111 17:46:59.970514 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:46:59.970633 1 plugin.go:96] Finish running custom plugins\nI0111 17:46:59.970668 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:47:00.668899 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:47:00.669315 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:47:02.568913 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:47:02.569207 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:47:04.467623 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:47:04.467690 1 plugin.go:96] Finish running custom plugins\nI0111 17:47:04.467741 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:47:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 17:48:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 17:49:58.468418 1 disk_collector.go:145] Error calling lsblk\nE0111 17:50:58.468405 1 disk_collector.go:145] Error calling lsblk\nI0111 17:51:58.389657 1 plugin.go:65] Start to run custom plugins\nI0111 17:51:58.390552 1 plugin.go:65] Start to run custom plugins\nE0111 17:51:58.468404 1 disk_collector.go:145] Error calling lsblk\nI0111 17:51:59.973400 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:51:59.973484 1 plugin.go:96] Finish running custom plugins\nI0111 17:51:59.973537 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:52:00.873400 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:52:00.873456 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:52:02.967707 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:52:02.967898 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:52:04.164250 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:52:04.164318 1 plugin.go:96] Finish running custom plugins\nI0111 17:52:04.164472 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:52:58.468405 1 disk_collector.go:145] Error calling lsblk\nE0111 17:53:58.468411 1 disk_collector.go:145] Error calling lsblk\nE0111 17:54:58.468405 1 disk_collector.go:145] Error calling lsblk\nE0111 17:55:58.468399 1 disk_collector.go:145] Error calling lsblk\nI0111 17:56:58.389642 1 plugin.go:65] Start to run custom plugins\nI0111 17:56:58.390556 1 plugin.go:65] Start to run custom plugins\nE0111 17:56:58.468402 1 disk_collector.go:145] Error calling lsblk\nI0111 17:57:00.169104 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 17:57:00.169178 1 plugin.go:96] Finish running custom plugins\nI0111 17:57:00.169211 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:57:00.865630 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 17:57:00.865730 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:57:02.967115 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 17:57:02.967415 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:57:05.065873 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 17:57:05.065939 1 plugin.go:96] Finish running custom plugins\nI0111 17:57:05.065989 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:57:58.468404 1 disk_collector.go:145] Error calling lsblk\nE0111 17:58:58.468406 1 disk_collector.go:145] Error calling lsblk\nE0111 17:59:58.468439 1 disk_collector.go:145] Error calling lsblk\nE0111 18:00:58.468420 1 disk_collector.go:145] Error calling lsblk\nI0111 18:01:58.389654 1 plugin.go:65] Start to run custom plugins\nI0111 18:01:58.390483 1 plugin.go:65] Start to run custom plugins\nE0111 18:01:58.468401 1 disk_collector.go:145] Error calling lsblk\nI0111 18:02:00.064670 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:02:00.064735 1 plugin.go:96] Finish running custom plugins\nI0111 18:02:00.064770 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:02:00.772328 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:02:00.772494 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:02:02.773168 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:02:02.773336 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:02:04.768154 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:02:04.768223 1 plugin.go:96] Finish running custom plugins\nI0111 18:02:04.768272 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:02:58.468396 1 disk_collector.go:145] Error calling lsblk\nE0111 18:03:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 18:04:58.468396 1 disk_collector.go:145] Error calling lsblk\nE0111 18:05:58.468413 1 disk_collector.go:145] Error calling lsblk\nI0111 18:06:58.389651 1 plugin.go:65] Start to run custom plugins\nI0111 18:06:58.390584 1 plugin.go:65] Start to run custom plugins\nE0111 18:06:58.468496 1 disk_collector.go:145] Error calling lsblk\nI0111 18:06:59.973750 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:06:59.973830 1 plugin.go:96] Finish running custom plugins\nI0111 18:06:59.973871 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:07:00.806115 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:07:00.806401 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:07:01.773276 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:07:01.773604 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:07:03.876209 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:07:03.876278 1 plugin.go:96] Finish running custom plugins\nI0111 18:07:03.876329 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:07:58.468406 1 disk_collector.go:145] Error calling lsblk\nE0111 18:08:58.468404 1 disk_collector.go:145] Error calling lsblk\nE0111 18:09:58.468431 1 disk_collector.go:145] Error calling lsblk\nE0111 18:10:58.468421 1 disk_collector.go:145] Error calling lsblk\nI0111 18:11:58.389653 1 plugin.go:65] Start to run custom plugins\nI0111 18:11:58.390577 1 plugin.go:65] Start to run custom plugins\nE0111 18:11:58.468395 1 disk_collector.go:145] Error calling lsblk\nI0111 18:11:59.974721 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:11:59.974799 1 plugin.go:96] Finish running custom plugins\nI0111 18:11:59.974986 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:12:00.675246 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:12:00.675580 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:12:02.474803 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:12:02.474896 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:12:04.369603 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:12:04.369695 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:12:04.369812 1 plugin.go:96] Finish running custom plugins\nE0111 18:12:58.468396 1 disk_collector.go:145] Error calling lsblk\nE0111 18:13:58.468411 1 disk_collector.go:145] Error calling lsblk\nE0111 18:14:58.468402 1 disk_collector.go:145] Error calling lsblk\nE0111 18:15:58.468411 1 disk_collector.go:145] Error calling lsblk\nI0111 18:16:58.389658 1 plugin.go:65] Start to run custom plugins\nI0111 18:16:58.390552 1 plugin.go:65] Start to run custom plugins\nE0111 18:16:58.468416 1 disk_collector.go:145] Error calling lsblk\nI0111 18:16:59.572900 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:16:59.573298 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:16:59.966738 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:16:59.966818 1 plugin.go:96] Finish running custom plugins\nI0111 18:16:59.966856 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:17:01.572824 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:17:01.572919 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:17:03.372278 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:17:03.372393 1 plugin.go:96] Finish running custom plugins\nI0111 18:17:03.372344 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:17:58.468406 1 disk_collector.go:145] Error calling lsblk\nE0111 18:18:58.468422 1 disk_collector.go:145] Error calling lsblk\nE0111 18:19:58.468437 1 disk_collector.go:145] Error calling lsblk\nE0111 18:20:58.468413 1 disk_collector.go:145] Error calling lsblk\nI0111 18:21:58.389654 1 plugin.go:65] Start to run custom plugins\nI0111 18:21:58.390479 1 plugin.go:65] Start to run custom plugins\nE0111 18:21:58.468413 1 disk_collector.go:145] Error calling lsblk\nI0111 18:22:00.068488 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:22:00.068578 1 plugin.go:96] Finish running custom plugins\nI0111 18:22:00.068616 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:22:00.868212 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:22:00.868264 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:22:02.970300 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:22:02.970393 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:22:05.070113 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:22:05.070191 1 plugin.go:96] Finish running custom plugins\nI0111 18:22:05.070242 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:22:58.468421 1 disk_collector.go:145] Error calling lsblk\nE0111 18:23:58.468428 1 disk_collector.go:145] Error calling lsblk\nE0111 18:24:58.468403 1 disk_collector.go:145] Error calling lsblk\nE0111 18:25:58.468421 1 disk_collector.go:145] Error calling lsblk\nI0111 18:26:58.389658 1 plugin.go:65] Start to run custom plugins\nI0111 18:26:58.390552 1 plugin.go:65] Start to run custom plugins\nE0111 18:26:58.468423 1 disk_collector.go:145] Error calling lsblk\nI0111 18:27:00.063899 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:27:00.064239 1 plugin.go:96] Finish running custom plugins\nI0111 18:27:00.064083 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:27:00.073661 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:27:00.073830 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:27:01.263856 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:27:01.264044 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:27:03.377817 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:27:03.377885 1 plugin.go:96] Finish running custom plugins\nI0111 18:27:03.377935 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:27:58.468397 1 disk_collector.go:145] Error calling lsblk\nE0111 18:28:58.468405 1 disk_collector.go:145] Error calling lsblk\nE0111 18:29:58.468420 1 disk_collector.go:145] Error calling lsblk\nE0111 18:30:58.468414 1 disk_collector.go:145] Error calling lsblk\nI0111 18:31:58.389657 1 plugin.go:65] Start to run custom plugins\nI0111 18:31:58.390578 1 plugin.go:65] Start to run custom plugins\nE0111 18:31:58.468385 1 disk_collector.go:145] Error calling lsblk\nI0111 18:31:59.967035 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:31:59.967098 1 plugin.go:96] Finish running custom plugins\nI0111 18:31:59.967133 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:32:00.774426 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:32:00.774499 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:32:02.771346 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:32:02.771645 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:32:04.868111 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:32:04.868182 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:32:04.868224 1 plugin.go:96] Finish running custom plugins\nE0111 18:32:58.468431 1 disk_collector.go:145] Error calling lsblk\nE0111 18:33:58.468417 1 disk_collector.go:145] Error calling lsblk\nE0111 18:34:58.468410 1 disk_collector.go:145] Error calling lsblk\nE0111 18:35:58.468411 1 disk_collector.go:145] Error calling lsblk\nI0111 18:36:58.389641 1 plugin.go:65] Start to run custom plugins\nI0111 18:36:58.390478 1 plugin.go:65] Start to run custom plugins\nE0111 18:36:58.468411 1 disk_collector.go:145] Error calling lsblk\nI0111 18:37:00.168510 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:37:00.168600 1 plugin.go:96] Finish running custom plugins\nI0111 18:37:00.168638 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:37:00.968914 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:37:00.969128 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:37:03.070243 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:37:03.070401 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:37:04.214962 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:37:04.215030 1 plugin.go:96] Finish running custom plugins\nI0111 18:37:04.215078 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:37:58.468410 1 disk_collector.go:145] Error calling lsblk\nE0111 18:38:58.468432 1 disk_collector.go:145] Error calling lsblk\nE0111 18:39:58.468421 1 disk_collector.go:145] Error calling lsblk\nE0111 18:40:58.468414 1 disk_collector.go:145] Error calling lsblk\nI0111 18:41:58.389653 1 plugin.go:65] Start to run custom plugins\nI0111 18:41:58.390483 1 plugin.go:65] Start to run custom plugins\nE0111 18:41:58.468736 1 disk_collector.go:145] Error calling lsblk\nI0111 18:42:00.064799 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:42:00.064864 1 plugin.go:96] Finish running custom plugins\nI0111 18:42:00.064914 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:42:00.403359 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:42:00.403350 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:42:01.468824 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:42:01.468918 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:42:02.470381 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:42:02.470457 1 plugin.go:96] Finish running custom plugins\nI0111 18:42:02.470547 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:42:58.468405 1 disk_collector.go:145] Error calling lsblk\nE0111 18:43:58.468406 1 disk_collector.go:145] Error calling lsblk\nE0111 18:44:58.468396 1 disk_collector.go:145] Error calling lsblk\nE0111 18:45:58.468400 1 disk_collector.go:145] Error calling lsblk\nI0111 18:46:58.389649 1 plugin.go:65] Start to run custom plugins\nI0111 18:46:58.390577 1 plugin.go:65] Start to run custom plugins\nE0111 18:46:58.468412 1 disk_collector.go:145] Error calling lsblk\nI0111 18:47:00.168781 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:47:00.168871 1 plugin.go:96] Finish running custom plugins\nI0111 18:47:00.168926 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:47:00.670140 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:47:00.670206 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:47:01.470790 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:47:01.470956 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:47:03.365126 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:47:03.365192 1 plugin.go:96] Finish running custom plugins\nI0111 18:47:03.365243 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:47:58.468398 1 disk_collector.go:145] Error calling lsblk\nE0111 18:48:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 18:49:58.468422 1 disk_collector.go:145] Error calling lsblk\nE0111 18:50:58.468417 1 disk_collector.go:145] Error calling lsblk\nI0111 18:51:58.389663 1 plugin.go:65] Start to run custom plugins\nI0111 18:51:58.390516 1 plugin.go:65] Start to run custom plugins\nE0111 18:51:58.468400 1 disk_collector.go:145] Error calling lsblk\nI0111 18:52:00.064002 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:52:00.064405 1 plugin.go:96] Finish running custom plugins\nI0111 18:52:00.064289 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:52:01.665574 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:52:01.665740 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:52:04.570639 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:52:04.570930 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:52:07.464826 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:52:07.464904 1 plugin.go:96] Finish running custom plugins\nI0111 18:52:07.464971 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:52:58.468368 1 disk_collector.go:145] Error calling lsblk\nE0111 18:53:58.468406 1 disk_collector.go:145] Error calling lsblk\nE0111 18:54:58.468406 1 disk_collector.go:145] Error calling lsblk\nE0111 18:55:58.468433 1 disk_collector.go:145] Error calling lsblk\nI0111 18:56:58.389669 1 plugin.go:65] Start to run custom plugins\nI0111 18:56:58.390545 1 plugin.go:65] Start to run custom plugins\nE0111 18:56:58.468403 1 disk_collector.go:145] Error calling lsblk\nI0111 18:57:00.065062 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 18:57:00.065203 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:57:00.065329 1 plugin.go:96] Finish running custom plugins\nI0111 18:57:01.575067 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 18:57:01.575259 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:57:04.465406 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 18:57:04.465621 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:57:07.270309 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 18:57:07.270373 1 plugin.go:96] Finish running custom plugins\nI0111 18:57:07.270424 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:57:58.468423 1 disk_collector.go:145] Error calling lsblk\nE0111 18:58:58.468400 1 disk_collector.go:145] Error calling lsblk\nE0111 18:59:58.468402 1 disk_collector.go:145] Error calling lsblk\nE0111 19:00:58.468403 1 disk_collector.go:145] Error calling lsblk\nI0111 19:01:58.389634 1 plugin.go:65] Start to run custom plugins\nI0111 19:01:58.390544 1 plugin.go:65] Start to run custom plugins\nE0111 19:01:58.468409 1 disk_collector.go:145] Error calling lsblk\nI0111 19:01:59.970204 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:01:59.970455 1 plugin.go:96] Finish running custom plugins\nI0111 19:01:59.970383 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:02:01.473997 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:02:01.474233 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:02:04.173212 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:02:04.173487 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:02:06.875106 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:02:06.875177 1 plugin.go:96] Finish running custom plugins\nI0111 19:02:06.875227 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:02:58.468414 1 disk_collector.go:145] Error calling lsblk\nE0111 19:03:58.468404 1 disk_collector.go:145] Error calling lsblk\nE0111 19:04:58.468423 1 disk_collector.go:145] Error calling lsblk\nE0111 19:05:58.468409 1 disk_collector.go:145] Error calling lsblk\nI0111 19:06:58.389650 1 plugin.go:65] Start to run custom plugins\nI0111 19:06:58.390551 1 plugin.go:65] Start to run custom plugins\nE0111 19:06:58.468413 1 disk_collector.go:145] Error calling lsblk\nI0111 19:07:00.165075 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:07:00.165321 1 plugin.go:96] Finish running custom plugins\nI0111 19:07:00.165230 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:07:01.669097 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:07:01.669192 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:07:03.565107 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:07:03.565267 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:07:06.367110 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:07:06.367178 1 plugin.go:96] Finish running custom plugins\nI0111 19:07:06.367228 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:07:58.468402 1 disk_collector.go:145] Error calling lsblk\nE0111 19:08:58.468413 1 disk_collector.go:145] Error calling lsblk\nE0111 19:09:58.468396 1 disk_collector.go:145] Error calling lsblk\nE0111 19:10:58.468412 1 disk_collector.go:145] Error calling lsblk\nI0111 19:11:58.389730 1 plugin.go:65] Start to run custom plugins\nI0111 19:11:58.390551 1 plugin.go:65] Start to run custom plugins\nE0111 19:11:58.468439 1 disk_collector.go:145] Error calling lsblk\nI0111 19:12:00.068353 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:12:00.068471 1 plugin.go:96] Finish running custom plugins\nI0111 19:12:00.068607 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:12:00.964093 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:12:00.964341 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:12:03.173081 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:12:03.173222 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:12:05.468045 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:12:05.468227 1 plugin.go:96] Finish running custom plugins\nI0111 19:12:05.468117 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:12:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 19:13:58.468410 1 disk_collector.go:145] Error calling lsblk\nE0111 19:14:58.468437 1 disk_collector.go:145] Error calling lsblk\nE0111 19:15:58.468416 1 disk_collector.go:145] Error calling lsblk\nI0111 19:16:58.389659 1 plugin.go:65] Start to run custom plugins\nI0111 19:16:58.390554 1 plugin.go:65] Start to run custom plugins\nE0111 19:16:58.468751 1 disk_collector.go:145] Error calling lsblk\nI0111 19:17:00.066347 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:17:00.066415 1 plugin.go:96] Finish running custom plugins\nI0111 19:17:00.066450 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:17:00.970608 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:17:00.970844 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:17:03.074968 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:17:03.075134 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:17:05.174720 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:17:05.174785 1 plugin.go:96] Finish running custom plugins\nI0111 19:17:05.174834 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:17:58.468420 1 disk_collector.go:145] Error calling lsblk\nE0111 19:18:58.468422 1 disk_collector.go:145] Error calling lsblk\nE0111 19:19:58.468402 1 disk_collector.go:145] Error calling lsblk\nE0111 19:20:58.468435 1 disk_collector.go:145] Error calling lsblk\nI0111 19:21:58.389656 1 plugin.go:65] Start to run custom plugins\nI0111 19:21:58.390563 1 plugin.go:65] Start to run custom plugins\nE0111 19:21:58.468434 1 disk_collector.go:145] Error calling lsblk\nI0111 19:21:59.969830 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:21:59.969900 1 plugin.go:96] Finish running custom plugins\nI0111 19:21:59.970063 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:22:00.869965 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:22:00.870231 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:22:02.975464 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:22:02.975612 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:22:05.072673 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:22:05.072732 1 plugin.go:96] Finish running custom plugins\nI0111 19:22:05.072773 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:22:58.468411 1 disk_collector.go:145] Error calling lsblk\nE0111 19:23:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 19:24:58.468397 1 disk_collector.go:145] Error calling lsblk\nE0111 19:25:58.468410 1 disk_collector.go:145] Error calling lsblk\nI0111 19:26:58.389666 1 plugin.go:65] Start to run custom plugins\nI0111 19:26:58.390552 1 plugin.go:65] Start to run custom plugins\nE0111 19:26:58.468427 1 disk_collector.go:145] Error calling lsblk\nI0111 19:26:59.982997 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:26:59.983094 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:27:00.166103 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:27:00.166460 1 plugin.go:96] Finish running custom plugins\nI0111 19:27:00.166367 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:27:01.364388 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:27:01.364583 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:27:03.477407 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:27:03.477502 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:27:03.477647 1 plugin.go:96] Finish running custom plugins\nE0111 19:27:58.468414 1 disk_collector.go:145] Error calling lsblk\nE0111 19:28:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 19:29:58.468401 1 disk_collector.go:145] Error calling lsblk\nE0111 19:30:58.468400 1 disk_collector.go:145] Error calling lsblk\nI0111 19:31:58.389664 1 plugin.go:65] Start to run custom plugins\nI0111 19:31:58.390537 1 plugin.go:65] Start to run custom plugins\nE0111 19:31:58.468417 1 disk_collector.go:145] Error calling lsblk\nI0111 19:32:00.068809 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:32:00.068959 1 plugin.go:96] Finish running custom plugins\nI0111 19:32:00.068848 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:32:00.564902 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:32:00.565211 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:32:01.873156 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:32:01.873443 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:32:03.278217 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:32:03.278301 1 plugin.go:96] Finish running custom plugins\nI0111 19:32:03.278372 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:32:58.468398 1 disk_collector.go:145] Error calling lsblk\nE0111 19:33:58.468403 1 disk_collector.go:145] Error calling lsblk\nE0111 19:34:58.468419 1 disk_collector.go:145] Error calling lsblk\nE0111 19:35:58.468408 1 disk_collector.go:145] Error calling lsblk\nI0111 19:36:58.389664 1 plugin.go:65] Start to run custom plugins\nI0111 19:36:58.390570 1 plugin.go:65] Start to run custom plugins\nE0111 19:36:58.468419 1 disk_collector.go:145] Error calling lsblk\nI0111 19:37:00.165318 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:37:00.165786 1 plugin.go:96] Finish running custom plugins\nI0111 19:37:00.165681 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:37:01.282589 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:37:01.282755 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:37:03.764174 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:37:03.764449 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:37:06.164716 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:37:06.164832 1 plugin.go:96] Finish running custom plugins\nI0111 19:37:06.164791 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:37:58.468398 1 disk_collector.go:145] Error calling lsblk\nE0111 19:38:58.468425 1 disk_collector.go:145] Error calling lsblk\nE0111 19:39:58.468430 1 disk_collector.go:145] Error calling lsblk\nE0111 19:40:58.468412 1 disk_collector.go:145] Error calling lsblk\nI0111 19:41:58.389653 1 plugin.go:65] Start to run custom plugins\nI0111 19:41:58.390562 1 plugin.go:65] Start to run custom plugins\nE0111 19:41:58.468405 1 disk_collector.go:145] Error calling lsblk\nI0111 19:42:00.174002 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:42:00.174079 1 plugin.go:96] Finish running custom plugins\nI0111 19:42:00.174117 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:42:02.269881 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:42:02.270268 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:42:05.768303 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:42:05.768559 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:42:09.583301 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:42:09.583386 1 plugin.go:96] Finish running custom plugins\nI0111 19:42:09.583468 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:42:58.468426 1 disk_collector.go:145] Error calling lsblk\nE0111 19:43:58.468406 1 disk_collector.go:145] Error calling lsblk\nE0111 19:44:58.468426 1 disk_collector.go:145] Error calling lsblk\nE0111 19:45:58.468410 1 disk_collector.go:145] Error calling lsblk\nI0111 19:46:58.389661 1 plugin.go:65] Start to run custom plugins\nI0111 19:46:58.391176 1 plugin.go:65] Start to run custom plugins\nE0111 19:46:58.468407 1 disk_collector.go:145] Error calling lsblk\nI0111 19:47:00.165636 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:47:00.166110 1 plugin.go:96] Finish running custom plugins\nI0111 19:47:00.165980 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:47:03.765778 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:47:03.765876 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:47:08.966942 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:47:08.967158 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:47:14.165790 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:47:14.165893 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:47:14.166055 1 plugin.go:96] Finish running custom plugins\nE0111 19:47:58.468427 1 disk_collector.go:145] Error calling lsblk\nE0111 19:48:58.468406 1 disk_collector.go:145] Error calling lsblk\nE0111 19:49:58.468452 1 disk_collector.go:145] Error calling lsblk\nE0111 19:50:58.468441 1 disk_collector.go:145] Error calling lsblk\nI0111 19:51:58.390575 1 plugin.go:65] Start to run custom plugins\nI0111 19:51:58.391846 1 plugin.go:65] Start to run custom plugins\nE0111 19:51:58.468403 1 disk_collector.go:145] Error calling lsblk\nI0111 19:52:00.364267 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:52:00.364563 1 plugin.go:96] Finish running custom plugins\nI0111 19:52:00.364489 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:52:04.171412 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:52:04.171585 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:52:09.668303 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:52:09.668492 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:52:15.768861 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:52:15.768921 1 plugin.go:96] Finish running custom plugins\nI0111 19:52:15.768974 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:52:58.468430 1 disk_collector.go:145] Error calling lsblk\nE0111 19:53:58.468415 1 disk_collector.go:145] Error calling lsblk\nE0111 19:54:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 19:55:58.468436 1 disk_collector.go:145] Error calling lsblk\nI0111 19:56:58.389661 1 plugin.go:65] Start to run custom plugins\nI0111 19:56:58.390499 1 plugin.go:65] Start to run custom plugins\nE0111 19:56:58.468451 1 disk_collector.go:145] Error calling lsblk\nI0111 19:57:00.263099 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 19:57:00.263190 1 plugin.go:96] Finish running custom plugins\nI0111 19:57:00.263230 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:57:03.572068 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 19:57:03.572313 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:57:08.667232 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 19:57:08.667457 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:57:13.670404 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 19:57:13.670537 1 plugin.go:96] Finish running custom plugins\nI0111 19:57:13.670623 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:57:58.468408 1 disk_collector.go:145] Error calling lsblk\nE0111 19:58:58.468423 1 disk_collector.go:145] Error calling lsblk\nE0111 19:59:58.468428 1 disk_collector.go:145] Error calling lsblk\nE0111 20:00:58.468409 1 disk_collector.go:145] Error calling lsblk\nI0111 20:01:58.389649 1 plugin.go:65] Start to run custom plugins\nI0111 20:01:58.390553 1 plugin.go:65] Start to run custom plugins\nE0111 20:01:58.468419 1 disk_collector.go:145] Error calling lsblk\nI0111 20:02:00.274758 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 20:02:00.274847 1 plugin.go:96] Finish running custom plugins\nI0111 20:02:00.274885 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:02:03.576729 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 20:02:03.576909 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:02:08.370672 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 20:02:08.371023 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:02:12.969254 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 20:02:12.969320 1 plugin.go:96] Finish running custom plugins\nI0111 20:02:12.969370 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:02:58.468440 1 disk_collector.go:145] Error calling lsblk\nE0111 20:03:58.468410 1 disk_collector.go:145] Error calling lsblk\nE0111 20:04:58.468407 1 disk_collector.go:145] Error calling lsblk\nE0111 20:05:58.468419 1 disk_collector.go:145] Error calling lsblk\nI0111 20:06:58.389662 1 plugin.go:65] Start to run custom plugins\nI0111 20:06:58.390554 1 plugin.go:65] Start to run custom plugins\nE0111 20:06:58.468417 1 disk_collector.go:145] Error calling lsblk\nI0111 20:06:59.970120 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 20:06:59.970215 1 plugin.go:96] Finish running custom plugins\nI0111 20:06:59.970286 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:07:02.668099 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 20:07:02.668402 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:07:06.773569 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 20:07:06.773766 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:07:10.969493 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:07:10.969515 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 20:07:10.969806 1 plugin.go:96] Finish running custom plugins\nE0111 20:07:58.468423 1 disk_collector.go:145] Error calling lsblk\nE0111 20:08:58.468400 1 disk_collector.go:145] Error calling lsblk\nE0111 20:09:58.468403 1 disk_collector.go:145] Error calling lsblk\nE0111 20:10:58.468434 1 disk_collector.go:145] Error calling lsblk\nI0111 20:11:58.389664 1 plugin.go:65] Start to run custom plugins\nI0111 20:11:58.390545 1 plugin.go:65] Start to run custom plugins\nE0111 20:11:58.468415 1 disk_collector.go:145] Error calling lsblk\nI0111 20:12:00.281335 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 20:12:00.281419 1 plugin.go:96] Finish running custom plugins\nI0111 20:12:00.281458 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:12:03.065173 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 20:12:03.065483 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:12:07.369682 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 20:12:07.369784 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:12:11.564255 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 20:12:11.564369 1 plugin.go:96] Finish running custom plugins\nI0111 20:12:11.564431 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:12:58.468425 1 disk_collector.go:145] Error calling lsblk\nE0111 20:13:58.468398 1 disk_collector.go:145] Error calling lsblk\nE0111 20:14:58.468439 1 disk_collector.go:145] Error calling lsblk\nE0111 20:15:58.468427 1 disk_collector.go:145] Error calling lsblk\nI0111 20:16:58.389652 1 plugin.go:65] Start to run custom plugins\nI0111 20:16:58.390581 1 plugin.go:65] Start to run custom plugins\nE0111 20:16:58.468420 1 disk_collector.go:145] Error calling lsblk\nI0111 20:17:00.275709 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 20:17:00.275839 1 plugin.go:96] Finish running custom plugins\nI0111 20:17:00.275890 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:17:03.271246 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 20:17:03.271489 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:17:07.770267 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 20:17:07.770429 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:17:12.364252 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 20:17:12.364310 1 plugin.go:96] Finish running custom plugins\nI0111 20:17:12.364369 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:17:58.468409 1 disk_collector.go:145] Error calling lsblk\nE0111 20:18:58.468420 1 disk_collector.go:145] Error calling lsblk\nE0111 20:19:58.468409 1 disk_collector.go:145] Error calling lsblk\nE0111 20:20:58.468405 1 disk_collector.go:145] Error calling lsblk\nI0111 20:21:58.389668 1 plugin.go:65] Start to run custom plugins\nI0111 20:21:58.390551 1 plugin.go:65] Start to run custom plugins\nE0111 20:21:58.468407 1 disk_collector.go:145] Error calling lsblk\nI0111 20:22:00.163088 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 20:22:00.163173 1 plugin.go:96] Finish running custom plugins\nI0111 20:22:00.163209 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:22:02.866631 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 20:22:02.866909 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:22:07.975124 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 20:22:07.975390 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:22:11.964254 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 20:22:11.964369 1 plugin.go:96] Finish running custom plugins\nI0111 20:22:11.964487 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:22:58.468421 1 disk_collector.go:145] Error calling lsblk\nE0111 20:23:58.468408 1 disk_collector.go:145] Error calling lsblk\nE0111 20:24:58.468415 1 disk_collector.go:145] Error calling lsblk\nE0111 20:25:58.468422 1 disk_collector.go:145] Error calling lsblk\nI0111 20:26:58.389663 1 plugin.go:65] Start to run custom plugins\nI0111 20:26:58.390553 1 plugin.go:65] Start to run custom plugins\nE0111 20:26:58.563185 1 disk_collector.go:145] Error calling lsblk\nI0111 20:27:00.174830 1 plugin.go:91] Add check result {Rule:0xc0001473b0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc000329350 Timeout:1m0s}\nI0111 20:27:00.174911 1 plugin.go:96] Finish running custom plugins\nI0111 20:27:00.174951 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:58.390388959 +0000 UTC m=+35.204536572 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:27:02.764745 1 plugin.go:91] Add check result {Rule:0xc000147490 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc000329420 Timeout:1m0s}\nI0111 20:27:02.764898 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:27:06.971594 1 plugin.go:91] Add check result {Rule:0xc000147500 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc000329440 Timeout:1m0s}\nI0111 20:27:06.971929 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:27:10.965037 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:58.390572204 +0000 UTC m=+35.204719825 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:58.390572341 +0000 UTC m=+35.204719950 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:58.390572445 +0000 UTC m=+35.204720055 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:27:10.964975 1 plugin.go:91] Add check result {Rule:0xc000147570 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc000329450 Timeout:1m0s}\nI0111 20:27:10.965100 1 plugin.go:96] Finish running custom plugins\nE0111 20:27:58.468421 1 disk_collector.go:145] Error calling lsblk\nE0111 20:28:58.468408 1 disk_collector.go:145] Error calling lsblk\n==== END logs for container node-problem-detector of pod kube-system/node-problem-detector-9z5sq ====\n==== START logs for container node-problem-detector of pod kube-system/node-problem-detector-jx2p4 ====\nI0111 15:56:28.145012 1 custom_plugin_monitor.go:81] Finish parsing custom plugin monitor config file /config/kernel-monitor-counter.json: {Plugin:custom PluginGlobalConfig:{InvokeIntervalString:0xc00037f910 TimeoutString:0xc00037f920 InvokeInterval:5m0s Timeout:1m0s MaxOutputLength:0xc0000496a8 Concurrency:0xc0000496b8 EnableMessageChangeBasedConditionUpdate:0x1e125a4} Source:kernel-monitor DefaultConditions:[{Type:FrequentUnregisterNetDevice Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}] Rules:[0xc0002c8bd0] EnableMetricsReporting:0xc0000496ce}\nI0111 15:56:28.145206 1 custom_plugin_monitor.go:81] Finish parsing custom plugin monitor config file /config/systemd-monitor-counter.json: {Plugin:custom PluginGlobalConfig:{InvokeIntervalString:0xc00037f9e0 TimeoutString:0xc00037f9f0 InvokeInterval:5m0s Timeout:1m0s MaxOutputLength:0xc0000498c0 Concurrency:0xc0000498d0 EnableMessageChangeBasedConditionUpdate:0x1e125a4} Source:systemd-monitor DefaultConditions:[{Type:FrequentKubeletRestart Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoFrequentDockerRestart Message:docker is 
functioning properly} {Type:FrequentContainerdRestart Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}] Rules:[0xc0002c8cb0 0xc0002c8d20 0xc0002c8d90] EnableMetricsReporting:0xc0000498d8}\nI0111 15:56:28.145549 1 log_monitor.go:79] Finish parsing log monitor config file /config/kernel-monitor.json: {WatcherConfig:{Plugin:kmsg PluginConfig:map[] LogPath:/dev/kmsg Lookback:5m Delay:} BufferSize:10 Source:kernel-monitor DefaultConditions:[{Type:KernelDeadlock Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:KernelHasNoDeadlock Message:kernel has no deadlock} {Type:ReadonlyFilesystem Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:FilesystemIsNotReadOnly Message:Filesystem is not read-only}] Rules:[{Type:temporary Condition: Reason:OOMKilling Pattern:Kill process \\d+ (.+) score \\d+ or sacrifice child\\nKilled process \\d+ (.+) total-vm:\\d+kB, anon-rss:\\d+kB, file-rss:\\d+kB.*} {Type:temporary Condition: Reason:TaskHung Pattern:task \\S+:\\w+ blocked for more than \\w+ seconds\\.} {Type:temporary Condition: Reason:UnregisterNetDevice Pattern:unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+} {Type:temporary Condition: Reason:KernelOops Pattern:BUG: unable to handle kernel NULL pointer dereference at .*} {Type:temporary Condition: Reason:KernelOops Pattern:divide error: 0000 \\[#\\d+\\] SMP} {Type:permanent Condition:KernelDeadlock Reason:AUFSUmountHung Pattern:task umount\\.aufs:\\w+ blocked for more than \\w+ seconds\\.} {Type:permanent Condition:KernelDeadlock Reason:DockerHung Pattern:task docker:\\w+ blocked for more than \\w+ seconds\\.} {Type:permanent Condition:ReadonlyFilesystem Reason:FilesystemIsReadOnly Pattern:Remounting filesystem read-only}] EnableMetricsReporting:0xc0002ea260}\nI0111 15:56:28.145600 1 log_watchers.go:40] Use log watcher of plugin \"kmsg\"\nI0111 15:56:28.145788 1 log_monitor.go:79] Finish parsing log monitor config file /config/docker-monitor.json: {WatcherConfig:{Plugin:journald PluginConfig:map[source:dockerd] LogPath:/var/log/journal Lookback:5m Delay:} BufferSize:10 Source:docker-monitor DefaultConditions:[{Type:CorruptDockerOverlay2 Status: Transition:0001-01-01 00:00:00 +0000 UTC Reason:NoCorruptDockerOverlay2 Message:docker overlay2 is functioning properly}] Rules:[{Type:temporary Condition: Reason:CorruptDockerImage Pattern:Error trying v2 registry: failed to register layer: rename /var/lib/docker/image/(.+) /var/lib/docker/image/(.+): directory not empty.*} {Type:permanent Condition:CorruptDockerOverlay2 Reason:CorruptDockerOverlay2 Pattern:returned error: readlink /var/lib/docker/overlay2.*: invalid argument.*}] EnableMetricsReporting:0xc0002eac20}\nI0111 15:56:28.145816 1 log_watchers.go:40] Use log watcher of plugin \"journald\"\nI0111 15:56:28.145942 1 log_monitor.go:79] Finish parsing log monitor config file /config/systemd-monitor.json: {WatcherConfig:{Plugin:journald PluginConfig:map[source:systemd] LogPath:/var/log/journal Lookback:5m Delay:} BufferSize:10 Source:systemd-monitor DefaultConditions:[] Rules:[{Type:temporary Condition: Reason:KubeletStart Pattern:Started Kubernetes kubelet.} {Type:temporary Condition: Reason:DockerStart Pattern:Starting Docker Application Container Engine...} {Type:temporary Condition: Reason:ContainerdStart Pattern:Starting containerd container runtime...}] EnableMetricsReporting:0xc0002eaf3a}\nI0111 15:56:28.145968 1 log_watchers.go:40] Use log watcher of plugin \"journald\"\nI0111 
15:56:28.147617 1 k8s_exporter.go:54] Waiting for kube-apiserver to be ready (timeout 5m0s)...\nI0111 15:56:28.162728 1 node_problem_detector.go:60] K8s exporter started.\nI0111 15:56:28.162833 1 node_problem_detector.go:64] Prometheus exporter started.\nI0111 15:56:28.162846 1 custom_plugin_monitor.go:112] Start custom plugin monitor /config/kernel-monitor-counter.json\nI0111 15:56:28.162858 1 custom_plugin_monitor.go:112] Start custom plugin monitor /config/systemd-monitor-counter.json\nI0111 15:56:28.162865 1 log_monitor.go:111] Start log monitor /config/kernel-monitor.json\nI0111 15:56:28.162904 1 log_monitor.go:111] Start log monitor /config/docker-monitor.json\nI0111 15:56:28.234422 1 plugin.go:65] Start to run custom plugins\nI0111 15:56:28.235476 1 log_monitor.go:228] Initialize condition generated: [{Type:KernelDeadlock Status:False Transition:2020-01-11 15:56:28.235455445 +0000 UTC m=+0.288951129 Reason:KernelHasNoDeadlock Message:kernel has no deadlock} {Type:ReadonlyFilesystem Status:False Transition:2020-01-11 15:56:28.235455758 +0000 UTC m=+0.288951329 Reason:FilesystemIsNotReadOnly Message:Filesystem is not read-only}]\nI0111 15:56:28.235769 1 custom_plugin_monitor.go:285] Initialize condition generated: [{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]\nI0111 15:56:28.235880 1 plugin.go:65] Start to run custom plugins\nI0111 15:56:28.236655 1 custom_plugin_monitor.go:285] Initialize condition generated: [{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]\nI0111 15:56:28.237839 1 log_watcher.go:80] Start watching journald\nI0111 15:56:28.237860 1 log_monitor.go:111] Start log monitor /config/systemd-monitor.json\nI0111 15:56:28.239962 1 log_monitor.go:228] Initialize condition generated: [{Type:CorruptDockerOverlay2 Status:False Transition:2020-01-11 15:56:28.239944347 +0000 UTC m=+0.293439923 Reason:NoCorruptDockerOverlay2 Message:docker overlay2 is functioning properly}]\nI0111 15:56:28.337267 1 log_watcher.go:80] Start watching journald\nI0111 15:56:28.337307 1 system_stats_monitor.go:85] Start system stats monitor /config/system-stats-monitor.json\nI0111 15:56:28.337319 1 problem_detector.go:67] Problem detector started\nI0111 15:56:28.359940 1 log_monitor.go:228] Initialize condition generated: []\nE0111 15:56:28.438356 1 disk_collector.go:145] Error calling lsblk\nI0111 15:56:29.442063 1 log_monitor.go:153] New status generated: &{Source:systemd-monitor Events:[{Severity:warn Timestamp:2020-01-11 15:55:29.466738 +0000 UTC Reason:DockerStart Message:Starting Docker Application Container Engine...}] Conditions:[]}\nI0111 15:56:29.836910 1 log_monitor.go:153] New status generated: &{Source:systemd-monitor Events:[{Severity:warn Timestamp:2020-01-11 15:55:44.517592 +0000 UTC Reason:DockerStart Message:Starting Docker Application Container Engine...}] Conditions:[]}\nI0111 15:56:30.041724 1 log_monitor.go:153] New status generated: 
&{Source:systemd-monitor Events:[{Severity:warn Timestamp:2020-01-11 15:55:55.521248 +0000 UTC Reason:DockerStart Message:Starting Docker Application Container Engine...}] Conditions:[]}\nI0111 15:56:30.440712 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 15:56:30.536638 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 15:56:31.644868 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 15:56:31.644941 1 plugin.go:96] Finish running custom plugins\nI0111 15:56:31.645002 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 15:56:32.951453 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 15:56:32.951621 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 15:56:34.861647 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 15:56:34.861891 1 plugin.go:96] Finish running custom plugins\nI0111 15:56:34.861948 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 15:57:28.438312 1 disk_collector.go:145] Error calling lsblk\nE0111 15:58:28.438293 1 disk_collector.go:145] Error calling lsblk\nE0111 15:59:28.438302 1 disk_collector.go:145] Error calling lsblk\nE0111 16:00:28.438279 1 disk_collector.go:145] Error calling lsblk\nI0111 16:01:28.234515 1 plugin.go:65] Start to run custom plugins\nI0111 16:01:28.235977 1 plugin.go:65] Start to run custom plugins\nE0111 16:01:28.438292 1 disk_collector.go:145] Error calling lsblk\nI0111 16:01:29.942498 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:01:29.942775 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:01:30.341779 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:01:30.341852 1 plugin.go:96] Finish running custom plugins\nI0111 16:01:30.341897 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:01:31.837727 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:01:31.837867 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:01:33.645256 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:01:33.645322 1 plugin.go:96] Finish running custom plugins\nI0111 16:01:33.645379 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:02:28.438281 1 disk_collector.go:145] Error calling lsblk\nE0111 16:03:28.438284 1 disk_collector.go:145] Error calling lsblk\nE0111 16:04:28.438279 1 disk_collector.go:145] Error calling lsblk\nE0111 16:05:28.438277 1 disk_collector.go:145] Error calling lsblk\nI0111 16:06:28.234494 1 plugin.go:65] Start to run custom plugins\nI0111 16:06:28.235981 1 plugin.go:65] Start to run custom plugins\nE0111 16:06:28.438287 1 disk_collector.go:145] Error calling lsblk\nI0111 16:06:30.143805 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:06:30.143997 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:06:30.442276 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:06:30.442422 1 plugin.go:96] Finish running custom plugins\nI0111 16:06:30.442559 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:06:32.136505 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:06:32.136437 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:06:33.948138 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:06:33.948208 1 plugin.go:96] Finish running custom plugins\nI0111 16:06:33.948273 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:07:28.438249 1 disk_collector.go:145] Error calling lsblk\nE0111 16:08:28.438296 1 disk_collector.go:145] Error calling lsblk\nE0111 16:09:28.438368 1 disk_collector.go:145] Error calling lsblk\nE0111 16:10:28.438312 1 disk_collector.go:145] Error calling lsblk\nI0111 16:11:28.234498 1 plugin.go:65] Start to run custom plugins\nI0111 16:11:28.235939 1 plugin.go:65] Start to run custom plugins\nE0111 16:11:28.438316 1 disk_collector.go:145] Error calling lsblk\nI0111 16:11:30.136716 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:11:30.136810 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:11:30.345310 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:11:30.345707 1 plugin.go:96] Finish running custom plugins\nI0111 16:11:30.345582 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:11:31.787333 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:11:31.787504 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:11:33.736161 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:11:33.736226 1 plugin.go:96] Finish running custom plugins\nI0111 16:11:33.736278 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:12:28.438320 1 disk_collector.go:145] Error calling lsblk\nE0111 16:13:28.438294 1 disk_collector.go:145] Error calling lsblk\nE0111 16:14:28.438308 1 disk_collector.go:145] Error calling lsblk\nE0111 16:15:28.438300 1 disk_collector.go:145] Error calling lsblk\nI0111 16:16:28.234515 1 plugin.go:65] Start to run custom plugins\nI0111 16:16:28.235932 1 plugin.go:65] Start to run custom plugins\nE0111 16:16:28.438289 1 disk_collector.go:145] Error calling lsblk\nI0111 16:16:30.038059 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:16:30.038323 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:16:30.041789 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:16:30.041845 1 plugin.go:96] Finish running custom plugins\nI0111 16:16:30.041875 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:16:31.438561 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:16:31.438679 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:16:32.840960 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:16:32.841029 1 plugin.go:96] Finish running custom plugins\nI0111 16:16:32.841078 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:17:28.438302 1 disk_collector.go:145] Error calling lsblk\nE0111 16:18:28.438298 1 disk_collector.go:145] Error calling lsblk\nE0111 16:19:28.438295 1 disk_collector.go:145] Error calling lsblk\nE0111 16:20:28.438307 1 disk_collector.go:145] Error calling lsblk\nI0111 16:21:28.234528 1 plugin.go:65] Start to run custom plugins\nI0111 16:21:28.235917 1 plugin.go:65] Start to run custom plugins\nE0111 16:21:28.438263 1 disk_collector.go:145] Error calling lsblk\nI0111 16:21:29.838415 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:21:29.838506 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:21:29.838682 1 plugin.go:96] Finish running custom plugins\nI0111 16:21:30.040337 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:21:30.040457 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:21:31.347932 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:21:31.348227 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:21:32.738385 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:21:32.738505 1 plugin.go:96] Finish running custom plugins\nI0111 16:21:32.738486 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:22:28.438283 1 disk_collector.go:145] Error calling lsblk\nE0111 16:23:28.438278 1 disk_collector.go:145] Error calling lsblk\nE0111 16:24:28.438280 1 disk_collector.go:145] Error calling lsblk\nE0111 16:25:28.438282 1 disk_collector.go:145] Error calling lsblk\nI0111 16:26:28.234506 1 plugin.go:65] Start to run custom plugins\nI0111 16:26:28.235934 1 plugin.go:65] Start to run custom plugins\nE0111 16:26:28.438310 1 disk_collector.go:145] Error calling lsblk\nI0111 16:26:29.841415 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:26:29.841684 1 plugin.go:96] Finish running custom plugins\nI0111 16:26:29.841523 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:26:30.044292 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:26:30.044454 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:26:31.440803 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:26:31.440987 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:26:32.836162 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:26:32.836245 1 plugin.go:96] Finish running custom plugins\nI0111 16:26:32.836359 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:27:28.438298 1 disk_collector.go:145] Error calling lsblk\nE0111 16:28:28.438294 1 disk_collector.go:145] Error calling lsblk\nE0111 16:29:28.438279 1 disk_collector.go:145] Error calling lsblk\nE0111 16:30:28.438278 1 disk_collector.go:145] Error calling lsblk\nI0111 16:31:28.234501 1 plugin.go:65] Start to run custom plugins\nI0111 16:31:28.235966 1 plugin.go:65] Start to run custom plugins\nE0111 16:31:28.438285 1 disk_collector.go:145] Error calling lsblk\nI0111 16:31:29.843952 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:31:29.844116 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:31:29.844206 1 plugin.go:96] Finish running custom plugins\nI0111 16:31:30.041197 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:31:30.041357 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:31:31.439301 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:31:31.439349 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:31:32.835549 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:31:32.835616 1 plugin.go:96] Finish running custom plugins\nI0111 16:31:32.835693 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:32:28.438299 1 disk_collector.go:145] Error calling lsblk\nE0111 16:33:28.438360 1 disk_collector.go:145] Error calling lsblk\nE0111 16:34:28.438298 1 disk_collector.go:145] Error calling lsblk\nE0111 16:35:28.438286 1 disk_collector.go:145] Error calling lsblk\nI0111 16:36:28.234496 1 plugin.go:65] Start to run custom plugins\nI0111 16:36:28.235971 1 plugin.go:65] Start to run custom plugins\nE0111 16:36:28.438350 1 disk_collector.go:145] Error calling lsblk\nI0111 16:36:29.940687 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:36:29.940761 1 plugin.go:96] Finish running custom plugins\nI0111 16:36:29.940796 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:36:30.240047 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:36:30.240238 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:36:31.742286 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:36:31.742551 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:36:33.251266 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:36:33.251329 1 plugin.go:96] Finish running custom plugins\nI0111 16:36:33.251587 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:37:28.438277 1 disk_collector.go:145] Error calling lsblk\nE0111 16:38:28.438279 1 disk_collector.go:145] Error calling lsblk\nE0111 16:39:28.438277 1 disk_collector.go:145] Error calling lsblk\nE0111 16:40:28.438284 1 disk_collector.go:145] Error calling lsblk\nI0111 16:41:28.234495 1 plugin.go:65] Start to run custom plugins\nI0111 16:41:28.235957 1 plugin.go:65] Start to run custom plugins\nE0111 16:41:28.438301 1 disk_collector.go:145] Error calling lsblk\nI0111 16:41:30.141074 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:41:30.141324 1 plugin.go:96] Finish running custom plugins\nI0111 16:41:30.141224 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:41:30.236231 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:41:30.236388 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:41:31.739261 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:41:31.739308 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:41:33.237196 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:41:33.237263 1 plugin.go:96] Finish running custom plugins\nI0111 16:41:33.237319 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:42:28.438317 1 disk_collector.go:145] Error calling lsblk\nE0111 16:43:28.438313 1 disk_collector.go:145] Error calling lsblk\nE0111 16:44:28.438297 1 disk_collector.go:145] Error calling lsblk\nE0111 16:45:28.438277 1 disk_collector.go:145] Error calling lsblk\nI0111 16:46:28.234494 1 plugin.go:65] Start to run custom plugins\nI0111 16:46:28.235962 1 plugin.go:65] Start to run custom plugins\nE0111 16:46:28.438291 1 disk_collector.go:145] Error calling lsblk\nI0111 16:46:29.939049 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:46:29.939120 1 plugin.go:96] Finish running custom plugins\nI0111 16:46:29.939154 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:46:30.236171 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:46:30.236440 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:46:31.737895 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:46:31.738080 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:46:33.238681 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:46:33.238740 1 plugin.go:96] Finish running custom plugins\nI0111 16:46:33.238920 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:47:28.438302 1 disk_collector.go:145] Error calling lsblk\nE0111 16:48:28.438303 1 disk_collector.go:145] Error calling lsblk\nE0111 16:49:28.438302 1 disk_collector.go:145] Error calling lsblk\nE0111 16:50:28.438291 1 disk_collector.go:145] Error calling lsblk\nI0111 16:51:28.234502 1 plugin.go:65] Start to run custom plugins\nI0111 16:51:28.235943 1 plugin.go:65] Start to run custom plugins\nE0111 16:51:28.438313 1 disk_collector.go:145] Error calling lsblk\nI0111 16:51:29.839739 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:51:29.839807 1 plugin.go:96] Finish running custom plugins\nI0111 16:51:29.839947 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:51:30.143318 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:51:30.143503 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:51:31.640660 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:51:31.640847 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:51:33.050895 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:51:33.050967 1 plugin.go:96] Finish running custom plugins\nI0111 16:51:33.051048 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 16:52:28.438376 1 disk_collector.go:145] Error calling lsblk\nE0111 16:53:28.438284 1 disk_collector.go:145] Error calling lsblk\nE0111 16:54:28.438300 1 disk_collector.go:145] Error calling lsblk\nE0111 16:55:28.438286 1 disk_collector.go:145] Error calling lsblk\nI0111 16:56:28.234511 1 plugin.go:65] Start to run custom plugins\nI0111 16:56:28.235938 1 plugin.go:65] Start to run custom plugins\nE0111 16:56:28.438298 1 disk_collector.go:145] Error calling lsblk\nI0111 16:56:30.040198 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 16:56:30.040271 1 plugin.go:96] Finish running custom plugins\nI0111 16:56:30.040326 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 16:56:30.139542 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 16:56:30.139781 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:56:31.535788 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 16:56:31.535963 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:56:32.937465 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 16:56:32.937567 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 16:56:32.937740 1 plugin.go:96] Finish running custom plugins\nE0111 16:57:28.438312 1 disk_collector.go:145] Error calling lsblk\nE0111 16:58:28.438292 1 disk_collector.go:145] Error calling lsblk\nE0111 16:59:28.438338 1 disk_collector.go:145] Error calling lsblk\nE0111 17:00:28.438292 1 disk_collector.go:145] Error calling lsblk\nI0111 17:01:28.234500 1 plugin.go:65] Start to run custom plugins\nI0111 17:01:28.235966 1 plugin.go:65] Start to run custom plugins\nE0111 17:01:28.438297 1 disk_collector.go:145] Error calling lsblk\nI0111 17:01:29.843701 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:01:29.843769 1 plugin.go:96] Finish running custom plugins\nI0111 17:01:29.843947 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:01:30.136458 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:01:30.136706 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:01:31.540303 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:01:31.540583 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:01:32.937893 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:01:32.938028 1 plugin.go:96] Finish running custom plugins\nI0111 17:01:32.937946 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:02:28.438305 1 disk_collector.go:145] Error calling lsblk\nE0111 17:03:28.438299 1 disk_collector.go:145] Error calling lsblk\nE0111 17:04:28.438301 1 disk_collector.go:145] Error calling lsblk\nE0111 17:05:28.438325 1 disk_collector.go:145] Error calling lsblk\nI0111 17:06:28.234521 1 plugin.go:65] Start to run custom plugins\nI0111 17:06:28.235923 1 plugin.go:65] Start to run custom plugins\nE0111 17:06:28.438299 1 disk_collector.go:145] Error calling lsblk\nI0111 17:06:29.140843 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:06:29.141021 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:06:29.844019 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:06:29.844137 1 plugin.go:96] Finish running custom plugins\nI0111 17:06:29.844191 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:06:30.635740 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:06:30.636000 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:06:32.043684 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:06:32.043741 1 plugin.go:96] Finish running custom plugins\nI0111 17:06:32.043792 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:07:28.438276 1 disk_collector.go:145] Error calling lsblk\nE0111 17:08:28.438300 1 disk_collector.go:145] Error calling lsblk\nE0111 17:09:28.438357 1 disk_collector.go:145] Error calling lsblk\nE0111 17:10:28.438285 1 disk_collector.go:145] Error calling lsblk\nE0111 17:11:22.528917 1 manager.go:160] failed to update node conditions: Patch https://100.104.0.1:443/api/v1/nodes/ip-10-250-7-77.ec2.internal/status: http2: server sent GOAWAY and closed the connection; LastStreamID=137, ErrCode=NO_ERROR, debug=\"\"\nI0111 17:11:28.234514 1 plugin.go:65] Start to run custom plugins\nI0111 17:11:28.235938 1 plugin.go:65] Start to run custom plugins\nE0111 17:11:28.438311 1 disk_collector.go:145] Error calling lsblk\nI0111 17:11:29.943198 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:11:29.943352 1 plugin.go:96] Finish running custom plugins\nI0111 17:11:29.943287 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:11:30.140388 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:11:30.140613 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:11:31.536408 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:11:31.536709 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:11:32.936918 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:11:32.936976 1 plugin.go:96] Finish running custom plugins\nI0111 17:11:32.937154 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:11:35.240231 1 manager.go:160] failed to update node conditions: Patch https://100.104.0.1:443/api/v1/nodes/ip-10-250-7-77.ec2.internal/status: net/http: TLS handshake timeout\nE0111 17:12:28.438332 1 disk_collector.go:145] Error calling lsblk\nE0111 17:13:28.438300 1 disk_collector.go:145] Error calling lsblk\nE0111 17:14:28.438299 1 disk_collector.go:145] Error calling lsblk\nE0111 17:15:28.438263 1 disk_collector.go:145] Error calling lsblk\nI0111 17:16:28.234495 1 plugin.go:65] Start to run custom plugins\nI0111 17:16:28.235930 1 plugin.go:65] Start to run custom plugins\nE0111 17:16:28.438293 1 disk_collector.go:145] Error calling lsblk\nI0111 17:16:29.939600 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:16:29.939690 1 plugin.go:96] Finish running custom plugins\nI0111 17:16:29.939837 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:16:30.137401 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:16:30.137539 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:16:31.537420 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:16:31.537622 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:16:32.843203 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:16:32.843262 1 plugin.go:96] Finish running custom plugins\nI0111 17:16:32.843431 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:17:28.438280 1 disk_collector.go:145] Error calling lsblk\nE0111 17:18:28.438280 1 disk_collector.go:145] Error calling lsblk\nE0111 17:19:28.438301 1 disk_collector.go:145] Error calling lsblk\nE0111 17:20:28.438308 1 disk_collector.go:145] Error calling lsblk\nI0111 17:21:28.234496 1 plugin.go:65] Start to run custom plugins\nI0111 17:21:28.235937 1 plugin.go:65] Start to run custom plugins\nE0111 17:21:28.438306 1 disk_collector.go:145] Error calling lsblk\nI0111 17:21:29.935793 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:21:29.935861 1 plugin.go:96] Finish running custom plugins\nI0111 17:21:29.935893 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:21:30.135312 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:21:30.135501 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:21:31.444686 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:21:31.444985 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:21:32.838064 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:21:32.838121 1 plugin.go:96] Finish running custom plugins\nI0111 17:21:32.838165 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:22:28.438281 1 disk_collector.go:145] Error calling lsblk\nE0111 17:23:28.438293 1 disk_collector.go:145] Error calling lsblk\nE0111 17:24:28.438306 1 disk_collector.go:145] Error calling lsblk\nE0111 17:25:28.438305 1 disk_collector.go:145] Error calling lsblk\nI0111 17:26:28.234502 1 plugin.go:65] Start to run custom plugins\nI0111 17:26:28.235961 1 plugin.go:65] Start to run custom plugins\nE0111 17:26:28.438293 1 disk_collector.go:145] Error calling lsblk\nI0111 17:26:30.040216 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:26:30.040300 1 plugin.go:96] Finish running custom plugins\nI0111 17:26:30.040345 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:26:30.041753 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:26:30.041899 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:26:31.439739 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:26:31.440026 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:26:32.744082 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:26:32.744148 1 plugin.go:96] Finish running custom plugins\nI0111 17:26:32.744302 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:27:28.438285 1 disk_collector.go:145] Error calling lsblk\nE0111 17:28:28.438290 1 disk_collector.go:145] Error calling lsblk\nE0111 17:29:28.438280 1 disk_collector.go:145] Error calling lsblk\nE0111 17:30:28.438281 1 disk_collector.go:145] Error calling lsblk\nI0111 17:31:28.234509 1 plugin.go:65] Start to run custom plugins\nI0111 17:31:28.236004 1 plugin.go:65] Start to run custom plugins\nE0111 17:31:28.438309 1 disk_collector.go:145] Error calling lsblk\nI0111 17:31:29.944494 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:31:29.944553 1 plugin.go:96] Finish running custom plugins\nI0111 17:31:29.944589 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:31:30.139159 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:31:30.139458 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:31:31.535665 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:31:31.535847 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:31:32.844604 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:31:32.844694 1 plugin.go:96] Finish running custom plugins\nI0111 17:31:32.844745 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:32:28.438291 1 disk_collector.go:145] Error calling lsblk\nE0111 17:33:28.438289 1 disk_collector.go:145] Error calling lsblk\nE0111 17:34:28.438283 1 disk_collector.go:145] Error calling lsblk\nE0111 17:35:28.438283 1 disk_collector.go:145] Error calling lsblk\nI0111 17:36:28.234516 1 plugin.go:65] Start to run custom plugins\nI0111 17:36:28.235965 1 plugin.go:65] Start to run custom plugins\nE0111 17:36:28.438280 1 disk_collector.go:145] Error calling lsblk\nI0111 17:36:29.940726 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:36:29.940819 1 plugin.go:96] Finish running custom plugins\nI0111 17:36:29.940859 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:36:30.140569 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:36:30.140879 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:36:31.536749 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:36:31.537015 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:36:32.938266 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:36:32.938334 1 plugin.go:96] Finish running custom plugins\nI0111 17:36:32.938386 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:37:28.438309 1 disk_collector.go:145] Error calling lsblk\nE0111 17:38:28.438305 1 disk_collector.go:145] Error calling lsblk\nE0111 17:39:28.438284 1 disk_collector.go:145] Error calling lsblk\nE0111 17:40:28.438302 1 disk_collector.go:145] Error calling lsblk\nI0111 17:41:28.234498 1 plugin.go:65] Start to run custom plugins\nI0111 17:41:28.235947 1 plugin.go:65] Start to run custom plugins\nE0111 17:41:28.438283 1 disk_collector.go:145] Error calling lsblk\nI0111 17:41:29.943005 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:41:29.943081 1 plugin.go:96] Finish running custom plugins\nI0111 17:41:29.943117 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:41:30.138071 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:41:30.138234 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:41:31.539980 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:41:31.540143 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:41:32.845532 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:41:32.845954 1 plugin.go:96] Finish running custom plugins\nI0111 17:41:32.845875 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:42:28.438288 1 disk_collector.go:145] Error calling lsblk\nE0111 17:43:28.438286 1 disk_collector.go:145] Error calling lsblk\nE0111 17:44:28.438279 1 disk_collector.go:145] Error calling lsblk\nE0111 17:45:28.438293 1 disk_collector.go:145] Error calling lsblk\nI0111 17:46:28.234502 1 plugin.go:65] Start to run custom plugins\nI0111 17:46:28.235935 1 plugin.go:65] Start to run custom plugins\nE0111 17:46:28.438296 1 disk_collector.go:145] Error calling lsblk\nI0111 17:46:29.841121 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:46:29.841194 1 plugin.go:96] Finish running custom plugins\nI0111 17:46:29.841230 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:46:30.044985 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:46:30.045176 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:46:30.442615 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:46:30.442871 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:46:31.843208 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:46:31.843267 1 plugin.go:96] Finish running custom plugins\nI0111 17:46:31.843434 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:47:28.438304 1 disk_collector.go:145] Error calling lsblk\nE0111 17:48:28.438296 1 disk_collector.go:145] Error calling lsblk\nE0111 17:49:28.438298 1 disk_collector.go:145] Error calling lsblk\nE0111 17:50:28.438312 1 disk_collector.go:145] Error calling lsblk\nI0111 17:51:28.234520 1 plugin.go:65] Start to run custom plugins\nI0111 17:51:28.235965 1 plugin.go:65] Start to run custom plugins\nE0111 17:51:28.438290 1 disk_collector.go:145] Error calling lsblk\nI0111 17:51:29.942510 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:51:29.942594 1 plugin.go:96] Finish running custom plugins\nI0111 17:51:29.942652 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:51:30.143920 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:51:30.144129 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:51:31.543247 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:51:31.543489 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:51:32.938284 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:51:32.938384 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:51:32.938585 1 plugin.go:96] Finish running custom plugins\nE0111 17:52:28.438366 1 disk_collector.go:145] Error calling lsblk\nE0111 17:53:28.438287 1 disk_collector.go:145] Error calling lsblk\nE0111 17:54:28.438302 1 disk_collector.go:145] Error calling lsblk\nE0111 17:55:28.438303 1 disk_collector.go:145] Error calling lsblk\nI0111 17:56:28.234500 1 plugin.go:65] Start to run custom plugins\nI0111 17:56:28.235980 1 plugin.go:65] Start to run custom plugins\nE0111 17:56:28.438330 1 disk_collector.go:145] Error calling lsblk\nI0111 17:56:29.745098 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 17:56:29.745167 1 plugin.go:96] Finish running custom plugins\nI0111 17:56:29.745202 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 17:56:30.040185 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 17:56:30.040284 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:56:31.442348 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 17:56:31.442547 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 17:56:32.842544 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 17:56:32.842603 1 plugin.go:96] Finish running custom plugins\nI0111 17:56:32.842762 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 17:57:28.438301 1 disk_collector.go:145] Error calling lsblk\nE0111 17:58:28.438293 1 disk_collector.go:145] Error calling lsblk\nE0111 17:59:28.438302 1 disk_collector.go:145] Error calling lsblk\nE0111 18:00:28.438319 1 disk_collector.go:145] Error calling lsblk\nI0111 18:01:28.234536 1 plugin.go:65] Start to run custom plugins\nI0111 18:01:28.235932 1 plugin.go:65] Start to run custom plugins\nE0111 18:01:28.438296 1 disk_collector.go:145] Error calling lsblk\nI0111 18:01:29.840557 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:01:29.840650 1 plugin.go:96] Finish running custom plugins\nI0111 18:01:29.840818 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:01:30.046066 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:01:30.046160 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:01:31.437776 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:01:31.437934 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:01:32.841005 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:01:32.840934 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:01:32.841174 1 plugin.go:96] Finish running custom plugins\nE0111 18:02:28.438293 1 disk_collector.go:145] Error calling lsblk\nE0111 18:03:28.438309 1 disk_collector.go:145] Error calling lsblk\nE0111 18:04:28.438306 1 disk_collector.go:145] Error calling lsblk\nE0111 18:05:28.438304 1 disk_collector.go:145] Error calling lsblk\nI0111 18:06:28.234516 1 plugin.go:65] Start to run custom plugins\nI0111 18:06:28.236026 1 plugin.go:65] Start to run custom plugins\nE0111 18:06:28.438295 1 disk_collector.go:145] Error calling lsblk\nI0111 18:06:29.938737 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:06:29.938814 1 plugin.go:96] Finish running custom plugins\nI0111 18:06:29.938850 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:06:30.137184 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:06:30.137419 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:06:31.541817 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:06:31.541919 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:06:32.941759 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:06:32.941818 1 plugin.go:96] Finish running custom plugins\nI0111 18:06:32.942004 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:07:28.438284 1 disk_collector.go:145] Error calling lsblk\nE0111 18:08:28.438307 1 disk_collector.go:145] Error calling lsblk\nE0111 18:09:28.438305 1 disk_collector.go:145] Error calling lsblk\nE0111 18:10:28.438318 1 disk_collector.go:145] Error calling lsblk\nI0111 18:11:28.234504 1 plugin.go:65] Start to run custom plugins\nI0111 18:11:28.235941 1 plugin.go:65] Start to run custom plugins\nE0111 18:11:28.438285 1 disk_collector.go:145] Error calling lsblk\nI0111 18:11:29.845251 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:11:29.845318 1 plugin.go:96] Finish running custom plugins\nI0111 18:11:29.845385 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:11:30.136480 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:11:30.136709 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:11:31.445209 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:11:31.445427 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:11:32.839024 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:11:32.839093 1 plugin.go:96] Finish running custom plugins\nI0111 18:11:32.839147 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:12:28.438294 1 disk_collector.go:145] Error calling lsblk\nE0111 18:13:28.438283 1 disk_collector.go:145] Error calling lsblk\nE0111 18:14:28.438295 1 disk_collector.go:145] Error calling lsblk\nE0111 18:15:28.438279 1 disk_collector.go:145] Error calling lsblk\nI0111 18:16:28.234506 1 plugin.go:65] Start to run custom plugins\nI0111 18:16:28.235948 1 plugin.go:65] Start to run custom plugins\nE0111 18:16:28.438316 1 disk_collector.go:145] Error calling lsblk\nI0111 18:16:29.836728 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:16:29.836803 1 plugin.go:96] Finish running custom plugins\nI0111 18:16:29.836861 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:16:30.040293 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:16:30.040425 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:16:31.443352 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:16:31.443580 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:16:32.746279 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:16:32.746345 1 plugin.go:96] Finish running custom plugins\nI0111 18:16:32.746494 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:17:28.438280 1 disk_collector.go:145] Error calling lsblk\nE0111 18:18:28.438282 1 disk_collector.go:145] Error calling lsblk\nE0111 18:19:28.438284 1 disk_collector.go:145] Error calling lsblk\nE0111 18:20:28.438304 1 disk_collector.go:145] Error calling lsblk\nI0111 18:21:28.234499 1 plugin.go:65] Start to run custom plugins\nI0111 18:21:28.235965 1 plugin.go:65] Start to run custom plugins\nE0111 18:21:28.438309 1 disk_collector.go:145] Error calling lsblk\nI0111 18:21:29.838360 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:21:29.838420 1 plugin.go:96] Finish running custom plugins\nI0111 18:21:29.838444 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:21:30.038502 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:21:30.038697 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:21:31.440235 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:21:31.440458 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:21:32.838224 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:21:32.838339 1 plugin.go:96] Finish running custom plugins\nI0111 18:21:32.838322 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:22:28.438296 1 disk_collector.go:145] Error calling lsblk\nE0111 18:23:28.438282 1 disk_collector.go:145] Error calling lsblk\nE0111 18:24:28.438285 1 disk_collector.go:145] Error calling lsblk\nE0111 18:25:28.438297 1 disk_collector.go:145] Error calling lsblk\nI0111 18:26:28.234532 1 plugin.go:65] Start to run custom plugins\nI0111 18:26:28.235971 1 plugin.go:65] Start to run custom plugins\nE0111 18:26:28.438283 1 disk_collector.go:145] Error calling lsblk\nI0111 18:26:29.937240 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:26:29.937301 1 plugin.go:96] Finish running custom plugins\nI0111 18:26:29.937428 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:26:30.135495 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:26:30.135755 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:26:31.550706 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:26:31.550945 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:26:32.936444 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:26:32.936579 1 plugin.go:96] Finish running custom plugins\nI0111 18:26:32.936498 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:27:28.438412 1 disk_collector.go:145] Error calling lsblk\nE0111 18:28:28.438309 1 disk_collector.go:145] Error calling lsblk\nE0111 18:29:28.438291 1 disk_collector.go:145] Error calling lsblk\nE0111 18:30:28.438307 1 disk_collector.go:145] Error calling lsblk\nI0111 18:31:28.234500 1 plugin.go:65] Start to run custom plugins\nI0111 18:31:28.235927 1 plugin.go:65] Start to run custom plugins\nE0111 18:31:28.438300 1 disk_collector.go:145] Error calling lsblk\nI0111 18:31:29.937485 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:31:29.937564 1 plugin.go:96] Finish running custom plugins\nI0111 18:31:29.937606 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:31:30.138807 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:31:30.138986 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:31:31.541242 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:31:31.541416 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:31:32.935948 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:31:32.936061 1 plugin.go:96] Finish running custom plugins\nI0111 18:31:32.936050 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:32:28.438290 1 disk_collector.go:145] Error calling lsblk\nE0111 18:33:28.438302 1 disk_collector.go:145] Error calling lsblk\nE0111 18:34:28.438289 1 disk_collector.go:145] Error calling lsblk\nE0111 18:35:28.438296 1 disk_collector.go:145] Error calling lsblk\nI0111 18:36:28.234505 1 plugin.go:65] Start to run custom plugins\nI0111 18:36:28.235933 1 plugin.go:65] Start to run custom plugins\nE0111 18:36:28.438309 1 disk_collector.go:145] Error calling lsblk\nI0111 18:36:29.937915 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:36:29.937978 1 plugin.go:96] Finish running custom plugins\nI0111 18:36:29.938013 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:36:30.045108 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:36:30.045216 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:36:31.436106 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:36:31.436318 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:36:32.837430 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:36:32.837547 1 plugin.go:96] Finish running custom plugins\nI0111 18:36:32.837502 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:37:28.438264 1 disk_collector.go:145] Error calling lsblk\nE0111 18:38:28.438283 1 disk_collector.go:145] Error calling lsblk\nE0111 18:39:28.438283 1 disk_collector.go:145] Error calling lsblk\nE0111 18:40:28.438303 1 disk_collector.go:145] Error calling lsblk\nI0111 18:41:28.234481 1 plugin.go:65] Start to run custom plugins\nI0111 18:41:28.235962 1 plugin.go:65] Start to run custom plugins\nE0111 18:41:28.438276 1 disk_collector.go:145] Error calling lsblk\nI0111 18:41:30.038947 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:41:30.039097 1 plugin.go:96] Finish running custom plugins\nI0111 18:41:30.039034 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:41:30.236321 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:41:30.236501 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:41:31.637965 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:41:31.638013 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:41:32.948080 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:41:32.948144 1 plugin.go:96] Finish running custom plugins\nI0111 18:41:32.948193 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:42:28.438294 1 disk_collector.go:145] Error calling lsblk\nE0111 18:43:28.438295 1 disk_collector.go:145] Error calling lsblk\nE0111 18:44:28.438319 1 disk_collector.go:145] Error calling lsblk\nE0111 18:45:28.438383 1 disk_collector.go:145] Error calling lsblk\nI0111 18:46:28.234511 1 plugin.go:65] Start to run custom plugins\nI0111 18:46:28.235965 1 plugin.go:65] Start to run custom plugins\nE0111 18:46:28.438285 1 disk_collector.go:145] Error calling lsblk\nI0111 18:46:29.940020 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:46:29.940087 1 plugin.go:96] Finish running custom plugins\nI0111 18:46:29.940121 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:46:30.045978 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:46:30.046298 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:46:31.287838 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:46:31.287933 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:46:32.644066 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:46:32.644180 1 plugin.go:96] Finish running custom plugins\nI0111 18:46:32.644132 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:47:28.438292 1 disk_collector.go:145] Error calling lsblk\nE0111 18:48:28.438552 1 disk_collector.go:145] Error calling lsblk\nE0111 18:49:28.438318 1 disk_collector.go:145] Error calling lsblk\nE0111 18:50:28.438300 1 disk_collector.go:145] Error calling lsblk\nI0111 18:51:28.234532 1 plugin.go:65] Start to run custom plugins\nI0111 18:51:28.235932 1 plugin.go:65] Start to run custom plugins\nE0111 18:51:28.438292 1 disk_collector.go:145] Error calling lsblk\nI0111 18:51:29.842556 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:51:29.842761 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:51:29.842863 1 plugin.go:96] Finish running custom plugins\nI0111 18:51:30.940128 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:51:30.940415 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:51:33.244553 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:51:33.244805 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:51:35.037864 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:51:35.037925 1 plugin.go:96] Finish running custom plugins\nI0111 18:51:35.038031 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:52:28.438358 1 disk_collector.go:145] Error calling lsblk\nE0111 18:53:28.438302 1 disk_collector.go:145] Error calling lsblk\nE0111 18:54:28.438303 1 disk_collector.go:145] Error calling lsblk\nE0111 18:55:28.438280 1 disk_collector.go:145] Error calling lsblk\nI0111 18:56:28.234520 1 plugin.go:65] Start to run custom plugins\nI0111 18:56:28.235970 1 plugin.go:65] Start to run custom plugins\nE0111 18:56:28.438300 1 disk_collector.go:145] Error calling lsblk\nI0111 18:56:29.936246 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 18:56:29.936319 1 plugin.go:96] Finish running custom plugins\nI0111 18:56:29.936356 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 18:56:30.842084 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 18:56:30.842181 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:56:33.048571 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 18:56:33.048705 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 18:56:35.337771 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 18:56:35.337846 1 plugin.go:96] Finish running custom plugins\nI0111 18:56:35.338023 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 18:57:28.438282 1 disk_collector.go:145] Error calling lsblk\nE0111 18:58:28.438297 1 disk_collector.go:145] Error calling lsblk\nE0111 18:59:28.438303 1 disk_collector.go:145] Error calling lsblk\nE0111 19:00:28.438298 1 disk_collector.go:145] Error calling lsblk\nI0111 19:01:28.234494 1 plugin.go:65] Start to run custom plugins\nI0111 19:01:28.235982 1 plugin.go:65] Start to run custom plugins\nE0111 19:01:28.438311 1 disk_collector.go:145] Error calling lsblk\nI0111 19:01:29.945118 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:01:29.945179 1 plugin.go:96] Finish running custom plugins\nI0111 19:01:29.945216 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:01:30.937351 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:01:30.937679 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:01:33.236088 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:01:33.236459 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:01:35.442909 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:01:35.442985 1 plugin.go:96] Finish running custom plugins\nI0111 19:01:35.443037 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:02:28.438287 1 disk_collector.go:145] Error calling lsblk\nE0111 19:03:28.438285 1 disk_collector.go:145] Error calling lsblk\nE0111 19:04:28.438297 1 disk_collector.go:145] Error calling lsblk\nE0111 19:05:28.438292 1 disk_collector.go:145] Error calling lsblk\nI0111 19:06:28.234528 1 plugin.go:65] Start to run custom plugins\nI0111 19:06:28.235982 1 plugin.go:65] Start to run custom plugins\nE0111 19:06:28.438317 1 disk_collector.go:145] Error calling lsblk\nI0111 19:06:30.036480 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:06:30.036545 1 plugin.go:96] Finish running custom plugins\nI0111 19:06:30.036585 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:06:30.846085 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:06:30.846353 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:06:33.139777 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:06:33.139871 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:06:35.345907 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:06:35.345975 1 plugin.go:96] Finish running custom plugins\nI0111 19:06:35.346026 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:07:28.438290 1 disk_collector.go:145] Error calling lsblk\nE0111 19:08:28.438303 1 disk_collector.go:145] Error calling lsblk\nE0111 19:09:28.438308 1 disk_collector.go:145] Error calling lsblk\nE0111 19:10:28.438297 1 disk_collector.go:145] Error calling lsblk\nI0111 19:11:28.234495 1 plugin.go:65] Start to run custom plugins\nI0111 19:11:28.236029 1 plugin.go:65] Start to run custom plugins\nE0111 19:11:28.438304 1 disk_collector.go:145] Error calling lsblk\nI0111 19:11:29.844748 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:11:29.844906 1 plugin.go:96] Finish running custom plugins\nI0111 19:11:29.845003 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:11:30.438115 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:11:30.438340 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:11:32.236076 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:11:32.236125 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:11:33.944292 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:11:33.944359 1 plugin.go:96] Finish running custom plugins\nI0111 19:11:33.944504 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:12:28.438282 1 disk_collector.go:145] Error calling lsblk\nE0111 19:13:28.438279 1 disk_collector.go:145] Error calling lsblk\nE0111 19:14:28.438303 1 disk_collector.go:145] Error calling lsblk\nE0111 19:15:28.438303 1 disk_collector.go:145] Error calling lsblk\nI0111 19:16:28.234508 1 plugin.go:65] Start to run custom plugins\nI0111 19:16:28.235964 1 plugin.go:65] Start to run custom plugins\nE0111 19:16:28.438312 1 disk_collector.go:145] Error calling lsblk\nI0111 19:16:29.941136 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:16:29.941415 1 plugin.go:96] Finish running custom plugins\nI0111 19:16:29.941311 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:16:30.451088 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:16:30.451329 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:16:32.142085 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:16:32.142388 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:16:33.843983 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:16:33.844043 1 plugin.go:96] Finish running custom plugins\nI0111 19:16:33.844078 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:17:28.438300 1 disk_collector.go:145] Error calling lsblk\nE0111 19:18:28.438558 1 disk_collector.go:145] Error calling lsblk\nE0111 19:19:28.438300 1 disk_collector.go:145] Error calling lsblk\nE0111 19:20:28.438356 1 disk_collector.go:145] Error calling lsblk\nI0111 19:21:28.234521 1 plugin.go:65] Start to run custom plugins\nI0111 19:21:28.235968 1 plugin.go:65] Start to run custom plugins\nE0111 19:21:28.438302 1 disk_collector.go:145] Error calling lsblk\nI0111 19:21:30.040901 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:21:30.040966 1 plugin.go:96] Finish running custom plugins\nI0111 19:21:30.041003 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:21:30.441354 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:21:30.441568 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:21:32.241292 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:21:32.241518 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:21:33.944925 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:21:33.944990 1 plugin.go:96] Finish running custom plugins\nI0111 19:21:33.945040 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:22:28.438290 1 disk_collector.go:145] Error calling lsblk\nE0111 19:23:28.438275 1 disk_collector.go:145] Error calling lsblk\nE0111 19:24:28.438308 1 disk_collector.go:145] Error calling lsblk\nE0111 19:25:28.438371 1 disk_collector.go:145] Error calling lsblk\nI0111 19:26:28.234523 1 plugin.go:65] Start to run custom plugins\nI0111 19:26:28.235990 1 plugin.go:65] Start to run custom plugins\nE0111 19:26:28.438307 1 disk_collector.go:145] Error calling lsblk\nI0111 19:26:29.840154 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:26:29.840239 1 plugin.go:96] Finish running custom plugins\nI0111 19:26:29.840375 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:26:30.438274 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:26:30.438582 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:26:32.240050 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:26:32.240385 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:26:33.946985 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:26:33.947045 1 plugin.go:96] Finish running custom plugins\nI0111 19:26:33.947103 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:27:28.438299 1 disk_collector.go:145] Error calling lsblk\nE0111 19:28:28.438303 1 disk_collector.go:145] Error calling lsblk\nE0111 19:29:28.438287 1 disk_collector.go:145] Error calling lsblk\nE0111 19:30:28.438280 1 disk_collector.go:145] Error calling lsblk\nI0111 19:31:28.234506 1 plugin.go:65] Start to run custom plugins\nI0111 19:31:28.235932 1 plugin.go:65] Start to run custom plugins\nE0111 19:31:28.438305 1 disk_collector.go:145] Error calling lsblk\nI0111 19:31:30.038819 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:31:30.038884 1 plugin.go:96] Finish running custom plugins\nI0111 19:31:30.038962 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:31:30.136442 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:31:30.136684 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:31:31.444968 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:31:31.445060 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:31:32.840331 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:31:32.840398 1 plugin.go:96] Finish running custom plugins\nI0111 19:31:32.840450 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:32:28.438289 1 disk_collector.go:145] Error calling lsblk\nE0111 19:33:28.438299 1 disk_collector.go:145] Error calling lsblk\nE0111 19:34:28.438300 1 disk_collector.go:145] Error calling lsblk\nE0111 19:35:28.438308 1 disk_collector.go:145] Error calling lsblk\nI0111 19:36:28.234507 1 plugin.go:65] Start to run custom plugins\nI0111 19:36:28.235966 1 plugin.go:65] Start to run custom plugins\nE0111 19:36:28.438283 1 disk_collector.go:145] Error calling lsblk\nI0111 19:36:29.947082 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:36:29.947144 1 plugin.go:96] Finish running custom plugins\nI0111 19:36:29.947172 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:36:30.437983 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:36:30.438215 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:36:31.503272 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:36:31.503541 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:36:32.144312 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:36:32.144371 1 plugin.go:96] Finish running custom plugins\nI0111 19:36:32.144423 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:37:28.438267 1 disk_collector.go:145] Error calling lsblk\nE0111 19:38:28.438307 1 disk_collector.go:145] Error calling lsblk\nE0111 19:39:28.438323 1 disk_collector.go:145] Error calling lsblk\nE0111 19:40:28.438288 1 disk_collector.go:145] Error calling lsblk\nI0111 19:41:28.234516 1 plugin.go:65] Start to run custom plugins\nI0111 19:41:28.235994 1 plugin.go:65] Start to run custom plugins\nE0111 19:41:28.438318 1 disk_collector.go:145] Error calling lsblk\nI0111 19:41:30.038097 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:41:30.038388 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:41:30.041769 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:41:30.041825 1 plugin.go:96] Finish running custom plugins\nI0111 19:41:30.041856 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:41:32.239271 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:41:32.239569 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:41:33.438751 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:41:33.438832 1 plugin.go:96] Finish running custom plugins\nI0111 19:41:33.438887 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:42:28.438296 1 disk_collector.go:145] Error calling lsblk\nE0111 19:43:28.438284 1 disk_collector.go:145] Error calling lsblk\nE0111 19:44:28.438307 1 disk_collector.go:145] Error calling lsblk\nE0111 19:45:28.438280 1 disk_collector.go:145] Error calling lsblk\nI0111 19:46:28.234498 1 plugin.go:65] Start to run custom plugins\nI0111 19:46:28.235970 1 plugin.go:65] Start to run custom plugins\nE0111 19:46:28.438315 1 disk_collector.go:145] Error calling lsblk\nI0111 19:46:29.838744 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:46:29.838820 1 plugin.go:96] Finish running custom plugins\nI0111 19:46:29.838858 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:46:30.840609 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:46:30.840739 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:46:33.035689 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:46:33.035789 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:46:35.240570 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:46:35.240669 1 plugin.go:96] Finish running custom plugins\nI0111 19:46:35.240718 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:47:28.438313 1 disk_collector.go:145] Error calling lsblk\nE0111 19:48:28.438328 1 disk_collector.go:145] Error calling lsblk\nE0111 19:49:28.438293 1 disk_collector.go:145] Error calling lsblk\nE0111 19:50:28.438303 1 disk_collector.go:145] Error calling lsblk\nI0111 19:51:28.234519 1 plugin.go:65] Start to run custom plugins\nI0111 19:51:28.235992 1 plugin.go:65] Start to run custom plugins\nE0111 19:51:28.438294 1 disk_collector.go:145] Error calling lsblk\nI0111 19:51:30.045644 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:51:30.045724 1 plugin.go:96] Finish running custom plugins\nI0111 19:51:30.045762 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:51:31.536030 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:51:31.536302 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:51:34.339545 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:51:34.339678 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:51:37.239541 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:51:37.239604 1 plugin.go:96] Finish running custom plugins\nI0111 19:51:37.239689 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:52:28.438362 1 disk_collector.go:145] Error calling lsblk\nE0111 19:53:28.438289 1 disk_collector.go:145] Error calling lsblk\nE0111 19:54:28.438290 1 disk_collector.go:145] Error calling lsblk\nE0111 19:55:28.438309 1 disk_collector.go:145] Error calling lsblk\nI0111 19:56:28.234514 1 plugin.go:65] Start to run custom plugins\nI0111 19:56:28.235982 1 plugin.go:65] Start to run custom plugins\nE0111 19:56:28.438302 1 disk_collector.go:145] Error calling lsblk\nI0111 19:56:29.839713 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 19:56:29.840036 1 plugin.go:96] Finish running custom plugins\nI0111 19:56:29.839898 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 19:56:31.546393 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 19:56:31.546564 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:56:34.544164 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 19:56:34.544332 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 19:56:37.444240 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 19:56:37.444462 1 plugin.go:96] Finish running custom plugins\nI0111 19:56:37.444401 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 19:57:28.438286 1 disk_collector.go:145] Error calling lsblk\nE0111 19:58:28.438308 1 disk_collector.go:145] Error calling lsblk\nE0111 19:59:28.438298 1 disk_collector.go:145] Error calling lsblk\nE0111 20:00:28.438279 1 disk_collector.go:145] Error calling lsblk\nI0111 20:01:28.234521 1 plugin.go:65] Start to run custom plugins\nI0111 20:01:28.235977 1 plugin.go:65] Start to run custom plugins\nE0111 20:01:28.438290 1 disk_collector.go:145] Error calling lsblk\nI0111 20:01:30.038864 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 20:01:30.038924 1 plugin.go:96] Finish running custom plugins\nI0111 20:01:30.038953 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:01:31.342285 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 20:01:31.342542 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:01:33.941993 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 20:01:33.942216 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:01:35.544112 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 20:01:35.544170 1 plugin.go:96] Finish running custom plugins\nI0111 20:01:35.544223 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:02:28.438300 1 disk_collector.go:145] Error calling lsblk\nE0111 20:03:28.438293 1 disk_collector.go:145] Error calling lsblk\nE0111 20:04:28.438287 1 disk_collector.go:145] Error calling lsblk\nE0111 20:05:28.438307 1 disk_collector.go:145] Error calling lsblk\nI0111 20:06:28.234515 1 plugin.go:65] Start to run custom plugins\nI0111 20:06:28.235971 1 plugin.go:65] Start to run custom plugins\nE0111 20:06:28.438300 1 disk_collector.go:145] Error calling lsblk\nI0111 20:06:30.234520 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 20:06:30.234603 1 plugin.go:96] Finish running custom plugins\nI0111 20:06:30.234667 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:06:30.639733 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 20:06:30.640032 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:06:32.739003 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 20:06:32.739139 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:06:35.846017 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 20:06:35.846170 1 plugin.go:96] Finish running custom plugins\nI0111 20:06:35.846314 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:07:28.438284 1 disk_collector.go:145] Error calling lsblk\nE0111 20:08:28.438304 1 disk_collector.go:145] Error calling lsblk\nE0111 20:09:28.438276 1 disk_collector.go:145] Error calling lsblk\nE0111 20:10:28.438271 1 disk_collector.go:145] Error calling lsblk\nI0111 20:11:28.234492 1 plugin.go:65] Start to run custom plugins\nI0111 20:11:28.235963 1 plugin.go:65] Start to run custom plugins\nE0111 20:11:28.438287 1 disk_collector.go:145] Error calling lsblk\nI0111 20:11:30.037497 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 20:11:30.037931 1 plugin.go:96] Finish running custom plugins\nI0111 20:11:30.037838 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:11:31.240111 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 20:11:31.240293 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:11:33.265337 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 20:11:33.271902 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:11:35.842255 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 20:11:35.842323 1 plugin.go:96] Finish running custom plugins\nI0111 20:11:35.842483 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:12:28.438313 1 disk_collector.go:145] Error calling lsblk\nE0111 20:13:28.438303 1 disk_collector.go:145] Error calling lsblk\nE0111 20:14:28.438316 1 disk_collector.go:145] Error calling lsblk\nE0111 20:15:28.438280 1 disk_collector.go:145] Error calling lsblk\nI0111 20:16:28.234514 1 plugin.go:65] Start to run custom plugins\nI0111 20:16:28.235934 1 plugin.go:65] Start to run custom plugins\nE0111 20:16:28.438315 1 disk_collector.go:145] Error calling lsblk\nI0111 20:16:30.037875 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 20:16:30.037958 1 plugin.go:96] Finish running custom plugins\nI0111 20:16:30.037995 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:16:31.449256 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 20:16:31.449447 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:16:33.143927 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 20:16:33.144121 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:16:36.045356 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 20:16:36.045433 1 plugin.go:96] Finish running custom plugins\nI0111 20:16:36.045484 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:17:28.438285 1 disk_collector.go:145] Error calling lsblk\nE0111 20:18:28.438278 1 disk_collector.go:145] Error calling lsblk\nE0111 20:19:28.438305 1 disk_collector.go:145] Error calling lsblk\nE0111 20:20:28.438280 1 disk_collector.go:145] Error calling lsblk\nI0111 20:21:28.234500 1 plugin.go:65] Start to run custom plugins\nI0111 20:21:28.235947 1 plugin.go:65] Start to run custom plugins\nE0111 20:21:28.438307 1 disk_collector.go:145] Error calling lsblk\nI0111 20:21:30.039757 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 20:21:30.039815 1 plugin.go:96] Finish running custom plugins\nI0111 20:21:30.040009 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:21:31.026064 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 20:21:31.026272 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:21:33.739864 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] 
TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 20:21:33.739933 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:21:36.437564 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 20:21:36.437638 1 plugin.go:96] Finish running custom plugins\nI0111 20:21:36.437727 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:22:28.438280 1 disk_collector.go:145] Error calling lsblk\nE0111 20:23:28.438315 1 disk_collector.go:145] Error calling lsblk\nE0111 20:24:28.438296 1 disk_collector.go:145] Error calling lsblk\nE0111 20:25:28.438280 1 disk_collector.go:145] Error calling lsblk\nI0111 20:26:28.234521 1 plugin.go:65] Start to run custom plugins\nI0111 20:26:28.235966 1 plugin.go:65] Start to run custom plugins\nE0111 20:26:28.438293 1 disk_collector.go:145] Error calling lsblk\nI0111 20:26:30.045057 1 plugin.go:91] Add check result {Rule:0xc0002c8bd0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentUnregisterNetDevice Reason:UnregisterNetDevice Path:/home/kubernetes/bin/log-counter Args:[--journald-source=kernel --log-path=/var/log/journal --lookback=20m --count=3 --pattern=unregister_netdevice: waiting for \\w+ to become free. Usage count = \\d+] TimeoutString:0xc00037f930 Timeout:1m0s}\nI0111 20:26:30.045220 1 custom_plugin_monitor.go:134] New status generated: &{Source:kernel-monitor Events:[] Conditions:[{Type:FrequentUnregisterNetDevice Status:False Transition:2020-01-11 15:56:28.235753628 +0000 UTC m=+0.289249248 Reason:NoFrequentUnregisterNetDevice Message:node is functioning properly}]}\nI0111 20:26:30.045341 1 plugin.go:96] Finish running custom plugins\nI0111 20:26:31.236674 1 plugin.go:91] Add check result {Rule:0xc0002c8cb0 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentKubeletRestart Reason:FrequentKubeletRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --delay=5m --count=5 --pattern=Started Kubernetes kubelet.] 
TimeoutString:0xc00037fa10 Timeout:1m0s}\nI0111 20:26:31.236839 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:26:33.537961 1 plugin.go:91] Add check result {Rule:0xc0002c8d20 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentDockerRestart Reason:FrequentDockerRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting Docker Application Container Engine...] TimeoutString:0xc00037fa20 Timeout:1m0s}\nI0111 20:26:33.538122 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nI0111 20:26:36.042155 1 plugin.go:91] Add check result {Rule:0xc0002c8d90 ExitStatus:0 Message:} for rule &{Type:permanent Condition:FrequentContainerdRestart Reason:FrequentContainerdRestart Path:/home/kubernetes/bin/log-counter Args:[--journald-source=systemd --log-path=/var/log/journal --lookback=20m --count=5 --pattern=Starting containerd container runtime...] 
TimeoutString:0xc00037fa30 Timeout:1m0s}\nI0111 20:26:36.042211 1 plugin.go:96] Finish running custom plugins\nI0111 20:26:36.042253 1 custom_plugin_monitor.go:134] New status generated: &{Source:systemd-monitor Events:[] Conditions:[{Type:FrequentKubeletRestart Status:False Transition:2020-01-11 15:56:28.23664144 +0000 UTC m=+0.290136994 Reason:NoFrequentKubeletRestart Message:kubelet is functioning properly} {Type:FrequentDockerRestart Status:False Transition:2020-01-11 15:56:28.236641585 +0000 UTC m=+0.290137129 Reason:NoFrequentDockerRestart Message:docker is functioning properly} {Type:FrequentContainerdRestart Status:False Transition:2020-01-11 15:56:28.236641661 +0000 UTC m=+0.290137206 Reason:NoFrequentContainerdRestart Message:containerd is functioning properly}]}\nE0111 20:27:28.438286 1 disk_collector.go:145] Error calling lsblk\nE0111 20:28:28.438297 1 disk_collector.go:145] Error calling lsblk\n==== END logs for container node-problem-detector of pod kube-system/node-problem-detector-jx2p4 ====\n==== START logs for container vpn-shoot of pod kube-system/vpn-shoot-5d76665b65-6rkww ====\nSat Jan 11 15:56:40 2020 WARNING: file '/srv/secrets/vpn-shoot/tls.key' is group or others accessible\nSat Jan 11 15:56:40 2020 WARNING: file '/srv/secrets/tlsauth/vpn.tlsauth' is group or others accessible\nSat Jan 11 15:56:40 2020 OpenVPN 2.4.6 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Jul 8 2018\nSat Jan 11 15:56:40 2020 library versions: LibreSSL 2.7.5, LZO 2.10\nSat Jan 11 15:56:40 2020 TUN/TAP device tun0 opened\nSat Jan 11 15:56:40 2020 do_ifconfig, tt->did_ifconfig_ipv6_setup=0\nSat Jan 11 15:56:40 2020 /sbin/ip link set dev tun0 up mtu 1500\nSat Jan 11 15:56:40 2020 /sbin/ip addr add dev tun0 local 192.168.123.1 peer 192.168.123.2\nSat Jan 11 15:56:40 2020 Listening for incoming TCP connection on [AF_INET][undef]:1194\nSat Jan 11 15:56:40 2020 TCPv4_SERVER link local (bound): [AF_INET][undef]:1194\nSat Jan 11 15:56:40 2020 TCPv4_SERVER link remote: [AF_UNSPEC]\nSat Jan 11 15:56:40 2020 Initialization Sequence Completed\nSat Jan 11 15:57:24 2020 TCP connection established with [AF_INET]100.64.1.1:47046\nSat Jan 11 15:57:24 2020 100.64.1.1:47046 TCP connection established with [AF_INET]10.250.7.77:6106\nSat Jan 11 15:57:24 2020 100.64.1.1:47046 Connection reset, restarting [0]\nSat Jan 11 15:57:24 2020 10.250.7.77:6106 Connection reset, restarting [0]\nSat Jan 11 15:57:33 2020 TCP connection established with [AF_INET]10.250.7.77:6142\nSat Jan 11 15:57:33 2020 10.250.7.77:6142 TCP connection established with [AF_INET]100.64.1.1:47082\nSat Jan 11 15:57:33 2020 10.250.7.77:6142 Connection reset, restarting [0]\nSat Jan 11 15:57:33 2020 100.64.1.1:47082 Connection reset, restarting [0]\nSat Jan 11 15:57:43 2020 TCP connection established with [AF_INET]100.64.1.1:47086\nSat Jan 11 15:57:43 2020 100.64.1.1:47086 Connection reset, restarting [0]\nSat Jan 11 15:57:43 2020 TCP connection established with [AF_INET]10.250.7.77:6146\nSat Jan 11 15:57:43 2020 10.250.7.77:6146 Connection reset, restarting [0]\nSat Jan 11 15:57:53 2020 TCP connection established with [AF_INET]10.250.7.77:6200\nSat Jan 11 15:57:53 2020 10.250.7.77:6200 TCP connection established with [AF_INET]100.64.1.1:47140\nSat Jan 11 15:57:53 2020 10.250.7.77:6200 Connection reset, restarting [0]\nSat Jan 11 15:57:53 2020 100.64.1.1:47140 Connection reset, restarting [0]\nSat Jan 11 15:58:03 2020 TCP connection established with [AF_INET]10.250.7.77:6212\nSat Jan 11 15:58:03 2020 
10.250.7.77:6212 TCP connection established with [AF_INET]100.64.1.1:47152\nSat Jan 11 15:58:03 2020 10.250.7.77:6212 Connection reset, restarting [0]\nSat Jan 11 15:58:03 2020 100.64.1.1:47152 Connection reset, restarting [0]\nSat Jan 11 15:58:13 2020 TCP connection established with [AF_INET]10.250.7.77:6232\nSat Jan 11 15:58:13 2020 10.250.7.77:6232 TCP connection established with [AF_INET]100.64.1.1:47172\nSat Jan 11 15:58:13 2020 10.250.7.77:6232 Connection reset, restarting [0]\nSat Jan 11 15:58:13 2020 100.64.1.1:47172 Connection reset, restarting [0]\nSat Jan 11 15:58:23 2020 TCP connection established with [AF_INET]10.250.7.77:6240\nSat Jan 11 15:58:23 2020 10.250.7.77:6240 Connection reset, restarting [0]\nSat Jan 11 15:58:23 2020 TCP connection established with [AF_INET]100.64.1.1:47180\nSat Jan 11 15:58:23 2020 100.64.1.1:47180 Connection reset, restarting [0]\nSat Jan 11 15:58:25 2020 TCP connection established with [AF_INET]10.250.7.77:6244\nSat Jan 11 15:58:25 2020 10.250.7.77:6244 Connection reset, restarting [0]\nSat Jan 11 15:58:25 2020 TCP connection established with [AF_INET]100.64.1.1:47188\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_VER=2.4.6\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_PLAT=linux\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_PROTO=2\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_NCP=2\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_LZ4=1\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_LZ4v2=1\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_LZO=1\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_COMP_STUB=1\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_COMP_STUBv2=1\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 peer info: IV_TCPNL=1\nSat Jan 11 15:58:26 2020 100.64.1.1:47188 [vpn-seed] Peer Connection Initiated with [AF_INET]100.64.1.1:47188\nSat Jan 11 15:58:26 2020 vpn-seed/100.64.1.1:47188 MULTI_sva: pool returned IPv4=192.168.123.6, IPv6=(Not enabled)\nSat Jan 11 15:58:33 2020 TCP connection established with [AF_INET]10.250.7.77:6254\nSat Jan 11 15:58:33 2020 10.250.7.77:6254 TCP connection established with [AF_INET]100.64.1.1:47194\nSat Jan 11 15:58:33 2020 10.250.7.77:6254 Connection reset, restarting [0]\nSat Jan 11 15:58:33 2020 100.64.1.1:47194 Connection reset, restarting [0]\nSat Jan 11 15:58:43 2020 TCP connection established with [AF_INET]10.250.7.77:6258\nSat Jan 11 15:58:43 2020 10.250.7.77:6258 TCP connection established with [AF_INET]100.64.1.1:47198\nSat Jan 11 15:58:43 2020 10.250.7.77:6258 Connection reset, restarting [0]\nSat Jan 11 15:58:43 2020 100.64.1.1:47198 Connection reset, restarting [0]\nSat Jan 11 15:58:53 2020 TCP connection established with [AF_INET]10.250.7.77:6270\nSat Jan 11 15:58:53 2020 10.250.7.77:6270 TCP connection established with [AF_INET]100.64.1.1:47210\nSat Jan 11 15:58:53 2020 10.250.7.77:6270 Connection reset, restarting [0]\nSat Jan 11 15:58:53 2020 100.64.1.1:47210 Connection reset, restarting [0]\nSat Jan 11 15:59:03 2020 TCP connection established with [AF_INET]10.250.7.77:6284\nSat Jan 11 15:59:03 2020 10.250.7.77:6284 TCP connection established with [AF_INET]100.64.1.1:47224\nSat Jan 11 15:59:03 2020 10.250.7.77:6284 Connection reset, restarting [0]\nSat Jan 11 15:59:03 2020 100.64.1.1:47224 Connection reset, restarting [0]\nSat Jan 11 15:59:13 2020 TCP connection established with [AF_INET]10.250.7.77:6292\nSat Jan 11 15:59:13 2020 10.250.7.77:6292 TCP connection established with [AF_INET]100.64.1.1:47232\nSat 
Jan 11 15:59:13 2020 10.250.7.77:6292 Connection reset, restarting [0]\nSat Jan 11 15:59:13 2020 100.64.1.1:47232 Connection reset, restarting [0]\nSat Jan 11 15:59:23 2020 TCP connection established with [AF_INET]10.250.7.77:6296\nSat Jan 11 15:59:23 2020 10.250.7.77:6296 TCP connection established with [AF_INET]100.64.1.1:47236\nSat Jan 11 15:59:23 2020 10.250.7.77:6296 Connection reset, restarting [0]\nSat Jan 11 15:59:23 2020 100.64.1.1:47236 Connection reset, restarting [0]\nSat Jan 11 15:59:33 2020 TCP connection established with [AF_INET]10.250.7.77:6306\nSat Jan 11 15:59:33 2020 10.250.7.77:6306 TCP connection established with [AF_INET]100.64.1.1:47246\nSat Jan 11 15:59:33 2020 10.250.7.77:6306 Connection reset, restarting [0]\nSat Jan 11 15:59:33 2020 100.64.1.1:47246 Connection reset, restarting [0]\nSat Jan 11 15:59:43 2020 TCP connection established with [AF_INET]10.250.7.77:6314\nSat Jan 11 15:59:43 2020 10.250.7.77:6314 TCP connection established with [AF_INET]100.64.1.1:47254\nSat Jan 11 15:59:43 2020 10.250.7.77:6314 Connection reset, restarting [0]\nSat Jan 11 15:59:43 2020 100.64.1.1:47254 Connection reset, restarting [0]\nSat Jan 11 15:59:46 2020 TCP connection established with [AF_INET]10.250.7.77:6318\nSat Jan 11 15:59:46 2020 10.250.7.77:6318 Connection reset, restarting [0]\nSat Jan 11 15:59:46 2020 TCP connection established with [AF_INET]100.64.1.1:47264\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_VER=2.4.6\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_PLAT=linux\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_PROTO=2\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_NCP=2\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_LZ4=1\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_LZ4v2=1\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_LZO=1\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_COMP_STUB=1\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_COMP_STUBv2=1\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 peer info: IV_TCPNL=1\nSat Jan 11 15:59:47 2020 100.64.1.1:47264 [vpn-seed] Peer Connection Initiated with [AF_INET]100.64.1.1:47264\nSat Jan 11 15:59:47 2020 vpn-seed/100.64.1.1:47264 MULTI_sva: pool returned IPv4=192.168.123.10, IPv6=(Not enabled)\nSat Jan 11 15:59:53 2020 TCP connection established with [AF_INET]10.250.7.77:6330\nSat Jan 11 15:59:53 2020 10.250.7.77:6330 TCP connection established with [AF_INET]100.64.1.1:47270\nSat Jan 11 15:59:53 2020 10.250.7.77:6330 Connection reset, restarting [0]\nSat Jan 11 15:59:53 2020 100.64.1.1:47270 Connection reset, restarting [0]\nSat Jan 11 16:00:03 2020 TCP connection established with [AF_INET]10.250.7.77:6346\nSat Jan 11 16:00:03 2020 10.250.7.77:6346 TCP connection established with [AF_INET]100.64.1.1:47286\nSat Jan 11 16:00:03 2020 10.250.7.77:6346 Connection reset, restarting [0]\nSat Jan 11 16:00:03 2020 100.64.1.1:47286 Connection reset, restarting [0]\nSat Jan 11 16:00:13 2020 TCP connection established with [AF_INET]10.250.7.77:6354\nSat Jan 11 16:00:13 2020 10.250.7.77:6354 TCP connection established with [AF_INET]100.64.1.1:47294\nSat Jan 11 16:00:13 2020 10.250.7.77:6354 Connection reset, restarting [0]\nSat Jan 11 16:00:13 2020 100.64.1.1:47294 Connection reset, restarting [0]\nSat Jan 11 16:00:20 2020 TCP connection established with [AF_INET]10.250.7.77:22438\nSat Jan 11 16:00:20 2020 10.250.7.77:22438 TCP connection established with [AF_INET]100.64.1.1:51670\nSat Jan 11 16:00:20 2020 10.250.7.77:22438 Connection reset, 
restarting [0]\nSat Jan 11 16:00:20 2020 100.64.1.1:51670 Connection reset, restarting [0]\nSat Jan 11 16:00:23 2020 TCP connection established with [AF_INET]10.250.7.77:6368\nSat Jan 11 16:00:23 2020 10.250.7.77:6368 TCP connection established with [AF_INET]100.64.1.1:47308\nSat Jan 11 16:00:23 2020 10.250.7.77:6368 Connection reset, restarting [0]\nSat Jan 11 16:00:23 2020 100.64.1.1:47308 Connection reset, restarting [0]\nSat Jan 11 16:00:27 2020 TCP connection established with [AF_INET]10.250.7.77:6376\nSat Jan 11 16:00:27 2020 10.250.7.77:6376 Connection reset, restarting [0]\nSat Jan 11 16:00:28 2020 TCP connection established with [AF_INET]100.64.1.1:47320\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_VER=2.4.6\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_PLAT=linux\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_PROTO=2\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_NCP=2\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_LZ4=1\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_LZ4v2=1\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_LZO=1\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_COMP_STUB=1\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_COMP_STUBv2=1\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 peer info: IV_TCPNL=1\nSat Jan 11 16:00:29 2020 100.64.1.1:47320 [vpn-seed] Peer Connection Initiated with [AF_INET]100.64.1.1:47320\nSat Jan 11 16:00:29 2020 vpn-seed/100.64.1.1:47320 MULTI_sva: pool returned IPv4=192.168.123.14, IPv6=(Not enabled)\nSat Jan 11 16:00:29 2020 TCP connection established with [AF_INET]10.250.7.77:22474\nSat Jan 11 16:00:29 2020 10.250.7.77:22474 TCP connection established with [AF_INET]100.64.1.1:51706\nSat Jan 11 16:00:29 2020 10.250.7.77:22474 Connection reset, restarting [0]\nSat Jan 11 16:00:29 2020 100.64.1.1:51706 Connection reset, restarting [0]\nSat Jan 11 16:00:33 2020 TCP connection established with [AF_INET]10.250.7.77:6382\nSat Jan 11 16:00:33 2020 10.250.7.77:6382 TCP connection established with [AF_INET]100.64.1.1:47322\nSat Jan 11 16:00:33 2020 10.250.7.77:6382 Connection reset, restarting [0]\nSat Jan 11 16:00:33 2020 100.64.1.1:47322 Connection reset, restarting [0]\nSat Jan 11 16:00:39 2020 TCP connection established with [AF_INET]10.250.7.77:22478\nSat Jan 11 16:00:39 2020 10.250.7.77:22478 TCP connection established with [AF_INET]100.64.1.1:51710\nSat Jan 11 16:00:39 2020 10.250.7.77:22478 Connection reset, restarting [0]\nSat Jan 11 16:00:39 2020 100.64.1.1:51710 Connection reset, restarting [0]\nSat Jan 11 16:00:43 2020 TCP connection established with [AF_INET]10.250.7.77:6388\nSat Jan 11 16:00:43 2020 10.250.7.77:6388 TCP connection established with [AF_INET]100.64.1.1:47328\nSat Jan 11 16:00:43 2020 10.250.7.77:6388 Connection reset, restarting [0]\nSat Jan 11 16:00:43 2020 100.64.1.1:47328 Connection reset, restarting [0]\nSat Jan 11 16:00:49 2020 TCP connection established with [AF_INET]100.64.1.1:51762\nSat Jan 11 16:00:49 2020 100.64.1.1:51762 Connection reset, restarting [0]\nSat Jan 11 16:00:49 2020 TCP connection established with [AF_INET]10.250.7.77:22530\nSat Jan 11 16:00:49 2020 10.250.7.77:22530 Connection reset, restarting [0]\nSat Jan 11 16:00:50 2020 TCP connection established with [AF_INET]10.250.7.77:6400\nSat Jan 11 16:00:50 2020 10.250.7.77:6400 Connection reset, restarting [0]\nSat Jan 11 16:00:50 2020 TCP connection established with [AF_INET]100.64.1.1:51770\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 peer info: IV_VER=2.4.6\nSat Jan 
11 16:00:51 2020 100.64.1.1:51770 peer info: IV_PLAT=linux\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 peer info: IV_PROTO=2\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 peer info: IV_NCP=2\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 peer info: IV_LZ4=1\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 peer info: IV_LZ4v2=1\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 peer info: IV_LZO=1\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 peer info: IV_COMP_STUB=1\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 peer info: IV_COMP_STUBv2=1\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 peer info: IV_TCPNL=1\nSat Jan 11 16:00:51 2020 100.64.1.1:51770 [vpn-seed] Peer Connection Initiated with [AF_INET]100.64.1.1:51770\nSat Jan 11 16:00:51 2020 vpn-seed/100.64.1.1:51770 MULTI_sva: pool returned IPv4=192.168.123.18, IPv6=(Not enabled)\nSat Jan 11 16:00:53 2020 TCP connection established with [AF_INET]10.250.7.77:6404\nSat Jan 11 16:00:53 2020 10.250.7.77:6404 TCP connection established with [AF_INET]100.64.1.1:47344\nSat Jan 11 16:00:53 2020 10.250.7.77:6404 Connection reset, restarting [0]\nSat Jan 11 16:00:53 2020 100.64.1.1:47344 Connection reset, restarting [0]\nSat Jan 11 16:00:59 2020 TCP connection established with [AF_INET]10.250.7.77:22544\nSat Jan 11 16:00:59 2020 10.250.7.77:22544 TCP connection established with [AF_INET]100.64.1.1:51776\nSat Jan 11 16:00:59 2020 10.250.7.77:22544 Connection reset, restarting [0]\nSat Jan 11 16:00:59 2020 100.64.1.1:51776 Connection reset, restarting [0]\nSat Jan 11 16:01:03 2020 TCP connection established with [AF_INET]10.250.7.77:6418\nSat Jan 11 16:01:03 2020 10.250.7.77:6418 TCP connection established with [AF_INET]100.64.1.1:47358\nSat Jan 11 16:01:03 2020 10.250.7.77:6418 Connection reset, restarting [0]\nSat Jan 11 16:01:03 2020 100.64.1.1:47358 Connection reset, restarting [0]\nSat Jan 11 16:01:09 2020 TCP connection established with [AF_INET]10.250.7.77:22558\nSat Jan 11 16:01:09 2020 10.250.7.77:22558 TCP connection established with [AF_INET]100.64.1.1:51790\nSat Jan 11 16:01:09 2020 10.250.7.77:22558 Connection reset, restarting [0]\nSat Jan 11 16:01:09 2020 100.64.1.1:51790 Connection reset, restarting [0]\nSat Jan 11 16:01:13 2020 TCP connection established with [AF_INET]10.250.7.77:6426\nSat Jan 11 16:01:13 2020 10.250.7.77:6426 TCP connection established with [AF_INET]100.64.1.1:47366\nSat Jan 11 16:01:13 2020 10.250.7.77:6426 Connection reset, restarting [0]\nSat Jan 11 16:01:13 2020 100.64.1.1:47366 Connection reset, restarting [0]\nSat Jan 11 16:01:19 2020 TCP connection established with [AF_INET]10.250.7.77:22570\nSat Jan 11 16:01:19 2020 10.250.7.77:22570 TCP connection established with [AF_INET]100.64.1.1:51802\nSat Jan 11 16:01:19 2020 10.250.7.77:22570 Connection reset, restarting [0]\nSat Jan 11 16:01:19 2020 100.64.1.1:51802 Connection reset, restarting [0]\nSat Jan 11 16:01:20 2020 TCP connection established with [AF_INET]100.64.1.1:47370\nSat Jan 11 16:01:20 2020 100.64.1.1:47370 Connection reset, restarting [0]\nSat Jan 11 16:01:20 2020 TCP connection established with [AF_INET]10.250.7.77:22572\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_VER=2.4.6\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_PLAT=linux\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_PROTO=2\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_NCP=2\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_LZ4=1\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_LZ4v2=1\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_LZO=1\nSat 
Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_COMP_STUB=1\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_COMP_STUBv2=1\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 peer info: IV_TCPNL=1\nSat Jan 11 16:01:21 2020 10.250.7.77:22572 [vpn-seed] Peer Connection Initiated with [AF_INET]10.250.7.77:22572\nSat Jan 11 16:01:21 2020 vpn-seed/10.250.7.77:22572 MULTI_sva: pool returned IPv4=192.168.123.22, IPv6=(Not enabled)\nSat Jan 11 16:01:23 2020 TCP connection established with [AF_INET]10.250.7.77:6434\nSat Jan 11 16:01:23 2020 10.250.7.77:6434 TCP connection established with [AF_INET]100.64.1.1:47374\nSat Jan 11 16:01:23 2020 10.250.7.77:6434 Connection reset, restarting [0]\nSat Jan 11 16:01:23 2020 100.64.1.1:47374 Connection reset, restarting [0]\nSat Jan 11 16:01:29 2020 TCP connection established with [AF_INET]10.250.7.77:22580\nSat Jan 11 16:01:29 2020 10.250.7.77:22580 TCP connection established with [AF_INET]100.64.1.1:51812\nSat Jan 11 16:01:29 2020 10.250.7.77:22580 Connection reset, restarting [0]\nSat Jan 11 16:01:29 2020 100.64.1.1:51812 Connection reset, restarting [0]\nSat Jan 11 16:01:33 2020 TCP connection established with [AF_INET]10.250.7.77:6444\nSat Jan 11 16:01:33 2020 10.250.7.77:6444 TCP connection established with [AF_INET]100.64.1.1:47384\nSat Jan 11 16:01:33 2020 10.250.7.77:6444 Connection reset, restarting [0]\nSat Jan 11 16:01:33 2020 100.64.1.1:47384 Connection reset, restarting [0]\nSat Jan 11 16:01:39 2020 TCP connection established with [AF_INET]10.250.7.77:22584\nSat Jan 11 16:01:39 2020 10.250.7.77:22584 TCP connection established with [AF_INET]100.64.1.1:51816\nSat Jan 11 16:01:39 2020 10.250.7.77:22584 Connection reset, restarting [0]\nSat Jan 11 16:01:39 2020 100.64.1.1:51816 Connection reset, restarting [0]\nSat Jan 11 16:01:43 2020 TCP connection established with [AF_INET]10.250.7.77:6450\nSat Jan 11 16:01:43 2020 10.250.7.77:6450 TCP connection established with [AF_INET]100.64.1.1:47390\nSat Jan 11 16:01:43 2020 10.250.7.77:6450 Connection reset, restarting [0]\nSat Jan 11 16:01:43 2020 100.64.1.1:47390 Connection reset, restarting [0]\nSat Jan 11 16:01:49 2020 TCP connection established with [AF_INET]10.250.7.77:22596\nSat Jan 11 16:01:49 2020 10.250.7.77:22596 TCP connection established with [AF_INET]100.64.1.1:51828\nSat Jan 11 16:01:49 2020 10.250.7.77:22596 Connection reset, restarting [0]\nSat Jan 11 16:01:49 2020 100.64.1.1:51828 Connection reset, restarting [0]\nSat Jan 11 16:01:53 2020 TCP connection established with [AF_INET]10.250.7.77:6460\nSat Jan 11 16:01:53 2020 10.250.7.77:6460 TCP connection established with [AF_INET]100.64.1.1:47400\nSat Jan 11 16:01:53 2020 10.250.7.77:6460 Connection reset, restarting [0]\nSat Jan 11 16:01:53 2020 100.64.1.1:47400 Connection reset, restarting [0]\nSat Jan 11 16:01:59 2020 TCP connection established with [AF_INET]10.250.7.77:22604\nSat Jan 11 16:01:59 2020 10.250.7.77:22604 TCP connection established with [AF_INET]100.64.1.1:51836\nSat Jan 11 16:01:59 2020 10.250.7.77:22604 Connection reset, restarting [0]\nSat Jan 11 16:01:59 2020 100.64.1.1:51836 Connection reset, restarting [0]\nSat Jan 11 16:02:03 2020 TCP connection established with [AF_INET]10.250.7.77:6474\nSat Jan 11 16:02:03 2020 10.250.7.77:6474 TCP connection established with [AF_INET]100.64.1.1:47414\nSat Jan 11 16:02:03 2020 10.250.7.77:6474 Connection reset, restarting [0]\nSat Jan 11 16:02:03 2020 100.64.1.1:47414 Connection reset, restarting [0]\nSat Jan 11 16:02:09 2020 TCP connection established with 
[AF_INET]10.250.7.77:22618\nSat Jan 11 16:02:09 2020 10.250.7.77:22618 TCP connection established with [AF_INET]100.64.1.1:51850\nSat Jan 11 16:02:09 2020 10.250.7.77:22618 Connection reset, restarting [0]\nSat Jan 11 16:02:09 2020 100.64.1.1:51850 Connection reset, restarting [0]\nSat Jan 11 16:02:13 2020 TCP connection established with [AF_INET]10.250.7.77:6486\nSat Jan 11 16:02:13 2020 10.250.7.77:6486 TCP connection established with [AF_INET]100.64.1.1:47426\nSat Jan 11 16:02:13 2020 10.250.7.77:6486 Connection reset, restarting [0]\nSat Jan 11 16:02:13 2020 100.64.1.1:47426 Connection reset, restarting [0]\nSat Jan 11 16:02:19 2020 TCP connection established with [AF_INET]10.250.7.77:22626\nSat Jan 11 16:02:19 2020 10.250.7.77:22626 TCP connection established with [AF_INET]100.64.1.1:51858\nSat Jan 11 16:02:19 2020 10.250.7.77:22626 Connection reset, restarting [0]\nSat Jan 11 16:02:19 2020 100.64.1.1:51858 Connection reset, restarting [0]\nSat Jan 11 16:02:23 2020 TCP connection established with [AF_INET]10.250.7.77:6492\nSat Jan 11 16:02:23 2020 10.250.7.77:6492 TCP connection established with [AF_INET]100.64.1.1:47432\nSat Jan 11 16:02:23 2020 10.250.7.77:6492 Connection reset, restarting [0]\nSat Jan 11 16:02:23 2020 100.64.1.1:47432 Connection reset, restarting [0]\nSat Jan 11 16:02:29 2020 TCP connection established with [AF_INET]10.250.7.77:22638\nSat Jan 11 16:02:29 2020 10.250.7.77:22638 TCP connection established with [AF_INET]100.64.1.1:51870\nSat Jan 11 16:02:29 2020 10.250.7.77:22638 Connection reset, restarting [0]\nSat Jan 11 16:02:29 2020 100.64.1.1:51870 Connection reset, restarting [0]\nSat Jan 11 16:02:32 2020 vpn-seed/100.64.1.1:47188 Connection reset, restarting [0]\nSat Jan 11 16:02:33 2020 TCP connection established with [AF_INET]10.250.7.77:6504\nSat Jan 11 16:02:33 2020 10.250.7.77:6504 TCP connection established with [AF_INET]100.64.1.1:47444\nSat Jan 11 16:02:33 2020 10.250.7.77:6504 Connection reset, restarting [0]\nSat Jan 11 16:02:33 2020 100.64.1.1:47444 Connection reset, restarting [0]\nSat Jan 11 16:02:39 2020 TCP connection established with [AF_INET]10.250.7.77:22644\nSat Jan 11 16:02:39 2020 10.250.7.77:22644 TCP connection established with [AF_INET]100.64.1.1:51876\nSat Jan 11 16:02:39 2020 10.250.7.77:22644 Connection reset, restarting [0]\nSat Jan 11 16:02:39 2020 100.64.1.1:51876 Connection reset, restarting [0]\nSat Jan 11 16:02:43 2020 TCP connection established with [AF_INET]10.250.7.77:6508\nSat Jan 11 16:02:43 2020 10.250.7.77:6508 TCP connection established with [AF_INET]100.64.1.1:47448\nSat Jan 11 16:02:43 2020 10.250.7.77:6508 Connection reset, restarting [0]\nSat Jan 11 16:02:43 2020 100.64.1.1:47448 Connection reset, restarting [0]\nSat Jan 11 16:02:49 2020 TCP connection established with [AF_INET]10.250.7.77:22654\nSat Jan 11 16:02:49 2020 10.250.7.77:22654 TCP connection established with [AF_INET]100.64.1.1:51886\nSat Jan 11 16:02:49 2020 10.250.7.77:22654 Connection reset, restarting [0]\nSat Jan 11 16:02:49 2020 100.64.1.1:51886 Connection reset, restarting [0]\nSat Jan 11 16:02:53 2020 TCP connection established with [AF_INET]10.250.7.77:6518\nSat Jan 11 16:02:53 2020 10.250.7.77:6518 TCP connection established with [AF_INET]100.64.1.1:47458\nSat Jan 11 16:02:53 2020 10.250.7.77:6518 Connection reset, restarting [0]\nSat Jan 11 16:02:53 2020 100.64.1.1:47458 Connection reset, restarting [0]\nSat Jan 11 16:02:59 2020 TCP connection established with [AF_INET]10.250.7.77:22662\nSat Jan 11 16:02:59 2020 10.250.7.77:22662 TCP 
connection established with [AF_INET]100.64.1.1:51894\nSat Jan 11 16:02:59 2020 10.250.7.77:22662 Connection reset, restarting [0]\nSat Jan 11 16:02:59 2020 100.64.1.1:51894 Connection reset, restarting [0]\nSat Jan 11 16:03:03 2020 TCP connection established with [AF_INET]10.250.7.77:6534\nSat Jan 11 16:03:03 2020 10.250.7.77:6534 TCP connection established with [AF_INET]100.64.1.1:47474\nSat Jan 11 16:03:03 2020 10.250.7.77:6534 Connection reset, restarting [0]\nSat Jan 11 16:03:03 2020 100.64.1.1:47474 Connection reset, restarting [0]\nSat Jan 11 16:03:09 2020 TCP connection established with [AF_INET]10.250.7.77:22676\nSat Jan 11 16:03:09 2020 10.250.7.77:22676 TCP connection established with [AF_INET]100.64.1.1:51908\nSat Jan 11 16:03:09 2020 10.250.7.77:22676 Connection reset, restarting [0]\nSat Jan 11 16:03:09 2020 100.64.1.1:51908 Connection reset, restarting [0]\nSat Jan 11 16:03:13 2020 TCP connection established with [AF_INET]10.250.7.77:6542\nSat Jan 11 16:03:13 2020 10.250.7.77:6542 TCP connection established with [AF_INET]100.64.1.1:47482\nSat Jan 11 16:03:13 2020 10.250.7.77:6542 Connection reset, restarting [0]\nSat Jan 11 16:03:13 2020 100.64.1.1:47482 Connection reset, restarting [0]\nSat Jan 11 16:03:19 2020 TCP connection established with [AF_INET]10.250.7.77:22684\nSat Jan 11 16:03:19 2020 10.250.7.77:22684 TCP connection established with [AF_INET]100.64.1.1:51916\nSat Jan 11 16:03:19 2020 10.250.7.77:22684 Connection reset, restarting [0]\nSat Jan 11 16:03:19 2020 100.64.1.1:51916 Connection reset, restarting [0]\nSat Jan 11 16:03:23 2020 TCP connection established with [AF_INET]10.250.7.77:6552\nSat Jan 11 16:03:23 2020 10.250.7.77:6552 TCP connection established with [AF_INET]100.64.1.1:47492\nSat Jan 11 16:03:23 2020 10.250.7.77:6552 Connection reset, restarting [0]\nSat Jan 11 16:03:23 2020 100.64.1.1:47492 Connection reset, restarting [0]\nSat Jan 11 16:03:29 2020 TCP connection established with [AF_INET]10.250.7.77:22692\nSat Jan 11 16:03:29 2020 10.250.7.77:22692 TCP connection established with [AF_INET]100.64.1.1:51924\nSat Jan 11 16:03:29 2020 10.250.7.77:22692 Connection reset, restarting [0]\nSat Jan 11 16:03:29 2020 100.64.1.1:51924 Connection reset, restarting [0]\nSat Jan 11 16:03:33 2020 TCP connection established with [AF_INET]10.250.7.77:6566\nSat Jan 11 16:03:33 2020 10.250.7.77:6566 TCP connection established with [AF_INET]100.64.1.1:47506\nSat Jan 11 16:03:33 2020 10.250.7.77:6566 Connection reset, restarting [0]\nSat Jan 11 16:03:33 2020 100.64.1.1:47506 Connection reset, restarting [0]\nSat Jan 11 16:03:39 2020 TCP connection established with [AF_INET]10.250.7.77:22698\nSat Jan 11 16:03:39 2020 10.250.7.77:22698 TCP connection established with [AF_INET]100.64.1.1:51930\nSat Jan 11 16:03:39 2020 10.250.7.77:22698 Connection reset, restarting [0]\nSat Jan 11 16:03:39 2020 100.64.1.1:51930 Connection reset, restarting [0]\nSat Jan 11 16:03:43 2020 TCP connection established with [AF_INET]10.250.7.77:6570\nSat Jan 11 16:03:43 2020 10.250.7.77:6570 TCP connection established with [AF_INET]100.64.1.1:47510\nSat Jan 11 16:03:43 2020 10.250.7.77:6570 Connection reset, restarting [0]\nSat Jan 11 16:03:43 2020 100.64.1.1:47510 Connection reset, restarting [0]\nSat Jan 11 16:03:49 2020 TCP connection established with [AF_INET]10.250.7.77:22712\nSat Jan 11 16:03:49 2020 10.250.7.77:22712 TCP connection established with [AF_INET]100.64.1.1:51944\nSat Jan 11 16:03:49 2020 10.250.7.77:22712 Connection reset, restarting [0]\nSat Jan 11 16:03:49 2020 
100.64.1.1:51944 Connection reset, restarting [0]\nSat Jan 11 16:03:53 2020 TCP connection established with [AF_INET]10.250.7.77:6580\nSat Jan 11 16:03:53 2020 10.250.7.77:6580 TCP connection established with [AF_INET]100.64.1.1:47520\nSat Jan 11 16:03:53 2020 10.250.7.77:6580 Connection reset, restarting [0]\nSat Jan 11 16:03:53 2020 100.64.1.1:47520 Connection reset, restarting [0]\nSat Jan 11 16:03:59 2020 TCP connection established with [AF_INET]10.250.7.77:22720\nSat Jan 11 16:03:59 2020 10.250.7.77:22720 TCP connection established with [AF_INET]100.64.1.1:51952\nSat Jan 11 16:03:59 2020 10.250.7.77:22720 Connection reset, restarting [0]\nSat Jan 11 16:03:59 2020 100.64.1.1:51952 Connection reset, restarting [0]\nSat Jan 11 16:04:03 2020 TCP connection established with [AF_INET]10.250.7.77:6596\nSat Jan 11 16:04:03 2020 10.250.7.77:6596 TCP connection established with [AF_INET]100.64.1.1:47536\nSat Jan 11 16:04:03 2020 10.250.7.77:6596 Connection reset, restarting [0]\nSat Jan 11 16:04:03 2020 100.64.1.1:47536 Connection reset, restarting [0]\nSat Jan 11 16:04:09 2020 TCP connection established with [AF_INET]10.250.7.77:22734\nSat Jan 11 16:04:09 2020 10.250.7.77:22734 TCP connection established with [AF_INET]100.64.1.1:51966\nSat Jan 11 16:04:09 2020 10.250.7.77:22734 Connection reset, restarting [0]\nSat Jan 11 16:04:09 2020 100.64.1.1:51966 Connection reset, restarting [0]\nSat Jan 11 16:04:13 2020 TCP connection established with [AF_INET]10.250.7.77:6604\nSat Jan 11 16:04:13 2020 10.250.7.77:6604 TCP connection established with [AF_INET]100.64.1.1:47544\nSat Jan 11 16:04:13 2020 10.250.7.77:6604 Connection reset, restarting [0]\nSat Jan 11 16:04:13 2020 100.64.1.1:47544 Connection reset, restarting [0]\nSat Jan 11 16:04:19 2020 TCP connection established with [AF_INET]10.250.7.77:22742\nSat Jan 11 16:04:19 2020 10.250.7.77:22742 TCP connection established with [AF_INET]100.64.1.1:51974\nSat Jan 11 16:04:19 2020 10.250.7.77:22742 Connection reset, restarting [0]\nSat Jan 11 16:04:19 2020 100.64.1.1:51974 Connection reset, restarting [0]\nSat Jan 11 16:04:23 2020 TCP connection established with [AF_INET]10.250.7.77:6610\nSat Jan 11 16:04:23 2020 10.250.7.77:6610 TCP connection established with [AF_INET]100.64.1.1:47550\nSat Jan 11 16:04:23 2020 10.250.7.77:6610 Connection reset, restarting [0]\nSat Jan 11 16:04:23 2020 100.64.1.1:47550 Connection reset, restarting [0]\nSat Jan 11 16:04:29 2020 TCP connection established with [AF_INET]10.250.7.77:22752\nSat Jan 11 16:04:29 2020 10.250.7.77:22752 TCP connection established with [AF_INET]100.64.1.1:51984\nSat Jan 11 16:04:29 2020 10.250.7.77:22752 Connection reset, restarting [0]\nSat Jan 11 16:04:29 2020 100.64.1.1:51984 Connection reset, restarting [0]\nSat Jan 11 16:04:33 2020 TCP connection established with [AF_INET]10.250.7.77:6622\nSat Jan 11 16:04:33 2020 10.250.7.77:6622 TCP connection established with [AF_INET]100.64.1.1:47562\nSat Jan 11 16:04:33 2020 10.250.7.77:6622 Connection reset, restarting [0]\nSat Jan 11 16:04:33 2020 100.64.1.1:47562 Connection reset, restarting [0]\nSat Jan 11 16:04:39 2020 TCP connection established with [AF_INET]10.250.7.77:22756\nSat Jan 11 16:04:39 2020 10.250.7.77:22756 TCP connection established with [AF_INET]100.64.1.1:51988\nSat Jan 11 16:04:39 2020 10.250.7.77:22756 Connection reset, restarting [0]\nSat Jan 11 16:04:39 2020 100.64.1.1:51988 Connection reset, restarting [0]\nSat Jan 11 16:04:43 2020 TCP connection established with [AF_INET]10.250.7.77:6632\nSat Jan 11 16:04:43 2020 
10.250.7.77:6632 TCP connection established with [AF_INET]100.64.1.1:47572\nSat Jan 11 16:04:43 2020 10.250.7.77:6632 Connection reset, restarting [0]\nSat Jan 11 16:04:43 2020 100.64.1.1:47572 Connection reset, restarting [0]\nSat Jan 11 16:04:49 2020 TCP connection established with [AF_INET]10.250.7.77:22766\nSat Jan 11 16:04:49 2020 10.250.7.77:22766 TCP connection established with [AF_INET]100.64.1.1:51998\nSat Jan 11 16:04:49 2020 10.250.7.77:22766 Connection reset, restarting [0]\nSat Jan 11 16:04:49 2020 100.64.1.1:51998 Connection reset, restarting [0]\nSat Jan 11 16:04:53 2020 TCP connection established with [AF_INET]10.250.7.77:6642\nSat Jan 11 16:04:53 2020 10.250.7.77:6642 TCP connection established with [AF_INET]100.64.1.1:47582\nSat Jan 11 16:04:53 2020 10.250.7.77:6642 Connection reset, restarting [0]\nSat Jan 11 16:04:53 2020 100.64.1.1:47582 Connection reset, restarting [0]\nSat Jan 11 16:04:59 2020 TCP connection established with [AF_INET]10.250.7.77:22778\nSat Jan 11 16:04:59 2020 10.250.7.77:22778 TCP connection established with [AF_INET]100.64.1.1:52010\nSat Jan 11 16:04:59 2020 10.250.7.77:22778 Connection reset, restarting [0]\nSat Jan 11 16:04:59 2020 100.64.1.1:52010 Connection reset, restarting [0]\nSat Jan 11 16:05:03 2020 TCP connection established with [AF_INET]10.250.7.77:6656\nSat Jan 11 16:05:03 2020 10.250.7.77:6656 TCP connection established with [AF_INET]100.64.1.1:47596\nSat Jan 11 16:05:03 2020 10.250.7.77:6656 Connection reset, restarting [0]\nSat Jan 11 16:05:03 2020 100.64.1.1:47596 Connection reset, restarting [0]\nSat Jan 11 16:05:09 2020 TCP connection established with [AF_INET]10.250.7.77:22792\nSat Jan 11 16:05:09 2020 10.250.7.77:22792 TCP connection established with [AF_INET]100.64.1.1:52024\nSat Jan 11 16:05:09 2020 10.250.7.77:22792 Connection reset, restarting [0]\nSat Jan 11 16:05:09 2020 100.64.1.1:52024 Connection reset, restarting [0]\nSat Jan 11 16:05:13 2020 TCP connection established with [AF_INET]10.250.7.77:6664\nSat Jan 11 16:05:13 2020 10.250.7.77:6664 TCP connection established with [AF_INET]100.64.1.1:47604\nSat Jan 11 16:05:13 2020 10.250.7.77:6664 Connection reset, restarting [0]\nSat Jan 11 16:05:13 2020 100.64.1.1:47604 Connection reset, restarting [0]\nSat Jan 11 16:05:19 2020 TCP connection established with [AF_INET]10.250.7.77:22800\nSat Jan 11 16:05:19 2020 10.250.7.77:22800 TCP connection established with [AF_INET]100.64.1.1:52032\nSat Jan 11 16:05:19 2020 10.250.7.77:22800 Connection reset, restarting [0]\nSat Jan 11 16:05:19 2020 100.64.1.1:52032 Connection reset, restarting [0]\nSat Jan 11 16:05:23 2020 TCP connection established with [AF_INET]10.250.7.77:6672\nSat Jan 11 16:05:23 2020 10.250.7.77:6672 TCP connection established with [AF_INET]100.64.1.1:47612\nSat Jan 11 16:05:23 2020 10.250.7.77:6672 Connection reset, restarting [0]\nSat Jan 11 16:05:23 2020 100.64.1.1:47612 Connection reset, restarting [0]\nSat Jan 11 16:05:29 2020 TCP connection established with [AF_INET]10.250.7.77:22810\nSat Jan 11 16:05:29 2020 10.250.7.77:22810 TCP connection established with [AF_INET]100.64.1.1:52042\nSat Jan 11 16:05:29 2020 10.250.7.77:22810 Connection reset, restarting [0]\nSat Jan 11 16:05:29 2020 100.64.1.1:52042 Connection reset, restarting [0]\nSat Jan 11 16:05:33 2020 TCP connection established with [AF_INET]10.250.7.77:6682\nSat Jan 11 16:05:33 2020 10.250.7.77:6682 TCP connection established with [AF_INET]100.64.1.1:47622\nSat Jan 11 16:05:33 2020 10.250.7.77:6682 Connection reset, restarting [0]\nSat Jan 11 
16:05:33 2020 100.64.1.1:47622 Connection reset, restarting [0]\nSat Jan 11 16:05:39 2020 TCP connection established with [AF_INET]10.250.7.77:22814\nSat Jan 11 16:05:39 2020 10.250.7.77:22814 TCP connection established with [AF_INET]100.64.1.1:52046\nSat Jan 11 16:05:39 2020 10.250.7.77:22814 Connection reset, restarting [0]\nSat Jan 11 16:05:39 2020 100.64.1.1:52046 Connection reset, restarting [0]\nSat Jan 11 16:05:43 2020 TCP connection established with [AF_INET]10.250.7.77:6686\nSat Jan 11 16:05:43 2020 10.250.7.77:6686 TCP connection established with [AF_INET]100.64.1.1:47626\nSat Jan 11 16:05:43 2020 10.250.7.77:6686 Connection reset, restarting [0]\nSat Jan 11 16:05:43 2020 100.64.1.1:47626 Connection reset, restarting [0]\nSat Jan 11 16:05:49 2020 TCP connection established with [AF_INET]10.250.7.77:22824\nSat Jan 11 16:05:49 2020 10.250.7.77:22824 TCP connection established with [AF_INET]100.64.1.1:52056\nSat Jan 11 16:05:49 2020 10.250.7.77:22824 Connection reset, restarting [0]\nSat Jan 11 16:05:49 2020 100.64.1.1:52056 Connection reset, restarting [0]\nSat Jan 11 16:05:53 2020 TCP connection established with [AF_INET]10.250.7.77:6700\nSat Jan 11 16:05:53 2020 10.250.7.77:6700 TCP connection established with [AF_INET]100.64.1.1:47640\nSat Jan 11 16:05:53 2020 10.250.7.77:6700 Connection reset, restarting [0]\nSat Jan 11 16:05:53 2020 100.64.1.1:47640 Connection reset, restarting [0]\nSat Jan 11 16:05:59 2020 TCP connection established with [AF_INET]10.250.7.77:22834\nSat Jan 11 16:05:59 2020 10.250.7.77:22834 TCP connection established with [AF_INET]100.64.1.1:52066\nSat Jan 11 16:05:59 2020 10.250.7.77:22834 Connection reset, restarting [0]\nSat Jan 11 16:05:59 2020 100.64.1.1:52066 Connection reset, restarting [0]\nSat Jan 11 16:06:03 2020 TCP connection established with [AF_INET]10.250.7.77:6748\nSat Jan 11 16:06:03 2020 10.250.7.77:6748 TCP connection established with [AF_INET]100.64.1.1:47688\nSat Jan 11 16:06:03 2020 10.250.7.77:6748 Connection reset, restarting [0]\nSat Jan 11 16:06:03 2020 100.64.1.1:47688 Connection reset, restarting [0]\nSat Jan 11 16:06:09 2020 TCP connection established with [AF_INET]10.250.7.77:22848\nSat Jan 11 16:06:09 2020 10.250.7.77:22848 TCP connection established with [AF_INET]100.64.1.1:52080\nSat Jan 11 16:06:09 2020 10.250.7.77:22848 Connection reset, restarting [0]\nSat Jan 11 16:06:09 2020 100.64.1.1:52080 Connection reset, restarting [0]\nSat Jan 11 16:06:13 2020 TCP connection established with [AF_INET]10.250.7.77:6760\nSat Jan 11 16:06:13 2020 10.250.7.77:6760 TCP connection established with [AF_INET]100.64.1.1:47700\nSat Jan 11 16:06:13 2020 10.250.7.77:6760 Connection reset, restarting [0]\nSat Jan 11 16:06:13 2020 100.64.1.1:47700 Connection reset, restarting [0]\nSat Jan 11 16:06:19 2020 TCP connection established with [AF_INET]10.250.7.77:22862\nSat Jan 11 16:06:19 2020 10.250.7.77:22862 TCP connection established with [AF_INET]100.64.1.1:52094\nSat Jan 11 16:06:19 2020 10.250.7.77:22862 Connection reset, restarting [0]\nSat Jan 11 16:06:19 2020 100.64.1.1:52094 Connection reset, restarting [0]\nSat Jan 11 16:06:23 2020 TCP connection established with [AF_INET]10.250.7.77:6768\nSat Jan 11 16:06:23 2020 10.250.7.77:6768 TCP connection established with [AF_INET]100.64.1.1:47708\nSat Jan 11 16:06:23 2020 10.250.7.77:6768 Connection reset, restarting [0]\nSat Jan 11 16:06:23 2020 100.64.1.1:47708 Connection reset, restarting [0]\nSat Jan 11 16:06:29 2020 TCP connection established with [AF_INET]10.250.7.77:22872\nSat Jan 11 16:06:29 
2020 10.250.7.77:22872 TCP connection established with [AF_INET]100.64.1.1:52104\nSat Jan 11 16:06:29 2020 10.250.7.77:22872 Connection reset, restarting [0]\nSat Jan 11 16:06:29 2020 100.64.1.1:52104 Connection reset, restarting [0]\nSat Jan 11 16:06:33 2020 TCP connection established with [AF_INET]10.250.7.77:6778\nSat Jan 11 16:06:33 2020 10.250.7.77:6778 TCP connection established with [AF_INET]100.64.1.1:47718\nSat Jan 11 16:06:33 2020 10.250.7.77:6778 Connection reset, restarting [0]\nSat Jan 11 16:06:33 2020 100.64.1.1:47718 Connection reset, restarting [0]\nSat Jan 11 16:06:39 2020 TCP connection established with [AF_INET]10.250.7.77:22876\nSat Jan 11 16:06:39 2020 10.250.7.77:22876 TCP connection established with [AF_INET]100.64.1.1:52108\nSat Jan 11 16:06:39 2020 10.250.7.77:22876 Connection reset, restarting [0]\nSat Jan 11 16:06:39 2020 100.64.1.1:52108 Connection reset, restarting [0]\nSat Jan 11 16:06:43 2020 TCP connection established with [AF_INET]10.250.7.77:6782\nSat Jan 11 16:06:43 2020 10.250.7.77:6782 TCP connection established with [AF_INET]100.64.1.1:47722\nSat Jan 11 16:06:43 2020 10.250.7.77:6782 Connection reset, restarting [0]\nSat Jan 11 16:06:43 2020 100.64.1.1:47722 Connection reset, restarting [0]\nSat Jan 11 16:06:49 2020 TCP connection established with [AF_INET]10.250.7.77:22886\nSat Jan 11 16:06:49 2020 10.250.7.77:22886 TCP connection established with [AF_INET]100.64.1.1:52118\nSat Jan 11 16:06:49 2020 10.250.7.77:22886 Connection reset, restarting [0]\nSat Jan 11 16:06:49 2020 100.64.1.1:52118 Connection reset, restarting [0]\nSat Jan 11 16:06:53 2020 TCP connection established with [AF_INET]10.250.7.77:6792\nSat Jan 11 16:06:53 2020 10.250.7.77:6792 TCP connection established with [AF_INET]100.64.1.1:47732\nSat Jan 11 16:06:53 2020 10.250.7.77:6792 Connection reset, restarting [0]\nSat Jan 11 16:06:53 2020 100.64.1.1:47732 Connection reset, restarting [0]\nSat Jan 11 16:06:59 2020 TCP connection established with [AF_INET]10.250.7.77:22896\nSat Jan 11 16:06:59 2020 10.250.7.77:22896 TCP connection established with [AF_INET]100.64.1.1:52128\nSat Jan 11 16:06:59 2020 10.250.7.77:22896 Connection reset, restarting [0]\nSat Jan 11 16:06:59 2020 100.64.1.1:52128 Connection reset, restarting [0]\nSat Jan 11 16:07:03 2020 TCP connection established with [AF_INET]10.250.7.77:6806\nSat Jan 11 16:07:03 2020 10.250.7.77:6806 TCP connection established with [AF_INET]100.64.1.1:47746\nSat Jan 11 16:07:03 2020 10.250.7.77:6806 Connection reset, restarting [0]\nSat Jan 11 16:07:03 2020 100.64.1.1:47746 Connection reset, restarting [0]\nSat Jan 11 16:07:09 2020 TCP connection established with [AF_INET]10.250.7.77:22910\nSat Jan 11 16:07:09 2020 10.250.7.77:22910 TCP connection established with [AF_INET]100.64.1.1:52142\nSat Jan 11 16:07:09 2020 10.250.7.77:22910 Connection reset, restarting [0]\nSat Jan 11 16:07:09 2020 100.64.1.1:52142 Connection reset, restarting [0]\nSat Jan 11 16:07:13 2020 TCP connection established with [AF_INET]10.250.7.77:6820\nSat Jan 11 16:07:13 2020 10.250.7.77:6820 TCP connection established with [AF_INET]100.64.1.1:47760\nSat Jan 11 16:07:13 2020 10.250.7.77:6820 Connection reset, restarting [0]\nSat Jan 11 16:07:13 2020 100.64.1.1:47760 Connection reset, restarting [0]\nSat Jan 11 16:07:19 2020 TCP connection established with [AF_INET]10.250.7.77:22920\nSat Jan 11 16:07:19 2020 10.250.7.77:22920 TCP connection established with [AF_INET]100.64.1.1:52152\nSat Jan 11 16:07:19 2020 10.250.7.77:22920 Connection reset, restarting [0]\nSat Jan 11 
16:07:19 2020 100.64.1.1:52152 Connection reset, restarting [0]\nSat Jan 11 16:07:23 2020 TCP connection established with [AF_INET]10.250.7.77:6826\nSat Jan 11 16:07:23 2020 10.250.7.77:6826 Connection reset, restarting [0]\nSat Jan 11 16:07:23 2020 TCP connection established with [AF_INET]100.64.1.1:47766\nSat Jan 11 16:07:23 2020 100.64.1.1:47766 Connection reset, restarting [0]\nSat Jan 11 16:07:29 2020 TCP connection established with [AF_INET]10.250.7.77:22932\nSat Jan 11 16:07:29 2020 10.250.7.77:22932 TCP connection established with [AF_INET]100.64.1.1:52164\nSat Jan 11 16:07:29 2020 10.250.7.77:22932 Connection reset, restarting [0]\nSat Jan 11 16:07:29 2020 100.64.1.1:52164 Connection reset, restarting [0]\nSat Jan 11 16:07:33 2020 TCP connection established with [AF_INET]10.250.7.77:6836\nSat Jan 11 16:07:33 2020 10.250.7.77:6836 TCP connection established with [AF_INET]100.64.1.1:47776\nSat Jan 11 16:07:33 2020 10.250.7.77:6836 Connection reset, restarting [0]\nSat Jan 11 16:07:33 2020 100.64.1.1:47776 Connection reset, restarting [0]\nSat Jan 11 16:07:39 2020 TCP connection established with [AF_INET]10.250.7.77:22938\nSat Jan 11 16:07:39 2020 10.250.7.77:22938 TCP connection established with [AF_INET]100.64.1.1:52170\nSat Jan 11 16:07:39 2020 10.250.7.77:22938 Connection reset, restarting [0]\nSat Jan 11 16:07:39 2020 100.64.1.1:52170 Connection reset, restarting [0]\nSat Jan 11 16:07:43 2020 TCP connection established with [AF_INET]10.250.7.77:6840\nSat Jan 11 16:07:43 2020 10.250.7.77:6840 TCP connection established with [AF_INET]100.64.1.1:47780\nSat Jan 11 16:07:43 2020 10.250.7.77:6840 Connection reset, restarting [0]\nSat Jan 11 16:07:43 2020 100.64.1.1:47780 Connection reset, restarting [0]\nSat Jan 11 16:07:49 2020 TCP connection established with [AF_INET]10.250.7.77:22948\nSat Jan 11 16:07:49 2020 10.250.7.77:22948 TCP connection established with [AF_INET]100.64.1.1:52180\nSat Jan 11 16:07:49 2020 10.250.7.77:22948 Connection reset, restarting [0]\nSat Jan 11 16:07:49 2020 100.64.1.1:52180 Connection reset, restarting [0]\nSat Jan 11 16:07:53 2020 TCP connection established with [AF_INET]10.250.7.77:6850\nSat Jan 11 16:07:53 2020 10.250.7.77:6850 TCP connection established with [AF_INET]100.64.1.1:47790\nSat Jan 11 16:07:53 2020 10.250.7.77:6850 Connection reset, restarting [0]\nSat Jan 11 16:07:53 2020 100.64.1.1:47790 Connection reset, restarting [0]\nSat Jan 11 16:07:59 2020 TCP connection established with [AF_INET]10.250.7.77:22956\nSat Jan 11 16:07:59 2020 10.250.7.77:22956 TCP connection established with [AF_INET]100.64.1.1:52188\nSat Jan 11 16:07:59 2020 10.250.7.77:22956 Connection reset, restarting [0]\nSat Jan 11 16:07:59 2020 100.64.1.1:52188 Connection reset, restarting [0]\nSat Jan 11 16:08:03 2020 TCP connection established with [AF_INET]10.250.7.77:6864\nSat Jan 11 16:08:03 2020 10.250.7.77:6864 TCP connection established with [AF_INET]100.64.1.1:47804\nSat Jan 11 16:08:03 2020 10.250.7.77:6864 Connection reset, restarting [0]\nSat Jan 11 16:08:03 2020 100.64.1.1:47804 Connection reset, restarting [0]\nSat Jan 11 16:08:09 2020 TCP connection established with [AF_INET]10.250.7.77:22970\nSat Jan 11 16:08:09 2020 10.250.7.77:22970 TCP connection established with [AF_INET]100.64.1.1:52202\nSat Jan 11 16:08:09 2020 10.250.7.77:22970 Connection reset, restarting [0]\nSat Jan 11 16:08:09 2020 100.64.1.1:52202 Connection reset, restarting [0]\nSat Jan 11 16:08:13 2020 TCP connection established with [AF_INET]10.250.7.77:6874\nSat Jan 11 16:08:13 2020 
10.250.7.77:6874 TCP connection established with [AF_INET]100.64.1.1:47814\nSat Jan 11 16:08:13 2020 10.250.7.77:6874 Connection reset, restarting [0]\nSat Jan 11 16:08:13 2020 100.64.1.1:47814 Connection reset, restarting [0]\nSat Jan 11 16:08:19 2020 TCP connection established with [AF_INET]10.250.7.77:22980\nSat Jan 11 16:08:19 2020 10.250.7.77:22980 TCP connection established with [AF_INET]100.64.1.1:52212\nSat Jan 11 16:08:19 2020 10.250.7.77:22980 Connection reset, restarting [0]\nSat Jan 11 16:08:19 2020 100.64.1.1:52212 Connection reset, restarting [0]\nSat Jan 11 16:08:23 2020 TCP connection established with [AF_INET]10.250.7.77:6884\nSat Jan 11 16:08:23 2020 10.250.7.77:6884 TCP connection established with [AF_INET]100.64.1.1:47824\nSat Jan 11 16:08:23 2020 10.250.7.77:6884 Connection reset, restarting [0]\nSat Jan 11 16:08:23 2020 100.64.1.1:47824 Connection reset, restarting [0]\nSat Jan 11 16:08:29 2020 TCP connection established with [AF_INET]10.250.7.77:22988\nSat Jan 11 16:08:29 2020 10.250.7.77:22988 TCP connection established with [AF_INET]100.64.1.1:52220\nSat Jan 11 16:08:29 2020 10.250.7.77:22988 Connection reset, restarting [0]\nSat Jan 11 16:08:29 2020 100.64.1.1:52220 Connection reset, restarting [0]\nSat Jan 11 16:08:33 2020 TCP connection established with [AF_INET]10.250.7.77:6894\nSat Jan 11 16:08:33 2020 10.250.7.77:6894 TCP connection established with [AF_INET]100.64.1.1:47834\nSat Jan 11 16:08:33 2020 10.250.7.77:6894 Connection reset, restarting [0]\nSat Jan 11 16:08:33 2020 100.64.1.1:47834 Connection reset, restarting [0]\nSat Jan 11 16:08:39 2020 TCP connection established with [AF_INET]10.250.7.77:22992\nSat Jan 11 16:08:39 2020 10.250.7.77:22992 TCP connection established with [AF_INET]100.64.1.1:52224\nSat Jan 11 16:08:39 2020 10.250.7.77:22992 Connection reset, restarting [0]\nSat Jan 11 16:08:39 2020 100.64.1.1:52224 Connection reset, restarting [0]\nSat Jan 11 16:08:43 2020 TCP connection established with [AF_INET]10.250.7.77:6898\nSat Jan 11 16:08:43 2020 10.250.7.77:6898 TCP connection established with [AF_INET]100.64.1.1:47838\nSat Jan 11 16:08:43 2020 10.250.7.77:6898 Connection reset, restarting [0]\nSat Jan 11 16:08:43 2020 100.64.1.1:47838 Connection reset, restarting [0]\nSat Jan 11 16:08:49 2020 TCP connection established with [AF_INET]10.250.7.77:23006\nSat Jan 11 16:08:49 2020 10.250.7.77:23006 TCP connection established with [AF_INET]100.64.1.1:52238\nSat Jan 11 16:08:49 2020 10.250.7.77:23006 Connection reset, restarting [0]\nSat Jan 11 16:08:49 2020 100.64.1.1:52238 Connection reset, restarting [0]\nSat Jan 11 16:08:53 2020 TCP connection established with [AF_INET]10.250.7.77:6908\nSat Jan 11 16:08:53 2020 10.250.7.77:6908 TCP connection established with [AF_INET]100.64.1.1:47848\nSat Jan 11 16:08:53 2020 10.250.7.77:6908 Connection reset, restarting [0]\nSat Jan 11 16:08:53 2020 100.64.1.1:47848 Connection reset, restarting [0]\nSat Jan 11 16:08:59 2020 TCP connection established with [AF_INET]10.250.7.77:23052\nSat Jan 11 16:08:59 2020 10.250.7.77:23052 TCP connection established with [AF_INET]100.64.1.1:52284\nSat Jan 11 16:08:59 2020 10.250.7.77:23052 Connection reset, restarting [0]\nSat Jan 11 16:08:59 2020 100.64.1.1:52284 Connection reset, restarting [0]\nSat Jan 11 16:09:03 2020 TCP connection established with [AF_INET]10.250.7.77:6922\nSat Jan 11 16:09:03 2020 10.250.7.77:6922 TCP connection established with [AF_INET]100.64.1.1:47862\nSat Jan 11 16:09:03 2020 10.250.7.77:6922 Connection reset, restarting [0]\nSat Jan 11 
16:09:03 2020 100.64.1.1:47862 Connection reset, restarting [0]\nSat Jan 11 16:09:09 2020 TCP connection established with [AF_INET]10.250.7.77:23068\nSat Jan 11 16:09:09 2020 10.250.7.77:23068 TCP connection established with [AF_INET]100.64.1.1:52300\nSat Jan 11 16:09:09 2020 10.250.7.77:23068 Connection reset, restarting [0]\nSat Jan 11 16:09:09 2020 100.64.1.1:52300 Connection reset, restarting [0]\nSat Jan 11 16:09:13 2020 TCP connection established with [AF_INET]10.250.7.77:6932\nSat Jan 11 16:09:13 2020 10.250.7.77:6932 TCP connection established with [AF_INET]100.64.1.1:47872\nSat Jan 11 16:09:13 2020 10.250.7.77:6932 Connection reset, restarting [0]\nSat Jan 11 16:09:13 2020 100.64.1.1:47872 Connection reset, restarting [0]\nSat Jan 11 16:09:19 2020 TCP connection established with [AF_INET]10.250.7.77:23076\nSat Jan 11 16:09:19 2020 10.250.7.77:23076 TCP connection established with [AF_INET]100.64.1.1:52308\nSat Jan 11 16:09:19 2020 10.250.7.77:23076 Connection reset, restarting [0]\nSat Jan 11 16:09:19 2020 100.64.1.1:52308 Connection reset, restarting [0]\nSat Jan 11 16:09:23 2020 TCP connection established with [AF_INET]10.250.7.77:6938\nSat Jan 11 16:09:23 2020 10.250.7.77:6938 TCP connection established with [AF_INET]100.64.1.1:47878\nSat Jan 11 16:09:23 2020 10.250.7.77:6938 Connection reset, restarting [0]\nSat Jan 11 16:09:23 2020 100.64.1.1:47878 Connection reset, restarting [0]\nSat Jan 11 16:09:29 2020 TCP connection established with [AF_INET]10.250.7.77:23084\nSat Jan 11 16:09:29 2020 10.250.7.77:23084 TCP connection established with [AF_INET]100.64.1.1:52316\nSat Jan 11 16:09:29 2020 10.250.7.77:23084 Connection reset, restarting [0]\nSat Jan 11 16:09:29 2020 100.64.1.1:52316 Connection reset, restarting [0]\nSat Jan 11 16:09:33 2020 TCP connection established with [AF_INET]10.250.7.77:6948\nSat Jan 11 16:09:33 2020 10.250.7.77:6948 TCP connection established with [AF_INET]100.64.1.1:47888\nSat Jan 11 16:09:33 2020 10.250.7.77:6948 Connection reset, restarting [0]\nSat Jan 11 16:09:33 2020 100.64.1.1:47888 Connection reset, restarting [0]\nSat Jan 11 16:09:39 2020 TCP connection established with [AF_INET]10.250.7.77:23088\nSat Jan 11 16:09:39 2020 10.250.7.77:23088 TCP connection established with [AF_INET]100.64.1.1:52320\nSat Jan 11 16:09:39 2020 10.250.7.77:23088 Connection reset, restarting [0]\nSat Jan 11 16:09:39 2020 100.64.1.1:52320 Connection reset, restarting [0]\nSat Jan 11 16:09:43 2020 TCP connection established with [AF_INET]10.250.7.77:6956\nSat Jan 11 16:09:43 2020 10.250.7.77:6956 TCP connection established with [AF_INET]100.64.1.1:47896\nSat Jan 11 16:09:43 2020 10.250.7.77:6956 Connection reset, restarting [0]\nSat Jan 11 16:09:43 2020 100.64.1.1:47896 Connection reset, restarting [0]\nSat Jan 11 16:09:49 2020 TCP connection established with [AF_INET]10.250.7.77:23098\nSat Jan 11 16:09:49 2020 10.250.7.77:23098 TCP connection established with [AF_INET]100.64.1.1:52330\nSat Jan 11 16:09:49 2020 10.250.7.77:23098 Connection reset, restarting [0]\nSat Jan 11 16:09:49 2020 100.64.1.1:52330 Connection reset, restarting [0]\nSat Jan 11 16:09:53 2020 TCP connection established with [AF_INET]10.250.7.77:6966\nSat Jan 11 16:09:53 2020 10.250.7.77:6966 TCP connection established with [AF_INET]100.64.1.1:47906\nSat Jan 11 16:09:53 2020 10.250.7.77:6966 Connection reset, restarting [0]\nSat Jan 11 16:09:53 2020 100.64.1.1:47906 Connection reset, restarting [0]\nSat Jan 11 16:09:59 2020 TCP connection established with [AF_INET]10.250.7.77:23110\nSat Jan 11 16:09:59 
2020 10.250.7.77:23110 TCP connection established with [AF_INET]100.64.1.1:52342\nSat Jan 11 16:09:59 2020 10.250.7.77:23110 Connection reset, restarting [0]\nSat Jan 11 16:09:59 2020 100.64.1.1:52342 Connection reset, restarting [0]\nSat Jan 11 16:10:03 2020 TCP connection established with [AF_INET]10.250.7.77:6982\nSat Jan 11 16:10:03 2020 10.250.7.77:6982 TCP connection established with [AF_INET]100.64.1.1:47922\nSat Jan 11 16:10:03 2020 10.250.7.77:6982 Connection reset, restarting [0]\nSat Jan 11 16:10:03 2020 100.64.1.1:47922 Connection reset, restarting [0]\nSat Jan 11 16:10:09 2020 TCP connection established with [AF_INET]10.250.7.77:23126\nSat Jan 11 16:10:09 2020 10.250.7.77:23126 TCP connection established with [AF_INET]100.64.1.1:52358\nSat Jan 11 16:10:09 2020 10.250.7.77:23126 Connection reset, restarting [0]\nSat Jan 11 16:10:09 2020 100.64.1.1:52358 Connection reset, restarting [0]\nSat Jan 11 16:10:13 2020 TCP connection established with [AF_INET]10.250.7.77:6990\nSat Jan 11 16:10:13 2020 10.250.7.77:6990 TCP connection established with [AF_INET]100.64.1.1:47930\nSat Jan 11 16:10:13 2020 10.250.7.77:6990 Connection reset, restarting [0]\nSat Jan 11 16:10:13 2020 100.64.1.1:47930 Connection reset, restarting [0]\nSat Jan 11 16:10:19 2020 TCP connection established with [AF_INET]10.250.7.77:23134\nSat Jan 11 16:10:19 2020 10.250.7.77:23134 TCP connection established with [AF_INET]100.64.1.1:52366\nSat Jan 11 16:10:19 2020 10.250.7.77:23134 Connection reset, restarting [0]\nSat Jan 11 16:10:19 2020 100.64.1.1:52366 Connection reset, restarting [0]\nSat Jan 11 16:10:23 2020 TCP connection established with [AF_INET]10.250.7.77:6996\nSat Jan 11 16:10:23 2020 10.250.7.77:6996 TCP connection established with [AF_INET]100.64.1.1:47936\nSat Jan 11 16:10:23 2020 10.250.7.77:6996 Connection reset, restarting [0]\nSat Jan 11 16:10:23 2020 100.64.1.1:47936 Connection reset, restarting [0]\nSat Jan 11 16:10:29 2020 TCP connection established with [AF_INET]10.250.7.77:23142\nSat Jan 11 16:10:29 2020 10.250.7.77:23142 TCP connection established with [AF_INET]100.64.1.1:52374\nSat Jan 11 16:10:29 2020 10.250.7.77:23142 Connection reset, restarting [0]\nSat Jan 11 16:10:29 2020 100.64.1.1:52374 Connection reset, restarting [0]\nSat Jan 11 16:10:33 2020 TCP connection established with [AF_INET]10.250.7.77:7006\nSat Jan 11 16:10:33 2020 10.250.7.77:7006 TCP connection established with [AF_INET]100.64.1.1:47946\nSat Jan 11 16:10:33 2020 10.250.7.77:7006 Connection reset, restarting [0]\nSat Jan 11 16:10:33 2020 100.64.1.1:47946 Connection reset, restarting [0]\nSat Jan 11 16:10:36 2020 vpn-seed/100.64.1.1:47264 Connection reset, restarting [0]\nSat Jan 11 16:10:39 2020 TCP connection established with [AF_INET]10.250.7.77:23146\nSat Jan 11 16:10:39 2020 10.250.7.77:23146 TCP connection established with [AF_INET]100.64.1.1:52378\nSat Jan 11 16:10:39 2020 10.250.7.77:23146 Connection reset, restarting [0]\nSat Jan 11 16:10:39 2020 100.64.1.1:52378 Connection reset, restarting [0]\nSat Jan 11 16:10:43 2020 TCP connection established with [AF_INET]10.250.7.77:7010\nSat Jan 11 16:10:43 2020 10.250.7.77:7010 TCP connection established with [AF_INET]100.64.1.1:47950\nSat Jan 11 16:10:43 2020 10.250.7.77:7010 Connection reset, restarting [0]\nSat Jan 11 16:10:43 2020 100.64.1.1:47950 Connection reset, restarting [0]\nSat Jan 11 16:10:49 2020 TCP connection established with [AF_INET]10.250.7.77:23156\nSat Jan 11 16:10:49 2020 10.250.7.77:23156 TCP connection established with 
[AF_INET]100.64.1.1:52388\nSat Jan 11 16:10:49 2020 10.250.7.77:23156 Connection reset, restarting [0]\nSat Jan 11 16:10:49 2020 100.64.1.1:52388 Connection reset, restarting [0]\nSat Jan 11 16:10:53 2020 TCP connection established with [AF_INET]10.250.7.77:7024\nSat Jan 11 16:10:53 2020 10.250.7.77:7024 TCP connection established with [AF_INET]100.64.1.1:47964\nSat Jan 11 16:10:53 2020 10.250.7.77:7024 Connection reset, restarting [0]\nSat Jan 11 16:10:53 2020 100.64.1.1:47964 Connection reset, restarting [0]\nSat Jan 11 16:10:59 2020 TCP connection established with [AF_INET]10.250.7.77:23166\nSat Jan 11 16:10:59 2020 10.250.7.77:23166 TCP connection established with [AF_INET]100.64.1.1:52398\nSat Jan 11 16:10:59 2020 10.250.7.77:23166 Connection reset, restarting [0]\nSat Jan 11 16:10:59 2020 100.64.1.1:52398 Connection reset, restarting [0]\nSat Jan 11 16:11:03 2020 TCP connection established with [AF_INET]10.250.7.77:7040\nSat Jan 11 16:11:03 2020 10.250.7.77:7040 TCP connection established with [AF_INET]100.64.1.1:47980\nSat Jan 11 16:11:03 2020 10.250.7.77:7040 Connection reset, restarting [0]\nSat Jan 11 16:11:03 2020 100.64.1.1:47980 Connection reset, restarting [0]\nSat Jan 11 16:11:09 2020 TCP connection established with [AF_INET]10.250.7.77:23180\nSat Jan 11 16:11:09 2020 10.250.7.77:23180 TCP connection established with [AF_INET]100.64.1.1:52412\nSat Jan 11 16:11:09 2020 10.250.7.77:23180 Connection reset, restarting [0]\nSat Jan 11 16:11:09 2020 100.64.1.1:52412 Connection reset, restarting [0]\nSat Jan 11 16:11:10 2020 TCP connection established with [AF_INET]10.250.7.77:7046\nSat Jan 11 16:11:10 2020 10.250.7.77:7046 Connection reset, restarting [0]\nSat Jan 11 16:11:10 2020 TCP connection established with [AF_INET]100.64.1.1:52416\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_VER=2.4.6\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_PLAT=linux\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_PROTO=2\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_NCP=2\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_LZ4=1\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_LZ4v2=1\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_LZO=1\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_COMP_STUB=1\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_COMP_STUBv2=1\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 peer info: IV_TCPNL=1\nSat Jan 11 16:11:11 2020 100.64.1.1:52416 [vpn-seed] Peer Connection Initiated with [AF_INET]100.64.1.1:52416\nSat Jan 11 16:11:11 2020 vpn-seed/100.64.1.1:52416 MULTI_sva: pool returned IPv4=192.168.123.6, IPv6=(Not enabled)\nSat Jan 11 16:11:13 2020 TCP connection established with [AF_INET]10.250.7.77:7050\nSat Jan 11 16:11:13 2020 10.250.7.77:7050 TCP connection established with [AF_INET]100.64.1.1:47990\nSat Jan 11 16:11:13 2020 10.250.7.77:7050 Connection reset, restarting [0]\nSat Jan 11 16:11:13 2020 100.64.1.1:47990 Connection reset, restarting [0]\nSat Jan 11 16:11:19 2020 TCP connection established with [AF_INET]10.250.7.77:23194\nSat Jan 11 16:11:19 2020 10.250.7.77:23194 TCP connection established with [AF_INET]100.64.1.1:52426\nSat Jan 11 16:11:19 2020 10.250.7.77:23194 Connection reset, restarting [0]\nSat Jan 11 16:11:19 2020 100.64.1.1:52426 Connection reset, restarting [0]\nSat Jan 11 16:11:23 2020 TCP connection established with [AF_INET]10.250.7.77:7056\nSat Jan 11 16:11:23 2020 10.250.7.77:7056 TCP connection established with [AF_INET]100.64.1.1:47996\nSat Jan 11 16:11:23 2020 
10.250.7.77:7056 Connection reset, restarting [0]\nSat Jan 11 16:11:23 2020 100.64.1.1:47996 Connection reset, restarting [0]\nSat Jan 11 16:11:29 2020 TCP connection established with [AF_INET]10.250.7.77:23202\nSat Jan 11 16:11:29 2020 10.250.7.77:23202 TCP connection established with [AF_INET]100.64.1.1:52434\nSat Jan 11 16:11:29 2020 10.250.7.77:23202 Connection reset, restarting [0]\nSat Jan 11 16:11:29 2020 100.64.1.1:52434 Connection reset, restarting [0]\nSat Jan 11 16:11:33 2020 TCP connection established with [AF_INET]10.250.7.77:7066\nSat Jan 11 16:11:33 2020 10.250.7.77:7066 TCP connection established with [AF_INET]100.64.1.1:48006\nSat Jan 11 16:11:33 2020 10.250.7.77:7066 Connection reset, restarting [0]\nSat Jan 11 16:11:33 2020 100.64.1.1:48006 Connection reset, restarting [0]\nSat Jan 11 16:11:39 2020 TCP connection established with [AF_INET]10.250.7.77:23206\nSat Jan 11 16:11:39 2020 10.250.7.77:23206 TCP connection established with [AF_INET]100.64.1.1:52438\nSat Jan 11 16:11:39 2020 10.250.7.77:23206 Connection reset, restarting [0]\nSat Jan 11 16:11:39 2020 100.64.1.1:52438 Connection reset, restarting [0]\nSat Jan 11 16:11:43 2020 TCP connection established with [AF_INET]10.250.7.77:7070\nSat Jan 11 16:11:43 2020 10.250.7.77:7070 TCP connection established with [AF_INET]100.64.1.1:48010\nSat Jan 11 16:11:43 2020 10.250.7.77:7070 Connection reset, restarting [0]\nSat Jan 11 16:11:43 2020 100.64.1.1:48010 Connection reset, restarting [0]\nSat Jan 11 16:11:49 2020 TCP connection established with [AF_INET]10.250.7.77:23216\nSat Jan 11 16:11:49 2020 10.250.7.77:23216 TCP connection established with [AF_INET]100.64.1.1:52448\nSat Jan 11 16:11:49 2020 10.250.7.77:23216 Connection reset, restarting [0]\nSat Jan 11 16:11:49 2020 100.64.1.1:52448 Connection reset, restarting [0]\nSat Jan 11 16:11:53 2020 TCP connection established with [AF_INET]10.250.7.77:7082\nSat Jan 11 16:11:53 2020 10.250.7.77:7082 TCP connection established with [AF_INET]100.64.1.1:48022\nSat Jan 11 16:11:53 2020 10.250.7.77:7082 Connection reset, restarting [0]\nSat Jan 11 16:11:53 2020 100.64.1.1:48022 Connection reset, restarting [0]\nSat Jan 11 16:11:59 2020 TCP connection established with [AF_INET]10.250.7.77:23224\nSat Jan 11 16:11:59 2020 10.250.7.77:23224 TCP connection established with [AF_INET]100.64.1.1:52456\nSat Jan 11 16:11:59 2020 10.250.7.77:23224 Connection reset, restarting [0]\nSat Jan 11 16:11:59 2020 100.64.1.1:52456 Connection reset, restarting [0]\nSat Jan 11 16:12:03 2020 TCP connection established with [AF_INET]10.250.7.77:7096\nSat Jan 11 16:12:03 2020 10.250.7.77:7096 TCP connection established with [AF_INET]100.64.1.1:48036\nSat Jan 11 16:12:03 2020 10.250.7.77:7096 Connection reset, restarting [0]\nSat Jan 11 16:12:03 2020 100.64.1.1:48036 Connection reset, restarting [0]\nSat Jan 11 16:12:09 2020 TCP connection established with [AF_INET]100.64.1.1:52472\nSat Jan 11 16:12:09 2020 100.64.1.1:52472 Connection reset, restarting [0]\nSat Jan 11 16:12:09 2020 TCP connection established with [AF_INET]10.250.7.77:23240\nSat Jan 11 16:12:09 2020 10.250.7.77:23240 Connection reset, restarting [0]\nSat Jan 11 16:12:13 2020 TCP connection established with [AF_INET]10.250.7.77:7108\nSat Jan 11 16:12:13 2020 10.250.7.77:7108 TCP connection established with [AF_INET]100.64.1.1:48048\nSat Jan 11 16:12:13 2020 10.250.7.77:7108 Connection reset, restarting [0]\nSat Jan 11 16:12:13 2020 100.64.1.1:48048 Connection reset, restarting [0]\nSat Jan 11 16:12:19 2020 TCP connection established with 
[AF_INET]10.250.7.77:23248\nSat Jan 11 16:12:19 2020 10.250.7.77:23248 TCP connection established with [AF_INET]100.64.1.1:52480\nSat Jan 11 16:12:19 2020 10.250.7.77:23248 Connection reset, restarting [0]\nSat Jan 11 16:12:19 2020 100.64.1.1:52480 Connection reset, restarting [0]\nSat Jan 11 16:12:23 2020 TCP connection established with [AF_INET]10.250.7.77:7114\nSat Jan 11 16:12:23 2020 10.250.7.77:7114 TCP connection established with [AF_INET]100.64.1.1:48054\nSat Jan 11 16:12:23 2020 10.250.7.77:7114 Connection reset, restarting [0]\nSat Jan 11 16:12:23 2020 100.64.1.1:48054 Connection reset, restarting [0]\nSat Jan 11 16:12:29 2020 TCP connection established with [AF_INET]10.250.7.77:23260\nSat Jan 11 16:12:29 2020 10.250.7.77:23260 TCP connection established with [AF_INET]100.64.1.1:52492\nSat Jan 11 16:12:29 2020 10.250.7.77:23260 Connection reset, restarting [0]\nSat Jan 11 16:12:29 2020 100.64.1.1:52492 Connection reset, restarting [0]\nSat Jan 11 16:12:33 2020 TCP connection established with [AF_INET]10.250.7.77:7124\nSat Jan 11 16:12:33 2020 10.250.7.77:7124 TCP connection established with [AF_INET]100.64.1.1:48064\nSat Jan 11 16:12:33 2020 10.250.7.77:7124 Connection reset, restarting [0]\nSat Jan 11 16:12:33 2020 100.64.1.1:48064 Connection reset, restarting [0]\nSat Jan 11 16:12:39 2020 TCP connection established with [AF_INET]100.64.1.1:52496\nSat Jan 11 16:12:39 2020 100.64.1.1:52496 TCP connection established with [AF_INET]10.250.7.77:23264\nSat Jan 11 16:12:39 2020 100.64.1.1:52496 Connection reset, restarting [0]\nSat Jan 11 16:12:39 2020 10.250.7.77:23264 Connection reset, restarting [0]\nSat Jan 11 16:12:43 2020 TCP connection established with [AF_INET]10.250.7.77:7128\nSat Jan 11 16:12:43 2020 10.250.7.77:7128 TCP connection established with [AF_INET]100.64.1.1:48068\nSat Jan 11 16:12:43 2020 10.250.7.77:7128 Connection reset, restarting [0]\nSat Jan 11 16:12:43 2020 100.64.1.1:48068 Connection reset, restarting [0]\nSat Jan 11 16:12:49 2020 TCP connection established with [AF_INET]10.250.7.77:23274\nSat Jan 11 16:12:49 2020 10.250.7.77:23274 TCP connection established with [AF_INET]100.64.1.1:52506\nSat Jan 11 16:12:49 2020 10.250.7.77:23274 Connection reset, restarting [0]\nSat Jan 11 16:12:49 2020 100.64.1.1:52506 Connection reset, restarting [0]\nSat Jan 11 16:12:53 2020 TCP connection established with [AF_INET]10.250.7.77:7140\nSat Jan 11 16:12:53 2020 10.250.7.77:7140 TCP connection established with [AF_INET]100.64.1.1:48080\nSat Jan 11 16:12:53 2020 10.250.7.77:7140 Connection reset, restarting [0]\nSat Jan 11 16:12:53 2020 100.64.1.1:48080 Connection reset, restarting [0]\nSat Jan 11 16:12:59 2020 TCP connection established with [AF_INET]10.250.7.77:23284\nSat Jan 11 16:12:59 2020 10.250.7.77:23284 TCP connection established with [AF_INET]100.64.1.1:52516\nSat Jan 11 16:12:59 2020 10.250.7.77:23284 Connection reset, restarting [0]\nSat Jan 11 16:12:59 2020 100.64.1.1:52516 Connection reset, restarting [0]\nSat Jan 11 16:13:03 2020 TCP connection established with [AF_INET]10.250.7.77:7156\nSat Jan 11 16:13:03 2020 10.250.7.77:7156 TCP connection established with [AF_INET]100.64.1.1:48096\nSat Jan 11 16:13:03 2020 10.250.7.77:7156 Connection reset, restarting [0]\nSat Jan 11 16:13:03 2020 100.64.1.1:48096 Connection reset, restarting [0]\nSat Jan 11 16:13:09 2020 TCP connection established with [AF_INET]10.250.7.77:23298\nSat Jan 11 16:13:09 2020 10.250.7.77:23298 TCP connection established with [AF_INET]100.64.1.1:52530\nSat Jan 11 16:13:09 2020 
10.250.7.77:23298 Connection reset, restarting [0]\nSat Jan 11 16:13:09 2020 100.64.1.1:52530 Connection reset, restarting [0]\nSat Jan 11 16:13:13 2020 TCP connection established with [AF_INET]10.250.7.77:7164\nSat Jan 11 16:13:13 2020 10.250.7.77:7164 TCP connection established with [AF_INET]100.64.1.1:48104\nSat Jan 11 16:13:13 2020 10.250.7.77:7164 Connection reset, restarting [0]\nSat Jan 11 16:13:13 2020 100.64.1.1:48104 Connection reset, restarting [0]\nSat Jan 11 16:13:19 2020 TCP connection established with [AF_INET]10.250.7.77:23306\nSat Jan 11 16:13:19 2020 10.250.7.77:23306 TCP connection established with [AF_INET]100.64.1.1:52538\nSat Jan 11 16:13:19 2020 10.250.7.77:23306 Connection reset, restarting [0]\nSat Jan 11 16:13:19 2020 100.64.1.1:52538 Connection reset, restarting [0]\nSat Jan 11 16:13:23 2020 TCP connection established with [AF_INET]10.250.7.77:7174\nSat Jan 11 16:13:23 2020 10.250.7.77:7174 TCP connection established with [AF_INET]100.64.1.1:48114\nSat Jan 11 16:13:23 2020 10.250.7.77:7174 Connection reset, restarting [0]\nSat Jan 11 16:13:23 2020 100.64.1.1:48114 Connection reset, restarting [0]\nSat Jan 11 16:13:29 2020 TCP connection established with [AF_INET]10.250.7.77:23314\nSat Jan 11 16:13:29 2020 10.250.7.77:23314 TCP connection established with [AF_INET]100.64.1.1:52546\nSat Jan 11 16:13:29 2020 10.250.7.77:23314 Connection reset, restarting [0]\nSat Jan 11 16:13:29 2020 100.64.1.1:52546 Connection reset, restarting [0]\nSat Jan 11 16:13:33 2020 TCP connection established with [AF_INET]10.250.7.77:7186\nSat Jan 11 16:13:33 2020 10.250.7.77:7186 TCP connection established with [AF_INET]100.64.1.1:48126\nSat Jan 11 16:13:33 2020 10.250.7.77:7186 Connection reset, restarting [0]\nSat Jan 11 16:13:33 2020 100.64.1.1:48126 Connection reset, restarting [0]\nSat Jan 11 16:13:39 2020 TCP connection established with [AF_INET]10.250.7.77:23318\nSat Jan 11 16:13:39 2020 10.250.7.77:23318 TCP connection established with [AF_INET]100.64.1.1:52550\nSat Jan 11 16:13:39 2020 10.250.7.77:23318 Connection reset, restarting [0]\nSat Jan 11 16:13:39 2020 100.64.1.1:52550 Connection reset, restarting [0]\nSat Jan 11 16:13:43 2020 TCP connection established with [AF_INET]10.250.7.77:7190\nSat Jan 11 16:13:43 2020 10.250.7.77:7190 TCP connection established with [AF_INET]100.64.1.1:48130\nSat Jan 11 16:13:43 2020 10.250.7.77:7190 Connection reset, restarting [0]\nSat Jan 11 16:13:43 2020 100.64.1.1:48130 Connection reset, restarting [0]\nSat Jan 11 16:13:49 2020 TCP connection established with [AF_INET]10.250.7.77:23332\nSat Jan 11 16:13:49 2020 10.250.7.77:23332 TCP connection established with [AF_INET]100.64.1.1:52564\nSat Jan 11 16:13:49 2020 10.250.7.77:23332 Connection reset, restarting [0]\nSat Jan 11 16:13:49 2020 100.64.1.1:52564 Connection reset, restarting [0]\nSat Jan 11 16:13:53 2020 TCP connection established with [AF_INET]10.250.7.77:7202\nSat Jan 11 16:13:53 2020 10.250.7.77:7202 TCP connection established with [AF_INET]100.64.1.1:48142\nSat Jan 11 16:13:53 2020 10.250.7.77:7202 Connection reset, restarting [0]\nSat Jan 11 16:13:53 2020 100.64.1.1:48142 Connection reset, restarting [0]\nSat Jan 11 16:13:59 2020 TCP connection established with [AF_INET]10.250.7.77:23342\nSat Jan 11 16:13:59 2020 10.250.7.77:23342 TCP connection established with [AF_INET]100.64.1.1:52574\nSat Jan 11 16:13:59 2020 10.250.7.77:23342 Connection reset, restarting [0]\nSat Jan 11 16:13:59 2020 100.64.1.1:52574 Connection reset, restarting [0]\nSat Jan 11 16:14:03 2020 TCP connection 
established with [AF_INET]10.250.7.77:7224\nSat Jan 11 16:14:03 2020 10.250.7.77:7224 TCP connection established with [AF_INET]100.64.1.1:48164\nSat Jan 11 16:14:03 2020 10.250.7.77:7224 Connection reset, restarting [0]\nSat Jan 11 16:14:03 2020 100.64.1.1:48164 Connection reset, restarting [0]\nSat Jan 11 16:14:09 2020 TCP connection established with [AF_INET]10.250.7.77:23362\nSat Jan 11 16:14:09 2020 10.250.7.77:23362 TCP connection established with [AF_INET]100.64.1.1:52594\nSat Jan 11 16:14:09 2020 10.250.7.77:23362 Connection reset, restarting [0]\nSat Jan 11 16:14:09 2020 100.64.1.1:52594 Connection reset, restarting [0]\nSat Jan 11 16:14:13 2020 TCP connection established with [AF_INET]100.64.1.1:48172\nSat Jan 11 16:14:13 2020 100.64.1.1:48172 Connection reset, restarting [0]\nSat Jan 11 16:14:13 2020 TCP connection established with [AF_INET]10.250.7.77:7232\nSat Jan 11 16:14:13 2020 10.250.7.77:7232 Connection reset, restarting [0]\nSat Jan 11 16:14:19 2020 TCP connection established with [AF_INET]10.250.7.77:23370\nSat Jan 11 16:14:19 2020 10.250.7.77:23370 TCP connection established with [AF_INET]100.64.1.1:52602\nSat Jan 11 16:14:19 2020 10.250.7.77:23370 Connection reset, restarting [0]\nSat Jan 11 16:14:19 2020 100.64.1.1:52602 Connection reset, restarting [0]\nSat Jan 11 16:14:23 2020 TCP connection established with [AF_INET]10.250.7.77:7238\nSat Jan 11 16:14:23 2020 10.250.7.77:7238 TCP connection established with [AF_INET]100.64.1.1:48178\nSat Jan 11 16:14:23 2020 10.250.7.77:7238 Connection reset, restarting [0]\nSat Jan 11 16:14:23 2020 100.64.1.1:48178 Connection reset, restarting [0]\nSat Jan 11 16:14:29 2020 TCP connection established with [AF_INET]10.250.7.77:23384\nSat Jan 11 16:14:29 2020 10.250.7.77:23384 TCP connection established with [AF_INET]100.64.1.1:52616\nSat Jan 11 16:14:29 2020 10.250.7.77:23384 Connection reset, restarting [0]\nSat Jan 11 16:14:29 2020 100.64.1.1:52616 Connection reset, restarting [0]\nSat Jan 11 16:14:33 2020 TCP connection established with [AF_INET]10.250.7.77:7248\nSat Jan 11 16:14:33 2020 10.250.7.77:7248 TCP connection established with [AF_INET]100.64.1.1:48188\nSat Jan 11 16:14:33 2020 10.250.7.77:7248 Connection reset, restarting [0]\nSat Jan 11 16:14:33 2020 100.64.1.1:48188 Connection reset, restarting [0]\nSat Jan 11 16:14:39 2020 TCP connection established with [AF_INET]10.250.7.77:23394\nSat Jan 11 16:14:39 2020 10.250.7.77:23394 TCP connection established with [AF_INET]100.64.1.1:52626\nSat Jan 11 16:14:39 2020 10.250.7.77:23394 Connection reset, restarting [0]\nSat Jan 11 16:14:39 2020 100.64.1.1:52626 Connection reset, restarting [0]\nSat Jan 11 16:14:43 2020 TCP connection established with [AF_INET]10.250.7.77:7260\nSat Jan 11 16:14:43 2020 10.250.7.77:7260 TCP connection established with [AF_INET]100.64.1.1:48200\nSat Jan 11 16:14:43 2020 10.250.7.77:7260 Connection reset, restarting [0]\nSat Jan 11 16:14:43 2020 100.64.1.1:48200 Connection reset, restarting [0]\nSat Jan 11 16:14:49 2020 TCP connection established with [AF_INET]10.250.7.77:23406\nSat Jan 11 16:14:49 2020 10.250.7.77:23406 TCP connection established with [AF_INET]100.64.1.1:52638\nSat Jan 11 16:14:49 2020 10.250.7.77:23406 Connection reset, restarting [0]\nSat Jan 11 16:14:49 2020 100.64.1.1:52638 Connection reset, restarting [0]\nSat Jan 11 16:14:53 2020 TCP connection established with [AF_INET]10.250.7.77:7270\nSat Jan 11 16:14:53 2020 10.250.7.77:7270 TCP connection established with [AF_INET]100.64.1.1:48210\nSat Jan 11 16:14:53 2020 10.250.7.77:7270 
Connection reset, restarting [0]
[... Sat Jan 11 16:14:53 through 16:20:59 2020: repeated OpenVPN log entries of the form "TCP connection established with [AF_INET]10.250.7.77:<port>" / "[AF_INET]100.64.1.1:<port>", each followed within the same second by "Connection reset, restarting [0]", recurring roughly every five seconds; individual timestamps and port numbers elided ...]
Sat Jan 11 16:21:03 2020 TCP connection established with 
[AF_INET]10.250.7.77:7694\nSat Jan 11 16:21:03 2020 10.250.7.77:7694 TCP connection established with [AF_INET]100.64.1.1:48634\nSat Jan 11 16:21:03 2020 10.250.7.77:7694 Connection reset, restarting [0]\nSat Jan 11 16:21:03 2020 100.64.1.1:48634 Connection reset, restarting [0]\nSat Jan 11 16:21:09 2020 TCP connection established with [AF_INET]10.250.7.77:23832\nSat Jan 11 16:21:09 2020 10.250.7.77:23832 TCP connection established with [AF_INET]100.64.1.1:53064\nSat Jan 11 16:21:09 2020 10.250.7.77:23832 Connection reset, restarting [0]\nSat Jan 11 16:21:09 2020 100.64.1.1:53064 Connection reset, restarting [0]\nSat Jan 11 16:21:13 2020 TCP connection established with [AF_INET]10.250.7.77:7704\nSat Jan 11 16:21:13 2020 10.250.7.77:7704 TCP connection established with [AF_INET]100.64.1.1:48644\nSat Jan 11 16:21:13 2020 10.250.7.77:7704 Connection reset, restarting [0]\nSat Jan 11 16:21:13 2020 100.64.1.1:48644 Connection reset, restarting [0]\nSat Jan 11 16:21:19 2020 TCP connection established with [AF_INET]10.250.7.77:23844\nSat Jan 11 16:21:19 2020 10.250.7.77:23844 TCP connection established with [AF_INET]100.64.1.1:53076\nSat Jan 11 16:21:19 2020 10.250.7.77:23844 Connection reset, restarting [0]\nSat Jan 11 16:21:19 2020 100.64.1.1:53076 Connection reset, restarting [0]\nSat Jan 11 16:21:23 2020 TCP connection established with [AF_INET]10.250.7.77:7710\nSat Jan 11 16:21:23 2020 10.250.7.77:7710 TCP connection established with [AF_INET]100.64.1.1:48650\nSat Jan 11 16:21:23 2020 10.250.7.77:7710 Connection reset, restarting [0]\nSat Jan 11 16:21:23 2020 100.64.1.1:48650 Connection reset, restarting [0]\nSat Jan 11 16:21:29 2020 TCP connection established with [AF_INET]10.250.7.77:23854\nSat Jan 11 16:21:29 2020 10.250.7.77:23854 TCP connection established with [AF_INET]100.64.1.1:53086\nSat Jan 11 16:21:29 2020 10.250.7.77:23854 Connection reset, restarting [0]\nSat Jan 11 16:21:29 2020 100.64.1.1:53086 Connection reset, restarting [0]\nSat Jan 11 16:21:33 2020 TCP connection established with [AF_INET]10.250.7.77:7720\nSat Jan 11 16:21:33 2020 10.250.7.77:7720 TCP connection established with [AF_INET]100.64.1.1:48660\nSat Jan 11 16:21:33 2020 10.250.7.77:7720 Connection reset, restarting [0]\nSat Jan 11 16:21:33 2020 100.64.1.1:48660 Connection reset, restarting [0]\nSat Jan 11 16:21:36 2020 vpn-seed/100.64.1.1:52416 Connection reset, restarting [0]\nSat Jan 11 16:21:39 2020 TCP connection established with [AF_INET]10.250.7.77:23858\nSat Jan 11 16:21:39 2020 10.250.7.77:23858 TCP connection established with [AF_INET]100.64.1.1:53090\nSat Jan 11 16:21:39 2020 10.250.7.77:23858 Connection reset, restarting [0]\nSat Jan 11 16:21:39 2020 100.64.1.1:53090 Connection reset, restarting [0]\nSat Jan 11 16:21:43 2020 TCP connection established with [AF_INET]10.250.7.77:7724\nSat Jan 11 16:21:43 2020 10.250.7.77:7724 TCP connection established with [AF_INET]100.64.1.1:48664\nSat Jan 11 16:21:43 2020 10.250.7.77:7724 Connection reset, restarting [0]\nSat Jan 11 16:21:43 2020 100.64.1.1:48664 Connection reset, restarting [0]\nSat Jan 11 16:21:49 2020 TCP connection established with [AF_INET]10.250.7.77:23868\nSat Jan 11 16:21:49 2020 10.250.7.77:23868 TCP connection established with [AF_INET]100.64.1.1:53100\nSat Jan 11 16:21:49 2020 10.250.7.77:23868 Connection reset, restarting [0]\nSat Jan 11 16:21:49 2020 100.64.1.1:53100 Connection reset, restarting [0]\nSat Jan 11 16:21:53 2020 TCP connection established with [AF_INET]10.250.7.77:7734\nSat Jan 11 16:21:53 2020 10.250.7.77:7734 TCP connection 
established with [AF_INET]100.64.1.1:48674\nSat Jan 11 16:21:53 2020 10.250.7.77:7734 Connection reset, restarting [0]\nSat Jan 11 16:21:53 2020 100.64.1.1:48674 Connection reset, restarting [0]\nSat Jan 11 16:21:59 2020 TCP connection established with [AF_INET]10.250.7.77:23874\nSat Jan 11 16:21:59 2020 10.250.7.77:23874 Connection reset, restarting [0]\nSat Jan 11 16:21:59 2020 TCP connection established with [AF_INET]100.64.1.1:48682\nSat Jan 11 16:21:59 2020 TCP connection established with [AF_INET]10.250.7.77:23878\nSat Jan 11 16:21:59 2020 10.250.7.77:23878 TCP connection established with [AF_INET]100.64.1.1:53110\nSat Jan 11 16:21:59 2020 10.250.7.77:23878 Connection reset, restarting [0]\nSat Jan 11 16:21:59 2020 100.64.1.1:53110 Connection reset, restarting [0]\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_VER=2.4.6\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_PLAT=linux\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_PROTO=2\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_NCP=2\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_LZ4=1\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_LZ4v2=1\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_LZO=1\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_COMP_STUB=1\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_COMP_STUBv2=1\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 peer info: IV_TCPNL=1\nSat Jan 11 16:22:00 2020 100.64.1.1:48682 [vpn-seed] Peer Connection Initiated with [AF_INET]100.64.1.1:48682\nSat Jan 11 16:22:00 2020 vpn-seed/100.64.1.1:48682 MULTI_sva: pool returned IPv4=192.168.123.6, IPv6=(Not enabled)\nSat Jan 11 16:22:03 2020 TCP connection established with [AF_INET]10.250.7.77:7750\nSat Jan 11 16:22:03 2020 10.250.7.77:7750 TCP connection established with [AF_INET]100.64.1.1:48690\nSat Jan 11 16:22:03 2020 10.250.7.77:7750 Connection reset, restarting [0]\nSat Jan 11 16:22:03 2020 100.64.1.1:48690 Connection reset, restarting [0]\nSat Jan 11 16:22:09 2020 TCP connection established with [AF_INET]10.250.7.77:23892\nSat Jan 11 16:22:09 2020 10.250.7.77:23892 TCP connection established with [AF_INET]100.64.1.1:53124\nSat Jan 11 16:22:09 2020 10.250.7.77:23892 Connection reset, restarting [0]\nSat Jan 11 16:22:09 2020 100.64.1.1:53124 Connection reset, restarting [0]\nSat Jan 11 16:22:13 2020 TCP connection established with [AF_INET]10.250.7.77:7764\nSat Jan 11 16:22:13 2020 10.250.7.77:7764 TCP connection established with [AF_INET]100.64.1.1:48704\nSat Jan 11 16:22:13 2020 10.250.7.77:7764 Connection reset, restarting [0]\nSat Jan 11 16:22:13 2020 100.64.1.1:48704 Connection reset, restarting [0]\nSat Jan 11 16:22:19 2020 TCP connection established with [AF_INET]10.250.7.77:23902\nSat Jan 11 16:22:19 2020 10.250.7.77:23902 TCP connection established with [AF_INET]100.64.1.1:53134\nSat Jan 11 16:22:19 2020 10.250.7.77:23902 Connection reset, restarting [0]\nSat Jan 11 16:22:19 2020 100.64.1.1:53134 Connection reset, restarting [0]\nSat Jan 11 16:22:23 2020 TCP connection established with [AF_INET]10.250.7.77:7770\nSat Jan 11 16:22:23 2020 10.250.7.77:7770 TCP connection established with [AF_INET]100.64.1.1:48710\nSat Jan 11 16:22:23 2020 10.250.7.77:7770 Connection reset, restarting [0]\nSat Jan 11 16:22:23 2020 100.64.1.1:48710 Connection reset, restarting [0]\nSat Jan 11 16:22:29 2020 TCP connection established with [AF_INET]10.250.7.77:23914\nSat Jan 11 16:22:29 2020 10.250.7.77:23914 TCP connection established with [AF_INET]100.64.1.1:53146\nSat Jan 
11 16:22:29 2020 10.250.7.77:23914 Connection reset, restarting [0]
[... Sat Jan 11 16:22:29 through 16:30:19 2020: the same pattern of TCP connections from [AF_INET]10.250.7.77 and [AF_INET]100.64.1.1 being established and immediately reset ("Connection reset, restarting [0]") continues every few seconds; individual timestamps and port numbers elided ...]
Sat Jan 11 16:30:23 2020 TCP connection established with 
[AF_INET]10.250.7.77:8302\nSat Jan 11 16:30:23 2020 10.250.7.77:8302 TCP connection established with [AF_INET]100.64.1.1:49242\nSat Jan 11 16:30:23 2020 10.250.7.77:8302 Connection reset, restarting [0]\nSat Jan 11 16:30:23 2020 100.64.1.1:49242 Connection reset, restarting [0]\nSat Jan 11 16:30:29 2020 TCP connection established with [AF_INET]10.250.7.77:24436\nSat Jan 11 16:30:29 2020 10.250.7.77:24436 TCP connection established with [AF_INET]100.64.1.1:53668\nSat Jan 11 16:30:29 2020 10.250.7.77:24436 Connection reset, restarting [0]\nSat Jan 11 16:30:29 2020 100.64.1.1:53668 Connection reset, restarting [0]\nSat Jan 11 16:30:33 2020 TCP connection established with [AF_INET]10.250.7.77:8314\nSat Jan 11 16:30:33 2020 10.250.7.77:8314 TCP connection established with [AF_INET]100.64.1.1:49254\nSat Jan 11 16:30:33 2020 10.250.7.77:8314 Connection reset, restarting [0]\nSat Jan 11 16:30:33 2020 100.64.1.1:49254 Connection reset, restarting [0]\nSat Jan 11 16:30:39 2020 TCP connection established with [AF_INET]10.250.7.77:24450\nSat Jan 11 16:30:39 2020 10.250.7.77:24450 TCP connection established with [AF_INET]100.64.1.1:53682\nSat Jan 11 16:30:39 2020 10.250.7.77:24450 Connection reset, restarting [0]\nSat Jan 11 16:30:39 2020 100.64.1.1:53682 Connection reset, restarting [0]\nSat Jan 11 16:30:43 2020 TCP connection established with [AF_INET]10.250.7.77:8318\nSat Jan 11 16:30:43 2020 10.250.7.77:8318 TCP connection established with [AF_INET]100.64.1.1:49258\nSat Jan 11 16:30:43 2020 10.250.7.77:8318 Connection reset, restarting [0]\nSat Jan 11 16:30:43 2020 100.64.1.1:49258 Connection reset, restarting [0]\nSat Jan 11 16:30:49 2020 TCP connection established with [AF_INET]10.250.7.77:24462\nSat Jan 11 16:30:49 2020 10.250.7.77:24462 TCP connection established with [AF_INET]100.64.1.1:53694\nSat Jan 11 16:30:49 2020 10.250.7.77:24462 Connection reset, restarting [0]\nSat Jan 11 16:30:49 2020 100.64.1.1:53694 Connection reset, restarting [0]\nSat Jan 11 16:30:53 2020 TCP connection established with [AF_INET]10.250.7.77:8332\nSat Jan 11 16:30:53 2020 10.250.7.77:8332 TCP connection established with [AF_INET]100.64.1.1:49272\nSat Jan 11 16:30:53 2020 10.250.7.77:8332 Connection reset, restarting [0]\nSat Jan 11 16:30:53 2020 100.64.1.1:49272 Connection reset, restarting [0]\nSat Jan 11 16:30:59 2020 TCP connection established with [AF_INET]10.250.7.77:24470\nSat Jan 11 16:30:59 2020 10.250.7.77:24470 TCP connection established with [AF_INET]100.64.1.1:53702\nSat Jan 11 16:30:59 2020 10.250.7.77:24470 Connection reset, restarting [0]\nSat Jan 11 16:30:59 2020 100.64.1.1:53702 Connection reset, restarting [0]\nSat Jan 11 16:31:03 2020 TCP connection established with [AF_INET]10.250.7.77:8346\nSat Jan 11 16:31:03 2020 10.250.7.77:8346 TCP connection established with [AF_INET]100.64.1.1:49286\nSat Jan 11 16:31:03 2020 10.250.7.77:8346 Connection reset, restarting [0]\nSat Jan 11 16:31:03 2020 100.64.1.1:49286 Connection reset, restarting [0]\nSat Jan 11 16:31:09 2020 TCP connection established with [AF_INET]10.250.7.77:24484\nSat Jan 11 16:31:09 2020 10.250.7.77:24484 TCP connection established with [AF_INET]100.64.1.1:53716\nSat Jan 11 16:31:09 2020 10.250.7.77:24484 Connection reset, restarting [0]\nSat Jan 11 16:31:09 2020 100.64.1.1:53716 Connection reset, restarting [0]\nSat Jan 11 16:31:13 2020 TCP connection established with [AF_INET]10.250.7.77:8354\nSat Jan 11 16:31:13 2020 10.250.7.77:8354 TCP connection established with [AF_INET]100.64.1.1:49294\nSat Jan 11 16:31:13 2020 10.250.7.77:8354 
Connection reset, restarting [0]\nSat Jan 11 16:31:13 2020 100.64.1.1:49294 Connection reset, restarting [0]\nSat Jan 11 16:31:19 2020 TCP connection established with [AF_INET]10.250.7.77:24496\nSat Jan 11 16:31:19 2020 10.250.7.77:24496 TCP connection established with [AF_INET]100.64.1.1:53728\nSat Jan 11 16:31:19 2020 10.250.7.77:24496 Connection reset, restarting [0]\nSat Jan 11 16:31:19 2020 100.64.1.1:53728 Connection reset, restarting [0]\nSat Jan 11 16:31:23 2020 TCP connection established with [AF_INET]10.250.7.77:8360\nSat Jan 11 16:31:23 2020 10.250.7.77:8360 TCP connection established with [AF_INET]100.64.1.1:49300\nSat Jan 11 16:31:23 2020 10.250.7.77:8360 Connection reset, restarting [0]\nSat Jan 11 16:31:23 2020 100.64.1.1:49300 Connection reset, restarting [0]\nSat Jan 11 16:31:29 2020 TCP connection established with [AF_INET]10.250.7.77:24504\nSat Jan 11 16:31:29 2020 10.250.7.77:24504 TCP connection established with [AF_INET]100.64.1.1:53736\nSat Jan 11 16:31:29 2020 10.250.7.77:24504 Connection reset, restarting [0]\nSat Jan 11 16:31:29 2020 100.64.1.1:53736 Connection reset, restarting [0]\nSat Jan 11 16:31:33 2020 TCP connection established with [AF_INET]10.250.7.77:8372\nSat Jan 11 16:31:33 2020 10.250.7.77:8372 TCP connection established with [AF_INET]100.64.1.1:49312\nSat Jan 11 16:31:33 2020 10.250.7.77:8372 Connection reset, restarting [0]\nSat Jan 11 16:31:33 2020 100.64.1.1:49312 Connection reset, restarting [0]\nSat Jan 11 16:31:39 2020 TCP connection established with [AF_INET]10.250.7.77:24510\nSat Jan 11 16:31:39 2020 10.250.7.77:24510 TCP connection established with [AF_INET]100.64.1.1:53742\nSat Jan 11 16:31:39 2020 10.250.7.77:24510 Connection reset, restarting [0]\nSat Jan 11 16:31:39 2020 100.64.1.1:53742 Connection reset, restarting [0]\nSat Jan 11 16:31:43 2020 TCP connection established with [AF_INET]10.250.7.77:8376\nSat Jan 11 16:31:43 2020 10.250.7.77:8376 TCP connection established with [AF_INET]100.64.1.1:49316\nSat Jan 11 16:31:43 2020 10.250.7.77:8376 Connection reset, restarting [0]\nSat Jan 11 16:31:43 2020 100.64.1.1:49316 Connection reset, restarting [0]\nSat Jan 11 16:31:49 2020 TCP connection established with [AF_INET]10.250.7.77:24520\nSat Jan 11 16:31:49 2020 10.250.7.77:24520 Connection reset, restarting [0]\nSat Jan 11 16:31:49 2020 TCP connection established with [AF_INET]100.64.1.1:53752\nSat Jan 11 16:31:49 2020 100.64.1.1:53752 Connection reset, restarting [0]\nSat Jan 11 16:31:53 2020 TCP connection established with [AF_INET]10.250.7.77:8386\nSat Jan 11 16:31:53 2020 10.250.7.77:8386 TCP connection established with [AF_INET]100.64.1.1:49326\nSat Jan 11 16:31:53 2020 10.250.7.77:8386 Connection reset, restarting [0]\nSat Jan 11 16:31:53 2020 100.64.1.1:49326 Connection reset, restarting [0]\nSat Jan 11 16:31:59 2020 TCP connection established with [AF_INET]10.250.7.77:24528\nSat Jan 11 16:31:59 2020 10.250.7.77:24528 TCP connection established with [AF_INET]100.64.1.1:53760\nSat Jan 11 16:31:59 2020 10.250.7.77:24528 Connection reset, restarting [0]\nSat Jan 11 16:31:59 2020 100.64.1.1:53760 Connection reset, restarting [0]\nSat Jan 11 16:32:03 2020 TCP connection established with [AF_INET]10.250.7.77:8400\nSat Jan 11 16:32:03 2020 10.250.7.77:8400 TCP connection established with [AF_INET]100.64.1.1:49340\nSat Jan 11 16:32:03 2020 10.250.7.77:8400 Connection reset, restarting [0]\nSat Jan 11 16:32:03 2020 100.64.1.1:49340 Connection reset, restarting [0]\nSat Jan 11 16:32:09 2020 TCP connection established with 
[AF_INET]10.250.7.77:24542\nSat Jan 11 16:32:09 2020 10.250.7.77:24542 TCP connection established with [AF_INET]100.64.1.1:53774\nSat Jan 11 16:32:09 2020 10.250.7.77:24542 Connection reset, restarting [0]\nSat Jan 11 16:32:09 2020 100.64.1.1:53774 Connection reset, restarting [0]\nSat Jan 11 16:32:13 2020 TCP connection established with [AF_INET]10.250.7.77:8412\nSat Jan 11 16:32:13 2020 10.250.7.77:8412 TCP connection established with [AF_INET]100.64.1.1:49352\nSat Jan 11 16:32:13 2020 10.250.7.77:8412 Connection reset, restarting [0]\nSat Jan 11 16:32:13 2020 100.64.1.1:49352 Connection reset, restarting [0]\nSat Jan 11 16:32:19 2020 TCP connection established with [AF_INET]10.250.7.77:24550\nSat Jan 11 16:32:19 2020 10.250.7.77:24550 TCP connection established with [AF_INET]100.64.1.1:53782\nSat Jan 11 16:32:19 2020 10.250.7.77:24550 Connection reset, restarting [0]\nSat Jan 11 16:32:19 2020 100.64.1.1:53782 Connection reset, restarting [0]\nSat Jan 11 16:32:23 2020 TCP connection established with [AF_INET]10.250.7.77:8418\nSat Jan 11 16:32:23 2020 10.250.7.77:8418 TCP connection established with [AF_INET]100.64.1.1:49358\nSat Jan 11 16:32:23 2020 10.250.7.77:8418 Connection reset, restarting [0]\nSat Jan 11 16:32:23 2020 100.64.1.1:49358 Connection reset, restarting [0]\nSat Jan 11 16:32:29 2020 TCP connection established with [AF_INET]10.250.7.77:24562\nSat Jan 11 16:32:29 2020 10.250.7.77:24562 TCP connection established with [AF_INET]100.64.1.1:53794\nSat Jan 11 16:32:29 2020 10.250.7.77:24562 Connection reset, restarting [0]\nSat Jan 11 16:32:29 2020 100.64.1.1:53794 Connection reset, restarting [0]\nSat Jan 11 16:32:33 2020 TCP connection established with [AF_INET]10.250.7.77:8430\nSat Jan 11 16:32:33 2020 10.250.7.77:8430 TCP connection established with [AF_INET]100.64.1.1:49370\nSat Jan 11 16:32:33 2020 10.250.7.77:8430 Connection reset, restarting [0]\nSat Jan 11 16:32:33 2020 100.64.1.1:49370 Connection reset, restarting [0]\nSat Jan 11 16:32:39 2020 TCP connection established with [AF_INET]10.250.7.77:24568\nSat Jan 11 16:32:39 2020 10.250.7.77:24568 TCP connection established with [AF_INET]100.64.1.1:53800\nSat Jan 11 16:32:39 2020 10.250.7.77:24568 Connection reset, restarting [0]\nSat Jan 11 16:32:39 2020 100.64.1.1:53800 Connection reset, restarting [0]\nSat Jan 11 16:32:43 2020 TCP connection established with [AF_INET]10.250.7.77:8434\nSat Jan 11 16:32:43 2020 10.250.7.77:8434 Connection reset, restarting [0]\nSat Jan 11 16:32:43 2020 TCP connection established with [AF_INET]100.64.1.1:49374\nSat Jan 11 16:32:43 2020 100.64.1.1:49374 Connection reset, restarting [0]\nSat Jan 11 16:32:49 2020 TCP connection established with [AF_INET]10.250.7.77:24578\nSat Jan 11 16:32:49 2020 10.250.7.77:24578 TCP connection established with [AF_INET]100.64.1.1:53810\nSat Jan 11 16:32:49 2020 10.250.7.77:24578 Connection reset, restarting [0]\nSat Jan 11 16:32:49 2020 100.64.1.1:53810 Connection reset, restarting [0]\nSat Jan 11 16:32:53 2020 TCP connection established with [AF_INET]10.250.7.77:8444\nSat Jan 11 16:32:53 2020 10.250.7.77:8444 TCP connection established with [AF_INET]100.64.1.1:49384\nSat Jan 11 16:32:53 2020 10.250.7.77:8444 Connection reset, restarting [0]\nSat Jan 11 16:32:53 2020 100.64.1.1:49384 Connection reset, restarting [0]\nSat Jan 11 16:32:59 2020 TCP connection established with [AF_INET]10.250.7.77:24586\nSat Jan 11 16:32:59 2020 10.250.7.77:24586 TCP connection established with [AF_INET]100.64.1.1:53818\nSat Jan 11 16:32:59 2020 10.250.7.77:24586 Connection 
reset, restarting [0]\nSat Jan 11 16:32:59 2020 100.64.1.1:53818 Connection reset, restarting [0]\nSat Jan 11 16:33:03 2020 TCP connection established with [AF_INET]10.250.7.77:8460\nSat Jan 11 16:33:03 2020 10.250.7.77:8460 TCP connection established with [AF_INET]100.64.1.1:49400\nSat Jan 11 16:33:03 2020 10.250.7.77:8460 Connection reset, restarting [0]\nSat Jan 11 16:33:03 2020 100.64.1.1:49400 Connection reset, restarting [0]\nSat Jan 11 16:33:09 2020 TCP connection established with [AF_INET]100.64.1.1:53832\nSat Jan 11 16:33:09 2020 100.64.1.1:53832 TCP connection established with [AF_INET]10.250.7.77:24600\nSat Jan 11 16:33:09 2020 100.64.1.1:53832 Connection reset, restarting [0]\nSat Jan 11 16:33:09 2020 10.250.7.77:24600 Connection reset, restarting [0]\nSat Jan 11 16:33:13 2020 TCP connection established with [AF_INET]10.250.7.77:8468\nSat Jan 11 16:33:13 2020 10.250.7.77:8468 TCP connection established with [AF_INET]100.64.1.1:49408\nSat Jan 11 16:33:13 2020 10.250.7.77:8468 Connection reset, restarting [0]\nSat Jan 11 16:33:13 2020 100.64.1.1:49408 Connection reset, restarting [0]\nSat Jan 11 16:33:19 2020 TCP connection established with [AF_INET]10.250.7.77:24608\nSat Jan 11 16:33:19 2020 10.250.7.77:24608 TCP connection established with [AF_INET]100.64.1.1:53840\nSat Jan 11 16:33:19 2020 10.250.7.77:24608 Connection reset, restarting [0]\nSat Jan 11 16:33:19 2020 100.64.1.1:53840 Connection reset, restarting [0]\nSat Jan 11 16:33:23 2020 TCP connection established with [AF_INET]10.250.7.77:8480\nSat Jan 11 16:33:23 2020 10.250.7.77:8480 TCP connection established with [AF_INET]100.64.1.1:49420\nSat Jan 11 16:33:23 2020 10.250.7.77:8480 Connection reset, restarting [0]\nSat Jan 11 16:33:23 2020 100.64.1.1:49420 Connection reset, restarting [0]\nSat Jan 11 16:33:29 2020 TCP connection established with [AF_INET]10.250.7.77:24618\nSat Jan 11 16:33:29 2020 10.250.7.77:24618 TCP connection established with [AF_INET]100.64.1.1:53850\nSat Jan 11 16:33:29 2020 10.250.7.77:24618 Connection reset, restarting [0]\nSat Jan 11 16:33:29 2020 100.64.1.1:53850 Connection reset, restarting [0]\nSat Jan 11 16:33:33 2020 TCP connection established with [AF_INET]10.250.7.77:8492\nSat Jan 11 16:33:33 2020 10.250.7.77:8492 TCP connection established with [AF_INET]100.64.1.1:49432\nSat Jan 11 16:33:33 2020 10.250.7.77:8492 Connection reset, restarting [0]\nSat Jan 11 16:33:33 2020 100.64.1.1:49432 Connection reset, restarting [0]\nSat Jan 11 16:33:39 2020 TCP connection established with [AF_INET]10.250.7.77:24622\nSat Jan 11 16:33:39 2020 10.250.7.77:24622 TCP connection established with [AF_INET]100.64.1.1:53854\nSat Jan 11 16:33:39 2020 10.250.7.77:24622 Connection reset, restarting [0]\nSat Jan 11 16:33:39 2020 100.64.1.1:53854 Connection reset, restarting [0]\nSat Jan 11 16:33:43 2020 TCP connection established with [AF_INET]10.250.7.77:8496\nSat Jan 11 16:33:43 2020 10.250.7.77:8496 TCP connection established with [AF_INET]100.64.1.1:49436\nSat Jan 11 16:33:43 2020 10.250.7.77:8496 Connection reset, restarting [0]\nSat Jan 11 16:33:43 2020 100.64.1.1:49436 Connection reset, restarting [0]\nSat Jan 11 16:33:49 2020 TCP connection established with [AF_INET]10.250.7.77:24636\nSat Jan 11 16:33:49 2020 10.250.7.77:24636 TCP connection established with [AF_INET]100.64.1.1:53868\nSat Jan 11 16:33:49 2020 10.250.7.77:24636 Connection reset, restarting [0]\nSat Jan 11 16:33:49 2020 100.64.1.1:53868 Connection reset, restarting [0]\nSat Jan 11 16:33:53 2020 TCP connection established with 
[AF_INET]10.250.7.77:8506\nSat Jan 11 16:33:53 2020 10.250.7.77:8506 TCP connection established with [AF_INET]100.64.1.1:49446\nSat Jan 11 16:33:53 2020 10.250.7.77:8506 Connection reset, restarting [0]\nSat Jan 11 16:33:53 2020 100.64.1.1:49446 Connection reset, restarting [0]\nSat Jan 11 16:33:59 2020 TCP connection established with [AF_INET]10.250.7.77:24644\nSat Jan 11 16:33:59 2020 10.250.7.77:24644 TCP connection established with [AF_INET]100.64.1.1:53876\nSat Jan 11 16:33:59 2020 10.250.7.77:24644 Connection reset, restarting [0]\nSat Jan 11 16:33:59 2020 100.64.1.1:53876 Connection reset, restarting [0]\nSat Jan 11 16:34:03 2020 TCP connection established with [AF_INET]10.250.7.77:8522\nSat Jan 11 16:34:03 2020 10.250.7.77:8522 TCP connection established with [AF_INET]100.64.1.1:49462\nSat Jan 11 16:34:03 2020 10.250.7.77:8522 Connection reset, restarting [0]\nSat Jan 11 16:34:03 2020 100.64.1.1:49462 Connection reset, restarting [0]\nSat Jan 11 16:34:09 2020 TCP connection established with [AF_INET]10.250.7.77:24658\nSat Jan 11 16:34:09 2020 10.250.7.77:24658 Connection reset, restarting [0]\nSat Jan 11 16:34:09 2020 TCP connection established with [AF_INET]100.64.1.1:53890\nSat Jan 11 16:34:09 2020 100.64.1.1:53890 Connection reset, restarting [0]\nSat Jan 11 16:34:13 2020 TCP connection established with [AF_INET]10.250.7.77:8530\nSat Jan 11 16:34:13 2020 10.250.7.77:8530 TCP connection established with [AF_INET]100.64.1.1:49470\nSat Jan 11 16:34:13 2020 10.250.7.77:8530 Connection reset, restarting [0]\nSat Jan 11 16:34:13 2020 100.64.1.1:49470 Connection reset, restarting [0]\nSat Jan 11 16:34:19 2020 TCP connection established with [AF_INET]10.250.7.77:24666\nSat Jan 11 16:34:19 2020 10.250.7.77:24666 TCP connection established with [AF_INET]100.64.1.1:53898\nSat Jan 11 16:34:19 2020 10.250.7.77:24666 Connection reset, restarting [0]\nSat Jan 11 16:34:19 2020 100.64.1.1:53898 Connection reset, restarting [0]\nSat Jan 11 16:34:23 2020 TCP connection established with [AF_INET]10.250.7.77:8538\nSat Jan 11 16:34:23 2020 10.250.7.77:8538 TCP connection established with [AF_INET]100.64.1.1:49478\nSat Jan 11 16:34:23 2020 10.250.7.77:8538 Connection reset, restarting [0]\nSat Jan 11 16:34:23 2020 100.64.1.1:49478 Connection reset, restarting [0]\nSat Jan 11 16:34:29 2020 TCP connection established with [AF_INET]10.250.7.77:24676\nSat Jan 11 16:34:29 2020 10.250.7.77:24676 TCP connection established with [AF_INET]100.64.1.1:53908\nSat Jan 11 16:34:29 2020 10.250.7.77:24676 Connection reset, restarting [0]\nSat Jan 11 16:34:29 2020 100.64.1.1:53908 Connection reset, restarting [0]\nSat Jan 11 16:34:33 2020 TCP connection established with [AF_INET]10.250.7.77:8548\nSat Jan 11 16:34:33 2020 10.250.7.77:8548 TCP connection established with [AF_INET]100.64.1.1:49488\nSat Jan 11 16:34:33 2020 10.250.7.77:8548 Connection reset, restarting [0]\nSat Jan 11 16:34:33 2020 100.64.1.1:49488 Connection reset, restarting [0]\nSat Jan 11 16:34:39 2020 TCP connection established with [AF_INET]10.250.7.77:24680\nSat Jan 11 16:34:39 2020 10.250.7.77:24680 TCP connection established with [AF_INET]100.64.1.1:53912\nSat Jan 11 16:34:39 2020 10.250.7.77:24680 Connection reset, restarting [0]\nSat Jan 11 16:34:39 2020 100.64.1.1:53912 Connection reset, restarting [0]\nSat Jan 11 16:34:43 2020 TCP connection established with [AF_INET]10.250.7.77:8558\nSat Jan 11 16:34:43 2020 10.250.7.77:8558 TCP connection established with [AF_INET]100.64.1.1:49498\nSat Jan 11 16:34:43 2020 10.250.7.77:8558 Connection reset, 
restarting [0]\nSat Jan 11 16:34:43 2020 100.64.1.1:49498 Connection reset, restarting [0]\nSat Jan 11 16:34:49 2020 TCP connection established with [AF_INET]10.250.7.77:24690\nSat Jan 11 16:34:49 2020 10.250.7.77:24690 TCP connection established with [AF_INET]100.64.1.1:53922\nSat Jan 11 16:34:49 2020 10.250.7.77:24690 Connection reset, restarting [0]\nSat Jan 11 16:34:49 2020 100.64.1.1:53922 Connection reset, restarting [0]\nSat Jan 11 16:34:53 2020 TCP connection established with [AF_INET]10.250.7.77:8568\nSat Jan 11 16:34:53 2020 10.250.7.77:8568 TCP connection established with [AF_INET]100.64.1.1:49508\nSat Jan 11 16:34:53 2020 10.250.7.77:8568 Connection reset, restarting [0]\nSat Jan 11 16:34:53 2020 100.64.1.1:49508 Connection reset, restarting [0]\nSat Jan 11 16:34:59 2020 TCP connection established with [AF_INET]10.250.7.77:24702\nSat Jan 11 16:34:59 2020 10.250.7.77:24702 TCP connection established with [AF_INET]100.64.1.1:53934\nSat Jan 11 16:34:59 2020 10.250.7.77:24702 Connection reset, restarting [0]\nSat Jan 11 16:34:59 2020 100.64.1.1:53934 Connection reset, restarting [0]\nSat Jan 11 16:35:03 2020 TCP connection established with [AF_INET]10.250.7.77:8582\nSat Jan 11 16:35:03 2020 10.250.7.77:8582 TCP connection established with [AF_INET]100.64.1.1:49522\nSat Jan 11 16:35:03 2020 10.250.7.77:8582 Connection reset, restarting [0]\nSat Jan 11 16:35:03 2020 100.64.1.1:49522 Connection reset, restarting [0]\nSat Jan 11 16:35:09 2020 TCP connection established with [AF_INET]10.250.7.77:24716\nSat Jan 11 16:35:09 2020 10.250.7.77:24716 TCP connection established with [AF_INET]100.64.1.1:53948\nSat Jan 11 16:35:09 2020 10.250.7.77:24716 Connection reset, restarting [0]\nSat Jan 11 16:35:09 2020 100.64.1.1:53948 Connection reset, restarting [0]\nSat Jan 11 16:35:13 2020 TCP connection established with [AF_INET]10.250.7.77:8592\nSat Jan 11 16:35:13 2020 10.250.7.77:8592 TCP connection established with [AF_INET]100.64.1.1:49532\nSat Jan 11 16:35:13 2020 10.250.7.77:8592 Connection reset, restarting [0]\nSat Jan 11 16:35:13 2020 100.64.1.1:49532 Connection reset, restarting [0]\nSat Jan 11 16:35:19 2020 TCP connection established with [AF_INET]10.250.7.77:24724\nSat Jan 11 16:35:19 2020 10.250.7.77:24724 TCP connection established with [AF_INET]100.64.1.1:53956\nSat Jan 11 16:35:19 2020 10.250.7.77:24724 Connection reset, restarting [0]\nSat Jan 11 16:35:19 2020 100.64.1.1:53956 Connection reset, restarting [0]\nSat Jan 11 16:35:23 2020 TCP connection established with [AF_INET]10.250.7.77:8598\nSat Jan 11 16:35:23 2020 10.250.7.77:8598 TCP connection established with [AF_INET]100.64.1.1:49538\nSat Jan 11 16:35:23 2020 10.250.7.77:8598 Connection reset, restarting [0]\nSat Jan 11 16:35:23 2020 100.64.1.1:49538 Connection reset, restarting [0]\nSat Jan 11 16:35:29 2020 TCP connection established with [AF_INET]10.250.7.77:24734\nSat Jan 11 16:35:29 2020 10.250.7.77:24734 TCP connection established with [AF_INET]100.64.1.1:53966\nSat Jan 11 16:35:29 2020 10.250.7.77:24734 Connection reset, restarting [0]\nSat Jan 11 16:35:29 2020 100.64.1.1:53966 Connection reset, restarting [0]\nSat Jan 11 16:35:33 2020 TCP connection established with [AF_INET]10.250.7.77:8608\nSat Jan 11 16:35:33 2020 10.250.7.77:8608 Connection reset, restarting [0]\nSat Jan 11 16:35:33 2020 TCP connection established with [AF_INET]100.64.1.1:49548\nSat Jan 11 16:35:33 2020 100.64.1.1:49548 Connection reset, restarting [0]\nSat Jan 11 16:35:39 2020 TCP connection established with [AF_INET]10.250.7.77:24738\nSat Jan 
11 16:35:39 2020 10.250.7.77:24738 TCP connection established with [AF_INET]100.64.1.1:53970\nSat Jan 11 16:35:39 2020 10.250.7.77:24738 Connection reset, restarting [0]\nSat Jan 11 16:35:39 2020 100.64.1.1:53970 Connection reset, restarting [0]\nSat Jan 11 16:35:43 2020 TCP connection established with [AF_INET]10.250.7.77:8612\nSat Jan 11 16:35:43 2020 10.250.7.77:8612 TCP connection established with [AF_INET]100.64.1.1:49552\nSat Jan 11 16:35:43 2020 10.250.7.77:8612 Connection reset, restarting [0]\nSat Jan 11 16:35:43 2020 100.64.1.1:49552 Connection reset, restarting [0]\nSat Jan 11 16:35:49 2020 TCP connection established with [AF_INET]10.250.7.77:24748\nSat Jan 11 16:35:49 2020 10.250.7.77:24748 TCP connection established with [AF_INET]100.64.1.1:53980\nSat Jan 11 16:35:49 2020 10.250.7.77:24748 Connection reset, restarting [0]\nSat Jan 11 16:35:49 2020 100.64.1.1:53980 Connection reset, restarting [0]\nSat Jan 11 16:35:53 2020 TCP connection established with [AF_INET]10.250.7.77:8626\nSat Jan 11 16:35:53 2020 10.250.7.77:8626 TCP connection established with [AF_INET]100.64.1.1:49566\nSat Jan 11 16:35:53 2020 10.250.7.77:8626 Connection reset, restarting [0]\nSat Jan 11 16:35:53 2020 100.64.1.1:49566 Connection reset, restarting [0]\nSat Jan 11 16:35:59 2020 TCP connection established with [AF_INET]10.250.7.77:24758\nSat Jan 11 16:35:59 2020 10.250.7.77:24758 TCP connection established with [AF_INET]100.64.1.1:53990\nSat Jan 11 16:35:59 2020 10.250.7.77:24758 Connection reset, restarting [0]\nSat Jan 11 16:35:59 2020 100.64.1.1:53990 Connection reset, restarting [0]\nSat Jan 11 16:36:03 2020 TCP connection established with [AF_INET]10.250.7.77:8674\nSat Jan 11 16:36:03 2020 10.250.7.77:8674 TCP connection established with [AF_INET]100.64.1.1:49614\nSat Jan 11 16:36:03 2020 10.250.7.77:8674 Connection reset, restarting [0]\nSat Jan 11 16:36:03 2020 100.64.1.1:49614 Connection reset, restarting [0]\nSat Jan 11 16:36:09 2020 TCP connection established with [AF_INET]10.250.7.77:24772\nSat Jan 11 16:36:09 2020 10.250.7.77:24772 TCP connection established with [AF_INET]100.64.1.1:54004\nSat Jan 11 16:36:09 2020 10.250.7.77:24772 Connection reset, restarting [0]\nSat Jan 11 16:36:09 2020 100.64.1.1:54004 Connection reset, restarting [0]\nSat Jan 11 16:36:13 2020 TCP connection established with [AF_INET]10.250.7.77:8686\nSat Jan 11 16:36:13 2020 10.250.7.77:8686 TCP connection established with [AF_INET]100.64.1.1:49626\nSat Jan 11 16:36:13 2020 10.250.7.77:8686 Connection reset, restarting [0]\nSat Jan 11 16:36:13 2020 100.64.1.1:49626 Connection reset, restarting [0]\nSat Jan 11 16:36:19 2020 TCP connection established with [AF_INET]10.250.7.77:24786\nSat Jan 11 16:36:19 2020 10.250.7.77:24786 Connection reset, restarting [0]\nSat Jan 11 16:36:19 2020 TCP connection established with [AF_INET]100.64.1.1:54018\nSat Jan 11 16:36:19 2020 100.64.1.1:54018 Connection reset, restarting [0]\nSat Jan 11 16:36:23 2020 TCP connection established with [AF_INET]10.250.7.77:8692\nSat Jan 11 16:36:23 2020 10.250.7.77:8692 TCP connection established with [AF_INET]100.64.1.1:49632\nSat Jan 11 16:36:23 2020 10.250.7.77:8692 Connection reset, restarting [0]\nSat Jan 11 16:36:23 2020 100.64.1.1:49632 Connection reset, restarting [0]\nSat Jan 11 16:36:29 2020 TCP connection established with [AF_INET]10.250.7.77:24796\nSat Jan 11 16:36:29 2020 10.250.7.77:24796 TCP connection established with [AF_INET]100.64.1.1:54028\nSat Jan 11 16:36:29 2020 10.250.7.77:24796 Connection reset, restarting [0]\nSat Jan 11 
16:36:29 2020 100.64.1.1:54028 Connection reset, restarting [0]\nSat Jan 11 16:36:33 2020 TCP connection established with [AF_INET]10.250.7.77:8702\nSat Jan 11 16:36:33 2020 10.250.7.77:8702 TCP connection established with [AF_INET]100.64.1.1:49642\nSat Jan 11 16:36:33 2020 10.250.7.77:8702 Connection reset, restarting [0]\nSat Jan 11 16:36:33 2020 100.64.1.1:49642 Connection reset, restarting [0]\nSat Jan 11 16:36:39 2020 TCP connection established with [AF_INET]10.250.7.77:24800\nSat Jan 11 16:36:39 2020 10.250.7.77:24800 TCP connection established with [AF_INET]100.64.1.1:54032\nSat Jan 11 16:36:39 2020 10.250.7.77:24800 Connection reset, restarting [0]\nSat Jan 11 16:36:39 2020 100.64.1.1:54032 Connection reset, restarting [0]\nSat Jan 11 16:36:43 2020 TCP connection established with [AF_INET]10.250.7.77:8706\nSat Jan 11 16:36:43 2020 10.250.7.77:8706 TCP connection established with [AF_INET]100.64.1.1:49646\nSat Jan 11 16:36:43 2020 10.250.7.77:8706 Connection reset, restarting [0]\nSat Jan 11 16:36:43 2020 100.64.1.1:49646 Connection reset, restarting [0]\nSat Jan 11 16:36:49 2020 TCP connection established with [AF_INET]10.250.7.77:24810\nSat Jan 11 16:36:49 2020 10.250.7.77:24810 TCP connection established with [AF_INET]100.64.1.1:54042\nSat Jan 11 16:36:49 2020 10.250.7.77:24810 Connection reset, restarting [0]\nSat Jan 11 16:36:49 2020 100.64.1.1:54042 Connection reset, restarting [0]\nSat Jan 11 16:36:53 2020 TCP connection established with [AF_INET]10.250.7.77:8716\nSat Jan 11 16:36:53 2020 10.250.7.77:8716 TCP connection established with [AF_INET]100.64.1.1:49656\nSat Jan 11 16:36:53 2020 10.250.7.77:8716 Connection reset, restarting [0]\nSat Jan 11 16:36:53 2020 100.64.1.1:49656 Connection reset, restarting [0]\nSat Jan 11 16:36:59 2020 TCP connection established with [AF_INET]10.250.7.77:24820\nSat Jan 11 16:36:59 2020 10.250.7.77:24820 TCP connection established with [AF_INET]100.64.1.1:54052\nSat Jan 11 16:36:59 2020 10.250.7.77:24820 Connection reset, restarting [0]\nSat Jan 11 16:36:59 2020 100.64.1.1:54052 Connection reset, restarting [0]\nSat Jan 11 16:37:03 2020 TCP connection established with [AF_INET]10.250.7.77:8730\nSat Jan 11 16:37:03 2020 10.250.7.77:8730 TCP connection established with [AF_INET]100.64.1.1:49670\nSat Jan 11 16:37:03 2020 10.250.7.77:8730 Connection reset, restarting [0]\nSat Jan 11 16:37:03 2020 100.64.1.1:49670 Connection reset, restarting [0]\nSat Jan 11 16:37:09 2020 TCP connection established with [AF_INET]10.250.7.77:24834\nSat Jan 11 16:37:09 2020 10.250.7.77:24834 TCP connection established with [AF_INET]100.64.1.1:54066\nSat Jan 11 16:37:09 2020 10.250.7.77:24834 Connection reset, restarting [0]\nSat Jan 11 16:37:09 2020 100.64.1.1:54066 Connection reset, restarting [0]\nSat Jan 11 16:37:13 2020 TCP connection established with [AF_INET]10.250.7.77:8744\nSat Jan 11 16:37:13 2020 10.250.7.77:8744 TCP connection established with [AF_INET]100.64.1.1:49684\nSat Jan 11 16:37:13 2020 10.250.7.77:8744 Connection reset, restarting [0]\nSat Jan 11 16:37:13 2020 100.64.1.1:49684 Connection reset, restarting [0]\nSat Jan 11 16:37:19 2020 TCP connection established with [AF_INET]10.250.7.77:24844\nSat Jan 11 16:37:19 2020 10.250.7.77:24844 TCP connection established with [AF_INET]100.64.1.1:54076\nSat Jan 11 16:37:19 2020 10.250.7.77:24844 Connection reset, restarting [0]\nSat Jan 11 16:37:19 2020 100.64.1.1:54076 Connection reset, restarting [0]\nSat Jan 11 16:37:23 2020 TCP connection established with [AF_INET]10.250.7.77:8750\nSat Jan 11 16:37:23 
2020 10.250.7.77:8750 TCP connection established with [AF_INET]100.64.1.1:49690\nSat Jan 11 16:37:23 2020 10.250.7.77:8750 Connection reset, restarting [0]\nSat Jan 11 16:37:23 2020 100.64.1.1:49690 Connection reset, restarting [0]\nSat Jan 11 16:37:29 2020 TCP connection established with [AF_INET]10.250.7.77:24856\nSat Jan 11 16:37:29 2020 10.250.7.77:24856 TCP connection established with [AF_INET]100.64.1.1:54088\nSat Jan 11 16:37:29 2020 10.250.7.77:24856 Connection reset, restarting [0]\nSat Jan 11 16:37:29 2020 100.64.1.1:54088 Connection reset, restarting [0]\nSat Jan 11 16:37:33 2020 TCP connection established with [AF_INET]10.250.7.77:8760\nSat Jan 11 16:37:33 2020 10.250.7.77:8760 TCP connection established with [AF_INET]100.64.1.1:49700\nSat Jan 11 16:37:33 2020 10.250.7.77:8760 Connection reset, restarting [0]\nSat Jan 11 16:37:33 2020 100.64.1.1:49700 Connection reset, restarting [0]\nSat Jan 11 16:37:39 2020 TCP connection established with [AF_INET]10.250.7.77:24862\nSat Jan 11 16:37:39 2020 10.250.7.77:24862 TCP connection established with [AF_INET]100.64.1.1:54094\nSat Jan 11 16:37:39 2020 10.250.7.77:24862 Connection reset, restarting [0]\nSat Jan 11 16:37:39 2020 100.64.1.1:54094 Connection reset, restarting [0]\nSat Jan 11 16:37:43 2020 TCP connection established with [AF_INET]10.250.7.77:8764\nSat Jan 11 16:37:43 2020 10.250.7.77:8764 TCP connection established with [AF_INET]100.64.1.1:49704\nSat Jan 11 16:37:43 2020 10.250.7.77:8764 Connection reset, restarting [0]\nSat Jan 11 16:37:43 2020 100.64.1.1:49704 Connection reset, restarting [0]\nSat Jan 11 16:37:49 2020 TCP connection established with [AF_INET]10.250.7.77:24872\nSat Jan 11 16:37:49 2020 10.250.7.77:24872 TCP connection established with [AF_INET]100.64.1.1:54104\nSat Jan 11 16:37:49 2020 10.250.7.77:24872 Connection reset, restarting [0]\nSat Jan 11 16:37:49 2020 100.64.1.1:54104 Connection reset, restarting [0]\nSat Jan 11 16:37:53 2020 TCP connection established with [AF_INET]10.250.7.77:8774\nSat Jan 11 16:37:53 2020 10.250.7.77:8774 TCP connection established with [AF_INET]100.64.1.1:49714\nSat Jan 11 16:37:53 2020 10.250.7.77:8774 Connection reset, restarting [0]\nSat Jan 11 16:37:53 2020 100.64.1.1:49714 Connection reset, restarting [0]\nSat Jan 11 16:37:59 2020 TCP connection established with [AF_INET]10.250.7.77:24880\nSat Jan 11 16:37:59 2020 10.250.7.77:24880 TCP connection established with [AF_INET]100.64.1.1:54112\nSat Jan 11 16:37:59 2020 10.250.7.77:24880 Connection reset, restarting [0]\nSat Jan 11 16:37:59 2020 100.64.1.1:54112 Connection reset, restarting [0]\nSat Jan 11 16:38:03 2020 TCP connection established with [AF_INET]10.250.7.77:8790\nSat Jan 11 16:38:03 2020 10.250.7.77:8790 TCP connection established with [AF_INET]100.64.1.1:49730\nSat Jan 11 16:38:03 2020 10.250.7.77:8790 Connection reset, restarting [0]\nSat Jan 11 16:38:03 2020 100.64.1.1:49730 Connection reset, restarting [0]\nSat Jan 11 16:38:09 2020 TCP connection established with [AF_INET]10.250.7.77:24896\nSat Jan 11 16:38:09 2020 10.250.7.77:24896 TCP connection established with [AF_INET]100.64.1.1:54128\nSat Jan 11 16:38:09 2020 10.250.7.77:24896 Connection reset, restarting [0]\nSat Jan 11 16:38:09 2020 100.64.1.1:54128 Connection reset, restarting [0]\nSat Jan 11 16:38:13 2020 TCP connection established with [AF_INET]10.250.7.77:8798\nSat Jan 11 16:38:13 2020 10.250.7.77:8798 TCP connection established with [AF_INET]100.64.1.1:49738\nSat Jan 11 16:38:13 2020 10.250.7.77:8798 Connection reset, restarting [0]\nSat Jan 11 
16:38:13 2020 100.64.1.1:49738 Connection reset, restarting [0]\nSat Jan 11 16:38:19 2020 TCP connection established with [AF_INET]10.250.7.77:24904\nSat Jan 11 16:38:19 2020 10.250.7.77:24904 TCP connection established with [AF_INET]100.64.1.1:54136\nSat Jan 11 16:38:19 2020 10.250.7.77:24904 Connection reset, restarting [0]\nSat Jan 11 16:38:19 2020 100.64.1.1:54136 Connection reset, restarting [0]\nSat Jan 11 16:38:23 2020 TCP connection established with [AF_INET]10.250.7.77:8808\nSat Jan 11 16:38:23 2020 10.250.7.77:8808 TCP connection established with [AF_INET]100.64.1.1:49748\nSat Jan 11 16:38:23 2020 10.250.7.77:8808 Connection reset, restarting [0]\nSat Jan 11 16:38:23 2020 100.64.1.1:49748 Connection reset, restarting [0]\nSat Jan 11 16:38:29 2020 TCP connection established with [AF_INET]10.250.7.77:24912\nSat Jan 11 16:38:29 2020 10.250.7.77:24912 TCP connection established with [AF_INET]100.64.1.1:54144\nSat Jan 11 16:38:29 2020 10.250.7.77:24912 Connection reset, restarting [0]\nSat Jan 11 16:38:29 2020 100.64.1.1:54144 Connection reset, restarting [0]\nSat Jan 11 16:38:33 2020 TCP connection established with [AF_INET]10.250.7.77:8818\nSat Jan 11 16:38:33 2020 10.250.7.77:8818 TCP connection established with [AF_INET]100.64.1.1:49758\nSat Jan 11 16:38:33 2020 10.250.7.77:8818 Connection reset, restarting [0]\nSat Jan 11 16:38:33 2020 100.64.1.1:49758 Connection reset, restarting [0]\nSat Jan 11 16:38:39 2020 TCP connection established with [AF_INET]10.250.7.77:24916\nSat Jan 11 16:38:39 2020 10.250.7.77:24916 TCP connection established with [AF_INET]100.64.1.1:54148\nSat Jan 11 16:38:39 2020 10.250.7.77:24916 Connection reset, restarting [0]\nSat Jan 11 16:38:39 2020 100.64.1.1:54148 Connection reset, restarting [0]\nSat Jan 11 16:38:43 2020 TCP connection established with [AF_INET]10.250.7.77:8822\nSat Jan 11 16:38:43 2020 10.250.7.77:8822 TCP connection established with [AF_INET]100.64.1.1:49762\nSat Jan 11 16:38:43 2020 10.250.7.77:8822 Connection reset, restarting [0]\nSat Jan 11 16:38:43 2020 100.64.1.1:49762 Connection reset, restarting [0]\nSat Jan 11 16:38:49 2020 TCP connection established with [AF_INET]10.250.7.77:24930\nSat Jan 11 16:38:49 2020 10.250.7.77:24930 TCP connection established with [AF_INET]100.64.1.1:54162\nSat Jan 11 16:38:49 2020 10.250.7.77:24930 Connection reset, restarting [0]\nSat Jan 11 16:38:49 2020 100.64.1.1:54162 Connection reset, restarting [0]\nSat Jan 11 16:38:53 2020 TCP connection established with [AF_INET]10.250.7.77:8832\nSat Jan 11 16:38:53 2020 10.250.7.77:8832 TCP connection established with [AF_INET]100.64.1.1:49772\nSat Jan 11 16:38:53 2020 10.250.7.77:8832 Connection reset, restarting [0]\nSat Jan 11 16:38:53 2020 100.64.1.1:49772 Connection reset, restarting [0]\nSat Jan 11 16:38:59 2020 TCP connection established with [AF_INET]10.250.7.77:24972\nSat Jan 11 16:38:59 2020 10.250.7.77:24972 TCP connection established with [AF_INET]100.64.1.1:54204\nSat Jan 11 16:38:59 2020 10.250.7.77:24972 Connection reset, restarting [0]\nSat Jan 11 16:38:59 2020 100.64.1.1:54204 Connection reset, restarting [0]\nSat Jan 11 16:39:03 2020 TCP connection established with [AF_INET]10.250.7.77:8848\nSat Jan 11 16:39:03 2020 10.250.7.77:8848 TCP connection established with [AF_INET]100.64.1.1:49788\nSat Jan 11 16:39:03 2020 10.250.7.77:8848 Connection reset, restarting [0]\nSat Jan 11 16:39:03 2020 100.64.1.1:49788 Connection reset, restarting [0]\nSat Jan 11 16:39:09 2020 TCP connection established with [AF_INET]10.250.7.77:24990\nSat Jan 11 16:39:09 
2020 10.250.7.77:24990 TCP connection established with [AF_INET]100.64.1.1:54222\nSat Jan 11 16:39:09 2020 10.250.7.77:24990 Connection reset, restarting [0]\nSat Jan 11 16:39:09 2020 100.64.1.1:54222 Connection reset, restarting [0]\nSat Jan 11 16:39:13 2020 TCP connection established with [AF_INET]10.250.7.77:8856\nSat Jan 11 16:39:13 2020 10.250.7.77:8856 TCP connection established with [AF_INET]100.64.1.1:49796\nSat Jan 11 16:39:13 2020 10.250.7.77:8856 Connection reset, restarting [0]\nSat Jan 11 16:39:13 2020 100.64.1.1:49796 Connection reset, restarting [0]\nSat Jan 11 16:39:19 2020 TCP connection established with [AF_INET]10.250.7.77:24998\nSat Jan 11 16:39:19 2020 10.250.7.77:24998 TCP connection established with [AF_INET]100.64.1.1:54230\nSat Jan 11 16:39:19 2020 10.250.7.77:24998 Connection reset, restarting [0]\nSat Jan 11 16:39:19 2020 100.64.1.1:54230 Connection reset, restarting [0]\nSat Jan 11 16:39:23 2020 TCP connection established with [AF_INET]10.250.7.77:8862\nSat Jan 11 16:39:23 2020 10.250.7.77:8862 TCP connection established with [AF_INET]100.64.1.1:49802\nSat Jan 11 16:39:23 2020 10.250.7.77:8862 Connection reset, restarting [0]\nSat Jan 11 16:39:23 2020 100.64.1.1:49802 Connection reset, restarting [0]\nSat Jan 11 16:39:29 2020 TCP connection established with [AF_INET]10.250.7.77:25006\nSat Jan 11 16:39:29 2020 10.250.7.77:25006 TCP connection established with [AF_INET]100.64.1.1:54238\nSat Jan 11 16:39:29 2020 10.250.7.77:25006 Connection reset, restarting [0]\nSat Jan 11 16:39:29 2020 100.64.1.1:54238 Connection reset, restarting [0]\nSat Jan 11 16:39:33 2020 TCP connection established with [AF_INET]10.250.7.77:8872\nSat Jan 11 16:39:33 2020 10.250.7.77:8872 TCP connection established with [AF_INET]100.64.1.1:49812\nSat Jan 11 16:39:33 2020 10.250.7.77:8872 Connection reset, restarting [0]\nSat Jan 11 16:39:33 2020 100.64.1.1:49812 Connection reset, restarting [0]\nSat Jan 11 16:39:39 2020 TCP connection established with [AF_INET]10.250.7.77:25010\nSat Jan 11 16:39:39 2020 10.250.7.77:25010 TCP connection established with [AF_INET]100.64.1.1:54242\nSat Jan 11 16:39:39 2020 10.250.7.77:25010 Connection reset, restarting [0]\nSat Jan 11 16:39:39 2020 100.64.1.1:54242 Connection reset, restarting [0]\nSat Jan 11 16:39:43 2020 TCP connection established with [AF_INET]10.250.7.77:8880\nSat Jan 11 16:39:43 2020 10.250.7.77:8880 TCP connection established with [AF_INET]100.64.1.1:49820\nSat Jan 11 16:39:43 2020 10.250.7.77:8880 Connection reset, restarting [0]\nSat Jan 11 16:39:43 2020 100.64.1.1:49820 Connection reset, restarting [0]\nSat Jan 11 16:39:49 2020 TCP connection established with [AF_INET]10.250.7.77:25020\nSat Jan 11 16:39:49 2020 10.250.7.77:25020 TCP connection established with [AF_INET]100.64.1.1:54252\nSat Jan 11 16:39:49 2020 10.250.7.77:25020 Connection reset, restarting [0]\nSat Jan 11 16:39:49 2020 100.64.1.1:54252 Connection reset, restarting [0]\nSat Jan 11 16:39:53 2020 TCP connection established with [AF_INET]10.250.7.77:8892\nSat Jan 11 16:39:53 2020 10.250.7.77:8892 TCP connection established with [AF_INET]100.64.1.1:49832\nSat Jan 11 16:39:53 2020 10.250.7.77:8892 Connection reset, restarting [0]\nSat Jan 11 16:39:53 2020 100.64.1.1:49832 Connection reset, restarting [0]\nSat Jan 11 16:39:59 2020 TCP connection established with [AF_INET]10.250.7.77:25032\nSat Jan 11 16:39:59 2020 10.250.7.77:25032 TCP connection established with [AF_INET]100.64.1.1:54264\nSat Jan 11 16:39:59 2020 10.250.7.77:25032 Connection reset, restarting [0]\nSat Jan 11 
16:39:59 2020 100.64.1.1:54264 Connection reset, restarting [0]\nSat Jan 11 16:40:03 2020 TCP connection established with [AF_INET]10.250.7.77:8906\nSat Jan 11 16:40:03 2020 10.250.7.77:8906 TCP connection established with [AF_INET]100.64.1.1:49846\nSat Jan 11 16:40:03 2020 10.250.7.77:8906 Connection reset, restarting [0]\nSat Jan 11 16:40:03 2020 100.64.1.1:49846 Connection reset, restarting [0]\nSat Jan 11 16:40:09 2020 TCP connection established with [AF_INET]10.250.7.77:25048\nSat Jan 11 16:40:09 2020 10.250.7.77:25048 TCP connection established with [AF_INET]100.64.1.1:54280\nSat Jan 11 16:40:09 2020 10.250.7.77:25048 Connection reset, restarting [0]\nSat Jan 11 16:40:09 2020 100.64.1.1:54280 Connection reset, restarting [0]\nSat Jan 11 16:40:13 2020 TCP connection established with [AF_INET]10.250.7.77:8914\nSat Jan 11 16:40:13 2020 10.250.7.77:8914 TCP connection established with [AF_INET]100.64.1.1:49854\nSat Jan 11 16:40:13 2020 10.250.7.77:8914 Connection reset, restarting [0]\nSat Jan 11 16:40:13 2020 100.64.1.1:49854 Connection reset, restarting [0]\nSat Jan 11 16:40:19 2020 TCP connection established with [AF_INET]10.250.7.77:25056\nSat Jan 11 16:40:19 2020 10.250.7.77:25056 TCP connection established with [AF_INET]100.64.1.1:54288\nSat Jan 11 16:40:19 2020 10.250.7.77:25056 Connection reset, restarting [0]\nSat Jan 11 16:40:19 2020 100.64.1.1:54288 Connection reset, restarting [0]\nSat Jan 11 16:40:23 2020 TCP connection established with [AF_INET]10.250.7.77:8920\nSat Jan 11 16:40:23 2020 10.250.7.77:8920 TCP connection established with [AF_INET]100.64.1.1:49860\nSat Jan 11 16:40:23 2020 10.250.7.77:8920 Connection reset, restarting [0]\nSat Jan 11 16:40:23 2020 100.64.1.1:49860 Connection reset, restarting [0]\nSat Jan 11 16:40:29 2020 TCP connection established with [AF_INET]10.250.7.77:25064\nSat Jan 11 16:40:29 2020 10.250.7.77:25064 TCP connection established with [AF_INET]100.64.1.1:54296\nSat Jan 11 16:40:29 2020 10.250.7.77:25064 Connection reset, restarting [0]\nSat Jan 11 16:40:29 2020 100.64.1.1:54296 Connection reset, restarting [0]\nSat Jan 11 16:40:33 2020 TCP connection established with [AF_INET]10.250.7.77:8930\nSat Jan 11 16:40:33 2020 10.250.7.77:8930 TCP connection established with [AF_INET]100.64.1.1:49870\nSat Jan 11 16:40:33 2020 10.250.7.77:8930 Connection reset, restarting [0]\nSat Jan 11 16:40:33 2020 100.64.1.1:49870 Connection reset, restarting [0]\nSat Jan 11 16:40:39 2020 TCP connection established with [AF_INET]10.250.7.77:25068\nSat Jan 11 16:40:39 2020 10.250.7.77:25068 TCP connection established with [AF_INET]100.64.1.1:54300\nSat Jan 11 16:40:39 2020 10.250.7.77:25068 Connection reset, restarting [0]\nSat Jan 11 16:40:39 2020 100.64.1.1:54300 Connection reset, restarting [0]\nSat Jan 11 16:40:43 2020 TCP connection established with [AF_INET]10.250.7.77:8934\nSat Jan 11 16:40:43 2020 10.250.7.77:8934 TCP connection established with [AF_INET]100.64.1.1:49874\nSat Jan 11 16:40:43 2020 10.250.7.77:8934 Connection reset, restarting [0]\nSat Jan 11 16:40:43 2020 100.64.1.1:49874 Connection reset, restarting [0]\nSat Jan 11 16:40:49 2020 TCP connection established with [AF_INET]10.250.7.77:25078\nSat Jan 11 16:40:49 2020 10.250.7.77:25078 TCP connection established with [AF_INET]100.64.1.1:54310\nSat Jan 11 16:40:49 2020 10.250.7.77:25078 Connection reset, restarting [0]\nSat Jan 11 16:40:49 2020 100.64.1.1:54310 Connection reset, restarting [0]\nSat Jan 11 16:40:53 2020 TCP connection established with [AF_INET]10.250.7.77:8950\nSat Jan 11 16:40:53 
2020 10.250.7.77:8950 TCP connection established with [AF_INET]100.64.1.1:49890\nSat Jan 11 16:40:53 2020 10.250.7.77:8950 Connection reset, restarting [0]\nSat Jan 11 16:40:53 2020 100.64.1.1:49890 Connection reset, restarting [0]\nSat Jan 11 16:40:59 2020 TCP connection established with [AF_INET]10.250.7.77:25088\nSat Jan 11 16:40:59 2020 10.250.7.77:25088 TCP connection established with [AF_INET]100.64.1.1:54320\nSat Jan 11 16:40:59 2020 10.250.7.77:25088 Connection reset, restarting [0]\nSat Jan 11 16:40:59 2020 100.64.1.1:54320 Connection reset, restarting [0]\nSat Jan 11 16:41:03 2020 TCP connection established with [AF_INET]10.250.7.77:8964\nSat Jan 11 16:41:03 2020 10.250.7.77:8964 TCP connection established with [AF_INET]100.64.1.1:49904\nSat Jan 11 16:41:03 2020 10.250.7.77:8964 Connection reset, restarting [0]\nSat Jan 11 16:41:03 2020 100.64.1.1:49904 Connection reset, restarting [0]\nSat Jan 11 16:41:09 2020 TCP connection established with [AF_INET]10.250.7.77:25102\nSat Jan 11 16:41:09 2020 10.250.7.77:25102 TCP connection established with [AF_INET]100.64.1.1:54334\nSat Jan 11 16:41:09 2020 10.250.7.77:25102 Connection reset, restarting [0]\nSat Jan 11 16:41:09 2020 100.64.1.1:54334 Connection reset, restarting [0]\nSat Jan 11 16:41:13 2020 TCP connection established with [AF_INET]10.250.7.77:8972\nSat Jan 11 16:41:13 2020 10.250.7.77:8972 TCP connection established with [AF_INET]100.64.1.1:49912\nSat Jan 11 16:41:13 2020 10.250.7.77:8972 Connection reset, restarting [0]\nSat Jan 11 16:41:13 2020 100.64.1.1:49912 Connection reset, restarting [0]\nSat Jan 11 16:41:19 2020 TCP connection established with [AF_INET]10.250.7.77:25114\nSat Jan 11 16:41:19 2020 10.250.7.77:25114 TCP connection established with [AF_INET]100.64.1.1:54346\nSat Jan 11 16:41:19 2020 10.250.7.77:25114 Connection reset, restarting [0]\nSat Jan 11 16:41:19 2020 100.64.1.1:54346 Connection reset, restarting [0]\nSat Jan 11 16:41:23 2020 TCP connection established with [AF_INET]10.250.7.77:8978\nSat Jan 11 16:41:23 2020 10.250.7.77:8978 TCP connection established with [AF_INET]100.64.1.1:49918\nSat Jan 11 16:41:23 2020 10.250.7.77:8978 Connection reset, restarting [0]\nSat Jan 11 16:41:23 2020 100.64.1.1:49918 Connection reset, restarting [0]\nSat Jan 11 16:41:29 2020 TCP connection established with [AF_INET]10.250.7.77:25122\nSat Jan 11 16:41:29 2020 10.250.7.77:25122 TCP connection established with [AF_INET]100.64.1.1:54354\nSat Jan 11 16:41:29 2020 10.250.7.77:25122 Connection reset, restarting [0]\nSat Jan 11 16:41:29 2020 100.64.1.1:54354 Connection reset, restarting [0]\nSat Jan 11 16:41:33 2020 TCP connection established with [AF_INET]10.250.7.77:8988\nSat Jan 11 16:41:33 2020 10.250.7.77:8988 TCP connection established with [AF_INET]100.64.1.1:49928\nSat Jan 11 16:41:33 2020 10.250.7.77:8988 Connection reset, restarting [0]\nSat Jan 11 16:41:33 2020 100.64.1.1:49928 Connection reset, restarting [0]\nSat Jan 11 16:41:39 2020 TCP connection established with [AF_INET]10.250.7.77:25126\nSat Jan 11 16:41:39 2020 10.250.7.77:25126 TCP connection established with [AF_INET]100.64.1.1:54358\nSat Jan 11 16:41:39 2020 10.250.7.77:25126 Connection reset, restarting [0]\nSat Jan 11 16:41:39 2020 100.64.1.1:54358 Connection reset, restarting [0]\nSat Jan 11 16:41:43 2020 TCP connection established with [AF_INET]10.250.7.77:8992\nSat Jan 11 16:41:43 2020 10.250.7.77:8992 TCP connection established with [AF_INET]100.64.1.1:49932\nSat Jan 11 16:41:43 2020 10.250.7.77:8992 Connection reset, restarting [0]\nSat Jan 11 
16:41:43 2020 100.64.1.1:49932 Connection reset, restarting [0]\nSat Jan 11 16:41:49 2020 TCP connection established with [AF_INET]10.250.7.77:25136\nSat Jan 11 16:41:49 2020 10.250.7.77:25136 TCP connection established with [AF_INET]100.64.1.1:54368\nSat Jan 11 16:41:49 2020 10.250.7.77:25136 Connection reset, restarting [0]\nSat Jan 11 16:41:49 2020 100.64.1.1:54368 Connection reset, restarting [0]\nSat Jan 11 16:41:53 2020 TCP connection established with [AF_INET]10.250.7.77:9004\nSat Jan 11 16:41:53 2020 10.250.7.77:9004 TCP connection established with [AF_INET]100.64.1.1:49944\nSat Jan 11 16:41:53 2020 10.250.7.77:9004 Connection reset, restarting [0]\nSat Jan 11 16:41:53 2020 100.64.1.1:49944 Connection reset, restarting [0]\nSat Jan 11 16:41:59 2020 TCP connection established with [AF_INET]10.250.7.77:25146\nSat Jan 11 16:41:59 2020 10.250.7.77:25146 TCP connection established with [AF_INET]100.64.1.1:54378\nSat Jan 11 16:41:59 2020 10.250.7.77:25146 Connection reset, restarting [0]\nSat Jan 11 16:41:59 2020 100.64.1.1:54378 Connection reset, restarting [0]\nSat Jan 11 16:42:03 2020 TCP connection established with [AF_INET]100.64.1.1:49964\nSat Jan 11 16:42:03 2020 100.64.1.1:49964 TCP connection established with [AF_INET]10.250.7.77:9024\nSat Jan 11 16:42:03 2020 100.64.1.1:49964 Connection reset, restarting [0]\nSat Jan 11 16:42:03 2020 10.250.7.77:9024 Connection reset, restarting [0]\nSat Jan 11 16:42:09 2020 TCP connection established with [AF_INET]10.250.7.77:25166\nSat Jan 11 16:42:09 2020 10.250.7.77:25166 TCP connection established with [AF_INET]100.64.1.1:54398\nSat Jan 11 16:42:09 2020 10.250.7.77:25166 Connection reset, restarting [0]\nSat Jan 11 16:42:09 2020 100.64.1.1:54398 Connection reset, restarting [0]\nSat Jan 11 16:42:13 2020 TCP connection established with [AF_INET]10.250.7.77:9036\nSat Jan 11 16:42:13 2020 10.250.7.77:9036 TCP connection established with [AF_INET]100.64.1.1:49976\nSat Jan 11 16:42:13 2020 10.250.7.77:9036 Connection reset, restarting [0]\nSat Jan 11 16:42:13 2020 100.64.1.1:49976 Connection reset, restarting [0]\nSat Jan 11 16:42:19 2020 TCP connection established with [AF_INET]10.250.7.77:25174\nSat Jan 11 16:42:19 2020 10.250.7.77:25174 TCP connection established with [AF_INET]100.64.1.1:54406\nSat Jan 11 16:42:19 2020 10.250.7.77:25174 Connection reset, restarting [0]\nSat Jan 11 16:42:19 2020 100.64.1.1:54406 Connection reset, restarting [0]\nSat Jan 11 16:42:23 2020 TCP connection established with [AF_INET]10.250.7.77:9042\nSat Jan 11 16:42:23 2020 10.250.7.77:9042 TCP connection established with [AF_INET]100.64.1.1:49982\nSat Jan 11 16:42:23 2020 10.250.7.77:9042 Connection reset, restarting [0]\nSat Jan 11 16:42:23 2020 100.64.1.1:49982 Connection reset, restarting [0]\nSat Jan 11 16:42:29 2020 TCP connection established with [AF_INET]10.250.7.77:25186\nSat Jan 11 16:42:29 2020 10.250.7.77:25186 TCP connection established with [AF_INET]100.64.1.1:54418\nSat Jan 11 16:42:29 2020 10.250.7.77:25186 Connection reset, restarting [0]\nSat Jan 11 16:42:29 2020 100.64.1.1:54418 Connection reset, restarting [0]\nSat Jan 11 16:42:33 2020 TCP connection established with [AF_INET]10.250.7.77:9052\nSat Jan 11 16:42:33 2020 10.250.7.77:9052 TCP connection established with [AF_INET]100.64.1.1:49992\nSat Jan 11 16:42:33 2020 10.250.7.77:9052 Connection reset, restarting [0]\nSat Jan 11 16:42:33 2020 100.64.1.1:49992 Connection reset, restarting [0]\nSat Jan 11 16:42:39 2020 TCP connection established with [AF_INET]10.250.7.77:25190\nSat Jan 11 16:42:39 
2020 10.250.7.77:25190 TCP connection established with [AF_INET]100.64.1.1:54422\nSat Jan 11 16:42:39 2020 10.250.7.77:25190 Connection reset, restarting [0]\nSat Jan 11 16:42:39 2020 100.64.1.1:54422 Connection reset, restarting [0]\nSat Jan 11 16:42:43 2020 TCP connection established with [AF_INET]10.250.7.77:9058\nSat Jan 11 16:42:43 2020 10.250.7.77:9058 TCP connection established with [AF_INET]100.64.1.1:49998\nSat Jan 11 16:42:43 2020 10.250.7.77:9058 Connection reset, restarting [0]\nSat Jan 11 16:42:43 2020 100.64.1.1:49998 Connection reset, restarting [0]\nSat Jan 11 16:42:49 2020 TCP connection established with [AF_INET]10.250.7.77:25202\nSat Jan 11 16:42:49 2020 10.250.7.77:25202 TCP connection established with [AF_INET]100.64.1.1:54434\nSat Jan 11 16:42:49 2020 10.250.7.77:25202 Connection reset, restarting [0]\nSat Jan 11 16:42:49 2020 100.64.1.1:54434 Connection reset, restarting [0]\nSat Jan 11 16:42:53 2020 TCP connection established with [AF_INET]10.250.7.77:9068\nSat Jan 11 16:42:53 2020 10.250.7.77:9068 TCP connection established with [AF_INET]100.64.1.1:50008\nSat Jan 11 16:42:53 2020 10.250.7.77:9068 Connection reset, restarting [0]\nSat Jan 11 16:42:53 2020 100.64.1.1:50008 Connection reset, restarting [0]\nSat Jan 11 16:42:59 2020 TCP connection established with [AF_INET]10.250.7.77:25210\nSat Jan 11 16:42:59 2020 10.250.7.77:25210 TCP connection established with [AF_INET]100.64.1.1:54442\nSat Jan 11 16:42:59 2020 10.250.7.77:25210 Connection reset, restarting [0]\nSat Jan 11 16:42:59 2020 100.64.1.1:54442 Connection reset, restarting [0]\nSat Jan 11 16:43:03 2020 TCP connection established with [AF_INET]10.250.7.77:9082\nSat Jan 11 16:43:03 2020 10.250.7.77:9082 TCP connection established with [AF_INET]100.64.1.1:50022\nSat Jan 11 16:43:03 2020 10.250.7.77:9082 Connection reset, restarting [0]\nSat Jan 11 16:43:03 2020 100.64.1.1:50022 Connection reset, restarting [0]\nSat Jan 11 16:43:09 2020 TCP connection established with [AF_INET]10.250.7.77:25224\nSat Jan 11 16:43:09 2020 10.250.7.77:25224 TCP connection established with [AF_INET]100.64.1.1:54456\nSat Jan 11 16:43:09 2020 10.250.7.77:25224 Connection reset, restarting [0]\nSat Jan 11 16:43:09 2020 100.64.1.1:54456 Connection reset, restarting [0]\nSat Jan 11 16:43:13 2020 TCP connection established with [AF_INET]100.64.1.1:50030\nSat Jan 11 16:43:13 2020 100.64.1.1:50030 TCP connection established with [AF_INET]10.250.7.77:9090\nSat Jan 11 16:43:13 2020 100.64.1.1:50030 Connection reset, restarting [0]\nSat Jan 11 16:43:13 2020 10.250.7.77:9090 Connection reset, restarting [0]\nSat Jan 11 16:43:19 2020 TCP connection established with [AF_INET]10.250.7.77:25232\nSat Jan 11 16:43:19 2020 10.250.7.77:25232 TCP connection established with [AF_INET]100.64.1.1:54464\nSat Jan 11 16:43:19 2020 10.250.7.77:25232 Connection reset, restarting [0]\nSat Jan 11 16:43:19 2020 100.64.1.1:54464 Connection reset, restarting [0]\nSat Jan 11 16:43:23 2020 TCP connection established with [AF_INET]10.250.7.77:9100\nSat Jan 11 16:43:23 2020 10.250.7.77:9100 TCP connection established with [AF_INET]100.64.1.1:50040\nSat Jan 11 16:43:23 2020 10.250.7.77:9100 Connection reset, restarting [0]\nSat Jan 11 16:43:23 2020 100.64.1.1:50040 Connection reset, restarting [0]\nSat Jan 11 16:43:29 2020 TCP connection established with [AF_INET]10.250.7.77:25240\nSat Jan 11 16:43:29 2020 10.250.7.77:25240 TCP connection established with [AF_INET]100.64.1.1:54472\nSat Jan 11 16:43:29 2020 10.250.7.77:25240 Connection reset, restarting [0]\nSat Jan 11 
Sat Jan 11 16:43:29 2020 through Sat Jan 11 17:00:23 2020 [condensed]: the OpenVPN log repeats the same pattern roughly every five seconds — "TCP connection established with [AF_INET]10.250.7.77:<port>" and "TCP connection established with [AF_INET]100.64.1.1:<port>", each immediately followed by "Connection reset, restarting [0]" for both peers. Entries that break this pattern:
Sat Jan 11 16:49:36 2020 vpn-seed/100.64.1.1:48682 Connection reset, restarting [0]
Sat Jan 11 16:50:16 2020 TCP connection established with [AF_INET]10.250.7.77:25686
Sat Jan 11 16:50:17 2020 10.250.7.77:25686 peer info: IV_VER=2.4.6, IV_PLAT=linux, IV_PROTO=2, IV_NCP=2, IV_LZ4=1, IV_LZ4v2=1, IV_LZO=1, IV_COMP_STUB=1, IV_COMP_STUBv2=1, IV_TCPNL=1
Sat Jan 11 16:50:17 2020 10.250.7.77:25686 [vpn-seed] Peer Connection Initiated with [AF_INET]10.250.7.77:25686
Sat Jan 11 16:50:17 2020 vpn-seed/10.250.7.77:25686 MULTI_sva: pool returned IPv4=192.168.123.6, IPv6=(Not enabled)
Sat Jan 11 16:58:36 2020 vpn-seed/10.250.7.77:25686 Connection reset, restarting [0]
Sat Jan 11 16:59:09 2020 TCP connection established with [AF_INET]100.64.1.1:51060
Sat Jan 11 16:59:10 2020 100.64.1.1:51060 peer info: IV_VER=2.4.6, IV_PLAT=linux, IV_PROTO=2, IV_NCP=2, IV_LZ4=1, IV_LZ4v2=1, IV_LZO=1, IV_COMP_STUB=1, IV_COMP_STUBv2=1, IV_TCPNL=1
Sat Jan 11 16:59:10 2020 100.64.1.1:51060 [vpn-seed] Peer Connection Initiated with [AF_INET]100.64.1.1:51060
Sat Jan 11 16:59:10 2020 vpn-seed/100.64.1.1:51060 MULTI_sva: pool returned IPv4=192.168.123.6, IPv6=(Not enabled)
100.64.1.1:51132 Connection reset, restarting [0]\nSat Jan 11 17:00:23 2020 10.250.7.77:10192 Connection reset, restarting [0]\nSat Jan 11 17:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_VER=2.4.6\nSat Jan 11 17:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_PLAT=linux\nSat Jan 11 17:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_PROTO=2\nSat Jan 11 17:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_LZ4=1\nSat Jan 11 17:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_LZ4v2=1\nSat Jan 11 17:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_LZO=1\nSat Jan 11 17:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_COMP_STUB=1\nSat Jan 11 17:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_COMP_STUBv2=1\nSat Jan 11 17:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_TCPNL=1\nSat Jan 11 17:00:29 2020 TCP connection established with [AF_INET]10.250.7.77:26332\nSat Jan 11 17:00:29 2020 10.250.7.77:26332 TCP connection established with [AF_INET]100.64.1.1:55564\nSat Jan 11 17:00:29 2020 10.250.7.77:26332 Connection reset, restarting [0]\nSat Jan 11 17:00:29 2020 100.64.1.1:55564 Connection reset, restarting [0]\nSat Jan 11 17:00:33 2020 TCP connection established with [AF_INET]10.250.7.77:10204\nSat Jan 11 17:00:33 2020 10.250.7.77:10204 TCP connection established with [AF_INET]100.64.1.1:51144\nSat Jan 11 17:00:33 2020 10.250.7.77:10204 Connection reset, restarting [0]\nSat Jan 11 17:00:33 2020 100.64.1.1:51144 Connection reset, restarting [0]\nSat Jan 11 17:00:39 2020 TCP connection established with [AF_INET]10.250.7.77:26338\nSat Jan 11 17:00:39 2020 10.250.7.77:26338 TCP connection established with [AF_INET]100.64.1.1:55570\nSat Jan 11 17:00:39 2020 10.250.7.77:26338 Connection reset, restarting [0]\nSat Jan 11 17:00:39 2020 100.64.1.1:55570 Connection reset, restarting [0]\nSat Jan 11 17:00:43 2020 TCP connection established with [AF_INET]10.250.7.77:10208\nSat Jan 11 17:00:43 2020 10.250.7.77:10208 TCP connection established with [AF_INET]100.64.1.1:51148\nSat Jan 11 17:00:43 2020 10.250.7.77:10208 Connection reset, restarting [0]\nSat Jan 11 17:00:43 2020 100.64.1.1:51148 Connection reset, restarting [0]\nSat Jan 11 17:00:49 2020 TCP connection established with [AF_INET]10.250.7.77:26356\nSat Jan 11 17:00:49 2020 10.250.7.77:26356 TCP connection established with [AF_INET]100.64.1.1:55588\nSat Jan 11 17:00:49 2020 10.250.7.77:26356 Connection reset, restarting [0]\nSat Jan 11 17:00:49 2020 100.64.1.1:55588 Connection reset, restarting [0]\nSat Jan 11 17:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_VER=2.4.6\nSat Jan 11 17:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_PLAT=linux\nSat Jan 11 17:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_PROTO=2\nSat Jan 11 17:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZ4=1\nSat Jan 11 17:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZ4v2=1\nSat Jan 11 17:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZO=1\nSat Jan 11 17:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_COMP_STUB=1\nSat Jan 11 17:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_COMP_STUBv2=1\nSat Jan 11 17:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_TCPNL=1\nSat Jan 11 17:00:53 2020 TCP connection established with [AF_INET]10.250.7.77:10228\nSat Jan 11 17:00:53 2020 10.250.7.77:10228 TCP connection established with [AF_INET]100.64.1.1:51168\nSat Jan 11 17:00:53 2020 10.250.7.77:10228 Connection reset, restarting [0]\nSat Jan 11 17:00:53 2020 100.64.1.1:51168 Connection reset, restarting [0]\nSat Jan 11 17:00:59 2020 TCP 
connection established with [AF_INET]10.250.7.77:26364\nSat Jan 11 17:00:59 2020 10.250.7.77:26364 TCP connection established with [AF_INET]100.64.1.1:55596\nSat Jan 11 17:00:59 2020 10.250.7.77:26364 Connection reset, restarting [0]\nSat Jan 11 17:00:59 2020 100.64.1.1:55596 Connection reset, restarting [0]\nSat Jan 11 17:01:03 2020 TCP connection established with [AF_INET]10.250.7.77:10242\nSat Jan 11 17:01:03 2020 10.250.7.77:10242 TCP connection established with [AF_INET]100.64.1.1:51182\nSat Jan 11 17:01:03 2020 10.250.7.77:10242 Connection reset, restarting [0]\nSat Jan 11 17:01:03 2020 100.64.1.1:51182 Connection reset, restarting [0]\nSat Jan 11 17:01:09 2020 TCP connection established with [AF_INET]10.250.7.77:26378\nSat Jan 11 17:01:09 2020 10.250.7.77:26378 TCP connection established with [AF_INET]100.64.1.1:55610\nSat Jan 11 17:01:09 2020 10.250.7.77:26378 Connection reset, restarting [0]\nSat Jan 11 17:01:09 2020 100.64.1.1:55610 Connection reset, restarting [0]\nSat Jan 11 17:01:13 2020 TCP connection established with [AF_INET]10.250.7.77:10250\nSat Jan 11 17:01:13 2020 10.250.7.77:10250 TCP connection established with [AF_INET]100.64.1.1:51190\nSat Jan 11 17:01:13 2020 10.250.7.77:10250 Connection reset, restarting [0]\nSat Jan 11 17:01:13 2020 100.64.1.1:51190 Connection reset, restarting [0]\nSat Jan 11 17:01:19 2020 TCP connection established with [AF_INET]10.250.7.77:26390\nSat Jan 11 17:01:19 2020 10.250.7.77:26390 TCP connection established with [AF_INET]100.64.1.1:55622\nSat Jan 11 17:01:19 2020 10.250.7.77:26390 Connection reset, restarting [0]\nSat Jan 11 17:01:19 2020 100.64.1.1:55622 Connection reset, restarting [0]\nSat Jan 11 17:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_VER=2.4.6\nSat Jan 11 17:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_PLAT=linux\nSat Jan 11 17:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_PROTO=2\nSat Jan 11 17:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZ4=1\nSat Jan 11 17:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZ4v2=1\nSat Jan 11 17:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZO=1\nSat Jan 11 17:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_COMP_STUB=1\nSat Jan 11 17:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_COMP_STUBv2=1\nSat Jan 11 17:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_TCPNL=1\nSat Jan 11 17:01:23 2020 TCP connection established with [AF_INET]10.250.7.77:10258\nSat Jan 11 17:01:23 2020 10.250.7.77:10258 TCP connection established with [AF_INET]100.64.1.1:51198\nSat Jan 11 17:01:23 2020 10.250.7.77:10258 Connection reset, restarting [0]\nSat Jan 11 17:01:23 2020 100.64.1.1:51198 Connection reset, restarting [0]\nSat Jan 11 17:01:29 2020 TCP connection established with [AF_INET]10.250.7.77:26400\nSat Jan 11 17:01:29 2020 10.250.7.77:26400 TCP connection established with [AF_INET]100.64.1.1:55632\nSat Jan 11 17:01:29 2020 10.250.7.77:26400 Connection reset, restarting [0]\nSat Jan 11 17:01:29 2020 100.64.1.1:55632 Connection reset, restarting [0]\nSat Jan 11 17:01:33 2020 TCP connection established with [AF_INET]100.64.1.1:51208\nSat Jan 11 17:01:33 2020 100.64.1.1:51208 TCP connection established with [AF_INET]10.250.7.77:10268\nSat Jan 11 17:01:33 2020 100.64.1.1:51208 Connection reset, restarting [0]\nSat Jan 11 17:01:33 2020 10.250.7.77:10268 Connection reset, restarting [0]\nSat Jan 11 17:01:39 2020 TCP connection established with [AF_INET]10.250.7.77:26404\nSat Jan 11 17:01:39 2020 10.250.7.77:26404 TCP connection established with 
[AF_INET]100.64.1.1:55636\nSat Jan 11 17:01:39 2020 10.250.7.77:26404 Connection reset, restarting [0]\nSat Jan 11 17:01:39 2020 100.64.1.1:55636 Connection reset, restarting [0]\nSat Jan 11 17:01:43 2020 TCP connection established with [AF_INET]10.250.7.77:10272\nSat Jan 11 17:01:43 2020 10.250.7.77:10272 TCP connection established with [AF_INET]100.64.1.1:51212\nSat Jan 11 17:01:43 2020 10.250.7.77:10272 Connection reset, restarting [0]\nSat Jan 11 17:01:43 2020 100.64.1.1:51212 Connection reset, restarting [0]\nSat Jan 11 17:01:49 2020 TCP connection established with [AF_INET]10.250.7.77:26414\nSat Jan 11 17:01:49 2020 10.250.7.77:26414 TCP connection established with [AF_INET]100.64.1.1:55646\nSat Jan 11 17:01:49 2020 10.250.7.77:26414 Connection reset, restarting [0]\nSat Jan 11 17:01:49 2020 100.64.1.1:55646 Connection reset, restarting [0]\nSat Jan 11 17:01:53 2020 TCP connection established with [AF_INET]10.250.7.77:10282\nSat Jan 11 17:01:53 2020 10.250.7.77:10282 TCP connection established with [AF_INET]100.64.1.1:51222\nSat Jan 11 17:01:53 2020 10.250.7.77:10282 Connection reset, restarting [0]\nSat Jan 11 17:01:53 2020 100.64.1.1:51222 Connection reset, restarting [0]\nSat Jan 11 17:01:59 2020 TCP connection established with [AF_INET]10.250.7.77:26422\nSat Jan 11 17:01:59 2020 10.250.7.77:26422 TCP connection established with [AF_INET]100.64.1.1:55654\nSat Jan 11 17:01:59 2020 10.250.7.77:26422 Connection reset, restarting [0]\nSat Jan 11 17:01:59 2020 100.64.1.1:55654 Connection reset, restarting [0]\nSat Jan 11 17:02:03 2020 TCP connection established with [AF_INET]100.64.1.1:51236\nSat Jan 11 17:02:03 2020 100.64.1.1:51236 Connection reset, restarting [0]\nSat Jan 11 17:02:03 2020 TCP connection established with [AF_INET]10.250.7.77:10296\nSat Jan 11 17:02:03 2020 10.250.7.77:10296 Connection reset, restarting [0]\nSat Jan 11 17:02:09 2020 TCP connection established with [AF_INET]10.250.7.77:26436\nSat Jan 11 17:02:09 2020 10.250.7.77:26436 TCP connection established with [AF_INET]100.64.1.1:55668\nSat Jan 11 17:02:09 2020 10.250.7.77:26436 Connection reset, restarting [0]\nSat Jan 11 17:02:09 2020 100.64.1.1:55668 Connection reset, restarting [0]\nSat Jan 11 17:02:13 2020 TCP connection established with [AF_INET]10.250.7.77:10308\nSat Jan 11 17:02:13 2020 10.250.7.77:10308 TCP connection established with [AF_INET]100.64.1.1:51248\nSat Jan 11 17:02:13 2020 10.250.7.77:10308 Connection reset, restarting [0]\nSat Jan 11 17:02:13 2020 100.64.1.1:51248 Connection reset, restarting [0]\nSat Jan 11 17:02:19 2020 TCP connection established with [AF_INET]10.250.7.77:26444\nSat Jan 11 17:02:19 2020 10.250.7.77:26444 TCP connection established with [AF_INET]100.64.1.1:55676\nSat Jan 11 17:02:19 2020 10.250.7.77:26444 Connection reset, restarting [0]\nSat Jan 11 17:02:19 2020 100.64.1.1:55676 Connection reset, restarting [0]\nSat Jan 11 17:02:23 2020 TCP connection established with [AF_INET]10.250.7.77:10316\nSat Jan 11 17:02:23 2020 10.250.7.77:10316 TCP connection established with [AF_INET]100.64.1.1:51256\nSat Jan 11 17:02:23 2020 10.250.7.77:10316 Connection reset, restarting [0]\nSat Jan 11 17:02:23 2020 100.64.1.1:51256 Connection reset, restarting [0]\nSat Jan 11 17:02:29 2020 TCP connection established with [AF_INET]10.250.7.77:26458\nSat Jan 11 17:02:29 2020 10.250.7.77:26458 TCP connection established with [AF_INET]100.64.1.1:55690\nSat Jan 11 17:02:29 2020 10.250.7.77:26458 Connection reset, restarting [0]\nSat Jan 11 17:02:29 2020 100.64.1.1:55690 Connection reset, 
restarting [0]\nSat Jan 11 17:02:33 2020 TCP connection established with [AF_INET]10.250.7.77:10326\nSat Jan 11 17:02:33 2020 10.250.7.77:10326 TCP connection established with [AF_INET]100.64.1.1:51266\nSat Jan 11 17:02:33 2020 10.250.7.77:10326 Connection reset, restarting [0]\nSat Jan 11 17:02:33 2020 100.64.1.1:51266 Connection reset, restarting [0]\nSat Jan 11 17:02:39 2020 TCP connection established with [AF_INET]10.250.7.77:26462\nSat Jan 11 17:02:39 2020 10.250.7.77:26462 TCP connection established with [AF_INET]100.64.1.1:55694\nSat Jan 11 17:02:39 2020 10.250.7.77:26462 Connection reset, restarting [0]\nSat Jan 11 17:02:39 2020 100.64.1.1:55694 Connection reset, restarting [0]\nSat Jan 11 17:02:43 2020 TCP connection established with [AF_INET]10.250.7.77:10330\nSat Jan 11 17:02:43 2020 10.250.7.77:10330 TCP connection established with [AF_INET]100.64.1.1:51270\nSat Jan 11 17:02:43 2020 10.250.7.77:10330 Connection reset, restarting [0]\nSat Jan 11 17:02:43 2020 100.64.1.1:51270 Connection reset, restarting [0]\nSat Jan 11 17:02:49 2020 TCP connection established with [AF_INET]10.250.7.77:26472\nSat Jan 11 17:02:49 2020 10.250.7.77:26472 TCP connection established with [AF_INET]100.64.1.1:55704\nSat Jan 11 17:02:49 2020 10.250.7.77:26472 Connection reset, restarting [0]\nSat Jan 11 17:02:49 2020 100.64.1.1:55704 Connection reset, restarting [0]\nSat Jan 11 17:02:53 2020 TCP connection established with [AF_INET]10.250.7.77:10340\nSat Jan 11 17:02:53 2020 10.250.7.77:10340 TCP connection established with [AF_INET]100.64.1.1:51280\nSat Jan 11 17:02:53 2020 10.250.7.77:10340 Connection reset, restarting [0]\nSat Jan 11 17:02:53 2020 100.64.1.1:51280 Connection reset, restarting [0]\nSat Jan 11 17:02:59 2020 TCP connection established with [AF_INET]10.250.7.77:26480\nSat Jan 11 17:02:59 2020 10.250.7.77:26480 TCP connection established with [AF_INET]100.64.1.1:55712\nSat Jan 11 17:02:59 2020 10.250.7.77:26480 Connection reset, restarting [0]\nSat Jan 11 17:02:59 2020 100.64.1.1:55712 Connection reset, restarting [0]\nSat Jan 11 17:03:03 2020 TCP connection established with [AF_INET]10.250.7.77:10354\nSat Jan 11 17:03:03 2020 10.250.7.77:10354 TCP connection established with [AF_INET]100.64.1.1:51294\nSat Jan 11 17:03:03 2020 10.250.7.77:10354 Connection reset, restarting [0]\nSat Jan 11 17:03:03 2020 100.64.1.1:51294 Connection reset, restarting [0]\nSat Jan 11 17:03:09 2020 TCP connection established with [AF_INET]10.250.7.77:26494\nSat Jan 11 17:03:09 2020 10.250.7.77:26494 TCP connection established with [AF_INET]100.64.1.1:55726\nSat Jan 11 17:03:09 2020 10.250.7.77:26494 Connection reset, restarting [0]\nSat Jan 11 17:03:09 2020 100.64.1.1:55726 Connection reset, restarting [0]\nSat Jan 11 17:03:13 2020 TCP connection established with [AF_INET]10.250.7.77:10364\nSat Jan 11 17:03:13 2020 10.250.7.77:10364 TCP connection established with [AF_INET]100.64.1.1:51304\nSat Jan 11 17:03:13 2020 10.250.7.77:10364 Connection reset, restarting [0]\nSat Jan 11 17:03:13 2020 100.64.1.1:51304 Connection reset, restarting [0]\nSat Jan 11 17:03:19 2020 TCP connection established with [AF_INET]10.250.7.77:26502\nSat Jan 11 17:03:19 2020 10.250.7.77:26502 TCP connection established with [AF_INET]100.64.1.1:55734\nSat Jan 11 17:03:19 2020 10.250.7.77:26502 Connection reset, restarting [0]\nSat Jan 11 17:03:19 2020 100.64.1.1:55734 Connection reset, restarting [0]\nSat Jan 11 17:03:23 2020 TCP connection established with [AF_INET]10.250.7.77:10374\nSat Jan 11 17:03:23 2020 10.250.7.77:10374 TCP 
connection established with [AF_INET]100.64.1.1:51314\nSat Jan 11 17:03:23 2020 10.250.7.77:10374 Connection reset, restarting [0]\nSat Jan 11 17:03:23 2020 100.64.1.1:51314 Connection reset, restarting [0]\nSat Jan 11 17:03:29 2020 TCP connection established with [AF_INET]10.250.7.77:26512\nSat Jan 11 17:03:29 2020 10.250.7.77:26512 TCP connection established with [AF_INET]100.64.1.1:55744\nSat Jan 11 17:03:29 2020 10.250.7.77:26512 Connection reset, restarting [0]\nSat Jan 11 17:03:29 2020 100.64.1.1:55744 Connection reset, restarting [0]\nSat Jan 11 17:03:33 2020 TCP connection established with [AF_INET]10.250.7.77:10384\nSat Jan 11 17:03:33 2020 10.250.7.77:10384 TCP connection established with [AF_INET]100.64.1.1:51324\nSat Jan 11 17:03:33 2020 10.250.7.77:10384 Connection reset, restarting [0]\nSat Jan 11 17:03:33 2020 100.64.1.1:51324 Connection reset, restarting [0]\nSat Jan 11 17:03:39 2020 TCP connection established with [AF_INET]10.250.7.77:26516\nSat Jan 11 17:03:39 2020 10.250.7.77:26516 TCP connection established with [AF_INET]100.64.1.1:55748\nSat Jan 11 17:03:39 2020 10.250.7.77:26516 Connection reset, restarting [0]\nSat Jan 11 17:03:39 2020 100.64.1.1:55748 Connection reset, restarting [0]\nSat Jan 11 17:03:43 2020 TCP connection established with [AF_INET]10.250.7.77:10388\nSat Jan 11 17:03:43 2020 10.250.7.77:10388 TCP connection established with [AF_INET]100.64.1.1:51328\nSat Jan 11 17:03:43 2020 10.250.7.77:10388 Connection reset, restarting [0]\nSat Jan 11 17:03:43 2020 100.64.1.1:51328 Connection reset, restarting [0]\nSat Jan 11 17:03:49 2020 TCP connection established with [AF_INET]10.250.7.77:26528\nSat Jan 11 17:03:49 2020 10.250.7.77:26528 TCP connection established with [AF_INET]100.64.1.1:55760\nSat Jan 11 17:03:49 2020 10.250.7.77:26528 Connection reset, restarting [0]\nSat Jan 11 17:03:49 2020 100.64.1.1:55760 Connection reset, restarting [0]\nSat Jan 11 17:03:53 2020 TCP connection established with [AF_INET]10.250.7.77:10398\nSat Jan 11 17:03:53 2020 10.250.7.77:10398 TCP connection established with [AF_INET]100.64.1.1:51338\nSat Jan 11 17:03:53 2020 10.250.7.77:10398 Connection reset, restarting [0]\nSat Jan 11 17:03:53 2020 100.64.1.1:51338 Connection reset, restarting [0]\nSat Jan 11 17:03:59 2020 TCP connection established with [AF_INET]10.250.7.77:26544\nSat Jan 11 17:03:59 2020 10.250.7.77:26544 TCP connection established with [AF_INET]100.64.1.1:55776\nSat Jan 11 17:03:59 2020 10.250.7.77:26544 Connection reset, restarting [0]\nSat Jan 11 17:03:59 2020 100.64.1.1:55776 Connection reset, restarting [0]\nSat Jan 11 17:04:03 2020 TCP connection established with [AF_INET]100.64.1.1:51352\nSat Jan 11 17:04:03 2020 100.64.1.1:51352 TCP connection established with [AF_INET]10.250.7.77:10412\nSat Jan 11 17:04:03 2020 100.64.1.1:51352 Connection reset, restarting [0]\nSat Jan 11 17:04:03 2020 10.250.7.77:10412 Connection reset, restarting [0]\nSat Jan 11 17:04:09 2020 TCP connection established with [AF_INET]10.250.7.77:26554\nSat Jan 11 17:04:09 2020 10.250.7.77:26554 TCP connection established with [AF_INET]100.64.1.1:55786\nSat Jan 11 17:04:09 2020 10.250.7.77:26554 Connection reset, restarting [0]\nSat Jan 11 17:04:09 2020 100.64.1.1:55786 Connection reset, restarting [0]\nSat Jan 11 17:04:13 2020 TCP connection established with [AF_INET]10.250.7.77:10422\nSat Jan 11 17:04:13 2020 10.250.7.77:10422 TCP connection established with [AF_INET]100.64.1.1:51362\nSat Jan 11 17:04:13 2020 10.250.7.77:10422 Connection reset, restarting [0]\nSat Jan 11 17:04:13 
2020 100.64.1.1:51362 Connection reset, restarting [0]\nSat Jan 11 17:04:19 2020 TCP connection established with [AF_INET]10.250.7.77:26568\nSat Jan 11 17:04:19 2020 10.250.7.77:26568 TCP connection established with [AF_INET]100.64.1.1:55800\nSat Jan 11 17:04:19 2020 10.250.7.77:26568 Connection reset, restarting [0]\nSat Jan 11 17:04:19 2020 100.64.1.1:55800 Connection reset, restarting [0]\nSat Jan 11 17:04:23 2020 TCP connection established with [AF_INET]10.250.7.77:10428\nSat Jan 11 17:04:23 2020 10.250.7.77:10428 TCP connection established with [AF_INET]100.64.1.1:51368\nSat Jan 11 17:04:23 2020 10.250.7.77:10428 Connection reset, restarting [0]\nSat Jan 11 17:04:23 2020 100.64.1.1:51368 Connection reset, restarting [0]\nSat Jan 11 17:04:29 2020 TCP connection established with [AF_INET]10.250.7.77:26572\nSat Jan 11 17:04:29 2020 10.250.7.77:26572 TCP connection established with [AF_INET]100.64.1.1:55804\nSat Jan 11 17:04:29 2020 10.250.7.77:26572 Connection reset, restarting [0]\nSat Jan 11 17:04:29 2020 100.64.1.1:55804 Connection reset, restarting [0]\nSat Jan 11 17:04:33 2020 TCP connection established with [AF_INET]10.250.7.77:10438\nSat Jan 11 17:04:33 2020 10.250.7.77:10438 TCP connection established with [AF_INET]100.64.1.1:51378\nSat Jan 11 17:04:33 2020 10.250.7.77:10438 Connection reset, restarting [0]\nSat Jan 11 17:04:33 2020 100.64.1.1:51378 Connection reset, restarting [0]\nSat Jan 11 17:04:39 2020 TCP connection established with [AF_INET]10.250.7.77:26580\nSat Jan 11 17:04:39 2020 10.250.7.77:26580 TCP connection established with [AF_INET]100.64.1.1:55812\nSat Jan 11 17:04:39 2020 10.250.7.77:26580 Connection reset, restarting [0]\nSat Jan 11 17:04:39 2020 100.64.1.1:55812 Connection reset, restarting [0]\nSat Jan 11 17:04:43 2020 TCP connection established with [AF_INET]10.250.7.77:10446\nSat Jan 11 17:04:43 2020 10.250.7.77:10446 TCP connection established with [AF_INET]100.64.1.1:51386\nSat Jan 11 17:04:43 2020 10.250.7.77:10446 Connection reset, restarting [0]\nSat Jan 11 17:04:43 2020 100.64.1.1:51386 Connection reset, restarting [0]\nSat Jan 11 17:04:49 2020 TCP connection established with [AF_INET]10.250.7.77:26586\nSat Jan 11 17:04:49 2020 10.250.7.77:26586 TCP connection established with [AF_INET]100.64.1.1:55818\nSat Jan 11 17:04:49 2020 10.250.7.77:26586 Connection reset, restarting [0]\nSat Jan 11 17:04:49 2020 100.64.1.1:55818 Connection reset, restarting [0]\nSat Jan 11 17:04:53 2020 TCP connection established with [AF_INET]10.250.7.77:10456\nSat Jan 11 17:04:53 2020 10.250.7.77:10456 TCP connection established with [AF_INET]100.64.1.1:51396\nSat Jan 11 17:04:53 2020 10.250.7.77:10456 Connection reset, restarting [0]\nSat Jan 11 17:04:53 2020 100.64.1.1:51396 Connection reset, restarting [0]\nSat Jan 11 17:04:59 2020 TCP connection established with [AF_INET]10.250.7.77:26602\nSat Jan 11 17:04:59 2020 10.250.7.77:26602 TCP connection established with [AF_INET]100.64.1.1:55834\nSat Jan 11 17:04:59 2020 10.250.7.77:26602 Connection reset, restarting [0]\nSat Jan 11 17:04:59 2020 100.64.1.1:55834 Connection reset, restarting [0]\nSat Jan 11 17:05:03 2020 TCP connection established with [AF_INET]10.250.7.77:10470\nSat Jan 11 17:05:03 2020 10.250.7.77:10470 TCP connection established with [AF_INET]100.64.1.1:51410\nSat Jan 11 17:05:03 2020 10.250.7.77:10470 Connection reset, restarting [0]\nSat Jan 11 17:05:03 2020 100.64.1.1:51410 Connection reset, restarting [0]\nSat Jan 11 17:05:09 2020 TCP connection established with [AF_INET]10.250.7.77:26612\nSat Jan 11 
17:05:09 2020 10.250.7.77:26612 TCP connection established with [AF_INET]100.64.1.1:55844\nSat Jan 11 17:05:09 2020 10.250.7.77:26612 Connection reset, restarting [0]\nSat Jan 11 17:05:09 2020 100.64.1.1:55844 Connection reset, restarting [0]\nSat Jan 11 17:05:13 2020 TCP connection established with [AF_INET]10.250.7.77:10480\nSat Jan 11 17:05:13 2020 10.250.7.77:10480 TCP connection established with [AF_INET]100.64.1.1:51420\nSat Jan 11 17:05:13 2020 10.250.7.77:10480 Connection reset, restarting [0]\nSat Jan 11 17:05:13 2020 100.64.1.1:51420 Connection reset, restarting [0]\nSat Jan 11 17:05:19 2020 TCP connection established with [AF_INET]10.250.7.77:26626\nSat Jan 11 17:05:19 2020 10.250.7.77:26626 TCP connection established with [AF_INET]100.64.1.1:55858\nSat Jan 11 17:05:19 2020 10.250.7.77:26626 Connection reset, restarting [0]\nSat Jan 11 17:05:19 2020 100.64.1.1:55858 Connection reset, restarting [0]\nSat Jan 11 17:05:23 2020 TCP connection established with [AF_INET]10.250.7.77:10486\nSat Jan 11 17:05:23 2020 10.250.7.77:10486 TCP connection established with [AF_INET]100.64.1.1:51426\nSat Jan 11 17:05:23 2020 10.250.7.77:10486 Connection reset, restarting [0]\nSat Jan 11 17:05:23 2020 100.64.1.1:51426 Connection reset, restarting [0]\nSat Jan 11 17:05:29 2020 TCP connection established with [AF_INET]10.250.7.77:26630\nSat Jan 11 17:05:29 2020 10.250.7.77:26630 TCP connection established with [AF_INET]100.64.1.1:55862\nSat Jan 11 17:05:29 2020 10.250.7.77:26630 Connection reset, restarting [0]\nSat Jan 11 17:05:29 2020 100.64.1.1:55862 Connection reset, restarting [0]\nSat Jan 11 17:05:33 2020 TCP connection established with [AF_INET]10.250.7.77:10496\nSat Jan 11 17:05:33 2020 10.250.7.77:10496 TCP connection established with [AF_INET]100.64.1.1:51436\nSat Jan 11 17:05:33 2020 10.250.7.77:10496 Connection reset, restarting [0]\nSat Jan 11 17:05:33 2020 100.64.1.1:51436 Connection reset, restarting [0]\nSat Jan 11 17:05:39 2020 TCP connection established with [AF_INET]10.250.7.77:26638\nSat Jan 11 17:05:39 2020 10.250.7.77:26638 TCP connection established with [AF_INET]100.64.1.1:55870\nSat Jan 11 17:05:39 2020 10.250.7.77:26638 Connection reset, restarting [0]\nSat Jan 11 17:05:39 2020 100.64.1.1:55870 Connection reset, restarting [0]\nSat Jan 11 17:05:43 2020 TCP connection established with [AF_INET]10.250.7.77:10500\nSat Jan 11 17:05:43 2020 10.250.7.77:10500 TCP connection established with [AF_INET]100.64.1.1:51440\nSat Jan 11 17:05:43 2020 10.250.7.77:10500 Connection reset, restarting [0]\nSat Jan 11 17:05:43 2020 100.64.1.1:51440 Connection reset, restarting [0]\nSat Jan 11 17:05:49 2020 TCP connection established with [AF_INET]10.250.7.77:26644\nSat Jan 11 17:05:49 2020 10.250.7.77:26644 TCP connection established with [AF_INET]100.64.1.1:55876\nSat Jan 11 17:05:49 2020 10.250.7.77:26644 Connection reset, restarting [0]\nSat Jan 11 17:05:49 2020 100.64.1.1:55876 Connection reset, restarting [0]\nSat Jan 11 17:05:53 2020 TCP connection established with [AF_INET]10.250.7.77:10514\nSat Jan 11 17:05:53 2020 10.250.7.77:10514 TCP connection established with [AF_INET]100.64.1.1:51454\nSat Jan 11 17:05:53 2020 10.250.7.77:10514 Connection reset, restarting [0]\nSat Jan 11 17:05:53 2020 100.64.1.1:51454 Connection reset, restarting [0]\nSat Jan 11 17:05:59 2020 TCP connection established with [AF_INET]100.64.1.1:55888\nSat Jan 11 17:05:59 2020 100.64.1.1:55888 Connection reset, restarting [0]\nSat Jan 11 17:05:59 2020 TCP connection established with [AF_INET]10.250.7.77:26656\nSat 
Jan 11 17:05:59 2020 10.250.7.77:26656 Connection reset, restarting [0]\nSat Jan 11 17:06:03 2020 TCP connection established with [AF_INET]10.250.7.77:10564\nSat Jan 11 17:06:03 2020 10.250.7.77:10564 TCP connection established with [AF_INET]100.64.1.1:51504\nSat Jan 11 17:06:03 2020 10.250.7.77:10564 Connection reset, restarting [0]\nSat Jan 11 17:06:03 2020 100.64.1.1:51504 Connection reset, restarting [0]\nSat Jan 11 17:06:09 2020 TCP connection established with [AF_INET]10.250.7.77:26668\nSat Jan 11 17:06:09 2020 10.250.7.77:26668 TCP connection established with [AF_INET]100.64.1.1:55900\nSat Jan 11 17:06:09 2020 10.250.7.77:26668 Connection reset, restarting [0]\nSat Jan 11 17:06:09 2020 100.64.1.1:55900 Connection reset, restarting [0]\nSat Jan 11 17:06:13 2020 TCP connection established with [AF_INET]10.250.7.77:10574\nSat Jan 11 17:06:13 2020 10.250.7.77:10574 TCP connection established with [AF_INET]100.64.1.1:51514\nSat Jan 11 17:06:13 2020 10.250.7.77:10574 Connection reset, restarting [0]\nSat Jan 11 17:06:13 2020 100.64.1.1:51514 Connection reset, restarting [0]\nSat Jan 11 17:06:19 2020 TCP connection established with [AF_INET]10.250.7.77:26684\nSat Jan 11 17:06:19 2020 10.250.7.77:26684 TCP connection established with [AF_INET]100.64.1.1:55916\nSat Jan 11 17:06:19 2020 10.250.7.77:26684 Connection reset, restarting [0]\nSat Jan 11 17:06:19 2020 100.64.1.1:55916 Connection reset, restarting [0]\nSat Jan 11 17:06:23 2020 TCP connection established with [AF_INET]10.250.7.77:10580\nSat Jan 11 17:06:23 2020 10.250.7.77:10580 TCP connection established with [AF_INET]100.64.1.1:51520\nSat Jan 11 17:06:23 2020 10.250.7.77:10580 Connection reset, restarting [0]\nSat Jan 11 17:06:23 2020 100.64.1.1:51520 Connection reset, restarting [0]\nSat Jan 11 17:06:29 2020 TCP connection established with [AF_INET]10.250.7.77:26688\nSat Jan 11 17:06:29 2020 10.250.7.77:26688 TCP connection established with [AF_INET]100.64.1.1:55920\nSat Jan 11 17:06:29 2020 10.250.7.77:26688 Connection reset, restarting [0]\nSat Jan 11 17:06:29 2020 100.64.1.1:55920 Connection reset, restarting [0]\nSat Jan 11 17:06:33 2020 TCP connection established with [AF_INET]10.250.7.77:10590\nSat Jan 11 17:06:33 2020 10.250.7.77:10590 TCP connection established with [AF_INET]100.64.1.1:51530\nSat Jan 11 17:06:33 2020 10.250.7.77:10590 Connection reset, restarting [0]\nSat Jan 11 17:06:33 2020 100.64.1.1:51530 Connection reset, restarting [0]\nSat Jan 11 17:06:39 2020 TCP connection established with [AF_INET]10.250.7.77:26696\nSat Jan 11 17:06:39 2020 10.250.7.77:26696 TCP connection established with [AF_INET]100.64.1.1:55928\nSat Jan 11 17:06:39 2020 10.250.7.77:26696 Connection reset, restarting [0]\nSat Jan 11 17:06:39 2020 100.64.1.1:55928 Connection reset, restarting [0]\nSat Jan 11 17:06:43 2020 TCP connection established with [AF_INET]10.250.7.77:10594\nSat Jan 11 17:06:43 2020 10.250.7.77:10594 TCP connection established with [AF_INET]100.64.1.1:51534\nSat Jan 11 17:06:43 2020 10.250.7.77:10594 Connection reset, restarting [0]\nSat Jan 11 17:06:43 2020 100.64.1.1:51534 Connection reset, restarting [0]\nSat Jan 11 17:06:49 2020 TCP connection established with [AF_INET]10.250.7.77:26702\nSat Jan 11 17:06:49 2020 10.250.7.77:26702 TCP connection established with [AF_INET]100.64.1.1:55934\nSat Jan 11 17:06:49 2020 10.250.7.77:26702 Connection reset, restarting [0]\nSat Jan 11 17:06:49 2020 100.64.1.1:55934 Connection reset, restarting [0]\nSat Jan 11 17:06:53 2020 TCP connection established with 
[AF_INET]10.250.7.77:10610\nSat Jan 11 17:06:53 2020 10.250.7.77:10610 TCP connection established with [AF_INET]100.64.1.1:51550\nSat Jan 11 17:06:53 2020 10.250.7.77:10610 Connection reset, restarting [0]\nSat Jan 11 17:06:53 2020 100.64.1.1:51550 Connection reset, restarting [0]\nSat Jan 11 17:06:59 2020 TCP connection established with [AF_INET]10.250.7.77:26714\nSat Jan 11 17:06:59 2020 10.250.7.77:26714 TCP connection established with [AF_INET]100.64.1.1:55946\nSat Jan 11 17:06:59 2020 10.250.7.77:26714 Connection reset, restarting [0]\nSat Jan 11 17:06:59 2020 100.64.1.1:55946 Connection reset, restarting [0]\nSat Jan 11 17:07:03 2020 TCP connection established with [AF_INET]10.250.7.77:10626\nSat Jan 11 17:07:03 2020 10.250.7.77:10626 TCP connection established with [AF_INET]100.64.1.1:51566\nSat Jan 11 17:07:03 2020 10.250.7.77:10626 Connection reset, restarting [0]\nSat Jan 11 17:07:03 2020 100.64.1.1:51566 Connection reset, restarting [0]\nSat Jan 11 17:07:09 2020 TCP connection established with [AF_INET]10.250.7.77:26726\nSat Jan 11 17:07:09 2020 10.250.7.77:26726 TCP connection established with [AF_INET]100.64.1.1:55958\nSat Jan 11 17:07:09 2020 10.250.7.77:26726 Connection reset, restarting [0]\nSat Jan 11 17:07:09 2020 100.64.1.1:55958 Connection reset, restarting [0]\nSat Jan 11 17:07:13 2020 TCP connection established with [AF_INET]10.250.7.77:10638\nSat Jan 11 17:07:13 2020 10.250.7.77:10638 TCP connection established with [AF_INET]100.64.1.1:51578\nSat Jan 11 17:07:13 2020 10.250.7.77:10638 Connection reset, restarting [0]\nSat Jan 11 17:07:13 2020 100.64.1.1:51578 Connection reset, restarting [0]\nSat Jan 11 17:07:19 2020 TCP connection established with [AF_INET]10.250.7.77:26738\nSat Jan 11 17:07:19 2020 10.250.7.77:26738 TCP connection established with [AF_INET]100.64.1.1:55970\nSat Jan 11 17:07:19 2020 10.250.7.77:26738 Connection reset, restarting [0]\nSat Jan 11 17:07:19 2020 100.64.1.1:55970 Connection reset, restarting [0]\nSat Jan 11 17:07:23 2020 TCP connection established with [AF_INET]10.250.7.77:10644\nSat Jan 11 17:07:23 2020 10.250.7.77:10644 TCP connection established with [AF_INET]100.64.1.1:51584\nSat Jan 11 17:07:23 2020 10.250.7.77:10644 Connection reset, restarting [0]\nSat Jan 11 17:07:23 2020 100.64.1.1:51584 Connection reset, restarting [0]\nSat Jan 11 17:07:29 2020 TCP connection established with [AF_INET]10.250.7.77:26746\nSat Jan 11 17:07:29 2020 10.250.7.77:26746 TCP connection established with [AF_INET]100.64.1.1:55978\nSat Jan 11 17:07:29 2020 10.250.7.77:26746 Connection reset, restarting [0]\nSat Jan 11 17:07:29 2020 100.64.1.1:55978 Connection reset, restarting [0]\nSat Jan 11 17:07:33 2020 TCP connection established with [AF_INET]10.250.7.77:10654\nSat Jan 11 17:07:33 2020 10.250.7.77:10654 TCP connection established with [AF_INET]100.64.1.1:51594\nSat Jan 11 17:07:33 2020 10.250.7.77:10654 Connection reset, restarting [0]\nSat Jan 11 17:07:33 2020 100.64.1.1:51594 Connection reset, restarting [0]\nSat Jan 11 17:07:39 2020 TCP connection established with [AF_INET]100.64.1.1:55986\nSat Jan 11 17:07:39 2020 100.64.1.1:55986 TCP connection established with [AF_INET]10.250.7.77:26754\nSat Jan 11 17:07:39 2020 100.64.1.1:55986 Connection reset, restarting [0]\nSat Jan 11 17:07:39 2020 10.250.7.77:26754 Connection reset, restarting [0]\nSat Jan 11 17:07:43 2020 TCP connection established with [AF_INET]100.64.1.1:51598\nSat Jan 11 17:07:43 2020 100.64.1.1:51598 TCP connection established with [AF_INET]10.250.7.77:10658\nSat Jan 11 17:07:43 2020 
100.64.1.1:51598 Connection reset, restarting [0]\nSat Jan 11 17:07:43 2020 10.250.7.77:10658 Connection reset, restarting [0]\nSat Jan 11 17:07:49 2020 TCP connection established with [AF_INET]10.250.7.77:26760\nSat Jan 11 17:07:49 2020 10.250.7.77:26760 TCP connection established with [AF_INET]100.64.1.1:55992\nSat Jan 11 17:07:49 2020 10.250.7.77:26760 Connection reset, restarting [0]\nSat Jan 11 17:07:49 2020 100.64.1.1:55992 Connection reset, restarting [0]\nSat Jan 11 17:07:53 2020 TCP connection established with [AF_INET]10.250.7.77:10682\nSat Jan 11 17:07:53 2020 10.250.7.77:10682 TCP connection established with [AF_INET]100.64.1.1:51622\nSat Jan 11 17:07:53 2020 10.250.7.77:10682 Connection reset, restarting [0]\nSat Jan 11 17:07:53 2020 100.64.1.1:51622 Connection reset, restarting [0]\nSat Jan 11 17:07:59 2020 TCP connection established with [AF_INET]10.250.7.77:26772\nSat Jan 11 17:07:59 2020 10.250.7.77:26772 TCP connection established with [AF_INET]100.64.1.1:56004\nSat Jan 11 17:07:59 2020 10.250.7.77:26772 Connection reset, restarting [0]\nSat Jan 11 17:07:59 2020 100.64.1.1:56004 Connection reset, restarting [0]\nSat Jan 11 17:08:03 2020 TCP connection established with [AF_INET]10.250.7.77:10696\nSat Jan 11 17:08:03 2020 10.250.7.77:10696 TCP connection established with [AF_INET]100.64.1.1:51636\nSat Jan 11 17:08:03 2020 10.250.7.77:10696 Connection reset, restarting [0]\nSat Jan 11 17:08:03 2020 100.64.1.1:51636 Connection reset, restarting [0]\nSat Jan 11 17:08:09 2020 TCP connection established with [AF_INET]10.250.7.77:26784\nSat Jan 11 17:08:09 2020 10.250.7.77:26784 TCP connection established with [AF_INET]100.64.1.1:56016\nSat Jan 11 17:08:09 2020 10.250.7.77:26784 Connection reset, restarting [0]\nSat Jan 11 17:08:09 2020 100.64.1.1:56016 Connection reset, restarting [0]\nSat Jan 11 17:08:13 2020 TCP connection established with [AF_INET]10.250.7.77:10704\nSat Jan 11 17:08:13 2020 10.250.7.77:10704 TCP connection established with [AF_INET]100.64.1.1:51644\nSat Jan 11 17:08:13 2020 10.250.7.77:10704 Connection reset, restarting [0]\nSat Jan 11 17:08:13 2020 100.64.1.1:51644 Connection reset, restarting [0]\nSat Jan 11 17:08:19 2020 TCP connection established with [AF_INET]10.250.7.77:26796\nSat Jan 11 17:08:19 2020 10.250.7.77:26796 TCP connection established with [AF_INET]100.64.1.1:56028\nSat Jan 11 17:08:19 2020 10.250.7.77:26796 Connection reset, restarting [0]\nSat Jan 11 17:08:19 2020 100.64.1.1:56028 Connection reset, restarting [0]\nSat Jan 11 17:08:23 2020 TCP connection established with [AF_INET]10.250.7.77:10714\nSat Jan 11 17:08:23 2020 10.250.7.77:10714 TCP connection established with [AF_INET]100.64.1.1:51654\nSat Jan 11 17:08:23 2020 10.250.7.77:10714 Connection reset, restarting [0]\nSat Jan 11 17:08:23 2020 100.64.1.1:51654 Connection reset, restarting [0]\nSat Jan 11 17:08:29 2020 TCP connection established with [AF_INET]10.250.7.77:26800\nSat Jan 11 17:08:29 2020 10.250.7.77:26800 TCP connection established with [AF_INET]100.64.1.1:56032\nSat Jan 11 17:08:29 2020 10.250.7.77:26800 Connection reset, restarting [0]\nSat Jan 11 17:08:29 2020 100.64.1.1:56032 Connection reset, restarting [0]\nSat Jan 11 17:08:33 2020 TCP connection established with [AF_INET]10.250.7.77:10724\nSat Jan 11 17:08:33 2020 10.250.7.77:10724 TCP connection established with [AF_INET]100.64.1.1:51664\nSat Jan 11 17:08:33 2020 10.250.7.77:10724 Connection reset, restarting [0]\nSat Jan 11 17:08:33 2020 100.64.1.1:51664 Connection reset, restarting [0]\nSat Jan 11 17:08:39 2020 
TCP connection established with [AF_INET]10.250.7.77:26808\nSat Jan 11 17:08:39 2020 10.250.7.77:26808 TCP connection established with [AF_INET]100.64.1.1:56040\nSat Jan 11 17:08:39 2020 10.250.7.77:26808 Connection reset, restarting [0]\nSat Jan 11 17:08:39 2020 100.64.1.1:56040 Connection reset, restarting [0]\nSat Jan 11 17:08:43 2020 TCP connection established with [AF_INET]10.250.7.77:10728\nSat Jan 11 17:08:43 2020 10.250.7.77:10728 TCP connection established with [AF_INET]100.64.1.1:51668\nSat Jan 11 17:08:43 2020 10.250.7.77:10728 Connection reset, restarting [0]\nSat Jan 11 17:08:43 2020 100.64.1.1:51668 Connection reset, restarting [0]\nSat Jan 11 17:08:49 2020 TCP connection established with [AF_INET]10.250.7.77:26818\nSat Jan 11 17:08:49 2020 10.250.7.77:26818 TCP connection established with [AF_INET]100.64.1.1:56050\nSat Jan 11 17:08:49 2020 10.250.7.77:26818 Connection reset, restarting [0]\nSat Jan 11 17:08:49 2020 100.64.1.1:56050 Connection reset, restarting [0]\nSat Jan 11 17:08:53 2020 TCP connection established with [AF_INET]10.250.7.77:10740\nSat Jan 11 17:08:53 2020 10.250.7.77:10740 TCP connection established with [AF_INET]100.64.1.1:51680\nSat Jan 11 17:08:53 2020 10.250.7.77:10740 Connection reset, restarting [0]\nSat Jan 11 17:08:53 2020 100.64.1.1:51680 Connection reset, restarting [0]\nSat Jan 11 17:08:59 2020 TCP connection established with [AF_INET]10.250.7.77:26866\nSat Jan 11 17:08:59 2020 10.250.7.77:26866 TCP connection established with [AF_INET]100.64.1.1:56098\nSat Jan 11 17:08:59 2020 10.250.7.77:26866 Connection reset, restarting [0]\nSat Jan 11 17:08:59 2020 100.64.1.1:56098 Connection reset, restarting [0]\nSat Jan 11 17:09:03 2020 TCP connection established with [AF_INET]10.250.7.77:10754\nSat Jan 11 17:09:03 2020 10.250.7.77:10754 TCP connection established with [AF_INET]100.64.1.1:51694\nSat Jan 11 17:09:03 2020 10.250.7.77:10754 Connection reset, restarting [0]\nSat Jan 11 17:09:03 2020 100.64.1.1:51694 Connection reset, restarting [0]\nSat Jan 11 17:09:09 2020 TCP connection established with [AF_INET]10.250.7.77:26878\nSat Jan 11 17:09:09 2020 10.250.7.77:26878 TCP connection established with [AF_INET]100.64.1.1:56110\nSat Jan 11 17:09:09 2020 10.250.7.77:26878 Connection reset, restarting [0]\nSat Jan 11 17:09:09 2020 100.64.1.1:56110 Connection reset, restarting [0]\nSat Jan 11 17:09:13 2020 TCP connection established with [AF_INET]10.250.7.77:10762\nSat Jan 11 17:09:13 2020 10.250.7.77:10762 TCP connection established with [AF_INET]100.64.1.1:51702\nSat Jan 11 17:09:13 2020 10.250.7.77:10762 Connection reset, restarting [0]\nSat Jan 11 17:09:13 2020 100.64.1.1:51702 Connection reset, restarting [0]\nSat Jan 11 17:09:19 2020 TCP connection established with [AF_INET]10.250.7.77:26890\nSat Jan 11 17:09:19 2020 10.250.7.77:26890 TCP connection established with [AF_INET]100.64.1.1:56122\nSat Jan 11 17:09:19 2020 10.250.7.77:26890 Connection reset, restarting [0]\nSat Jan 11 17:09:19 2020 100.64.1.1:56122 Connection reset, restarting [0]\nSat Jan 11 17:09:23 2020 TCP connection established with [AF_INET]10.250.7.77:10768\nSat Jan 11 17:09:23 2020 10.250.7.77:10768 TCP connection established with [AF_INET]100.64.1.1:51708\nSat Jan 11 17:09:23 2020 10.250.7.77:10768 Connection reset, restarting [0]\nSat Jan 11 17:09:23 2020 100.64.1.1:51708 Connection reset, restarting [0]\nSat Jan 11 17:09:29 2020 TCP connection established with [AF_INET]10.250.7.77:26894\nSat Jan 11 17:09:29 2020 10.250.7.77:26894 TCP connection established with 
[AF_INET]100.64.1.1:56126\nSat Jan 11 17:09:29 2020 10.250.7.77:26894 Connection reset, restarting [0]\nSat Jan 11 17:09:29 2020 100.64.1.1:56126 Connection reset, restarting [0]\nSat Jan 11 17:09:33 2020 TCP connection established with [AF_INET]10.250.7.77:10778\nSat Jan 11 17:09:33 2020 10.250.7.77:10778 TCP connection established with [AF_INET]100.64.1.1:51718\nSat Jan 11 17:09:33 2020 10.250.7.77:10778 Connection reset, restarting [0]\nSat Jan 11 17:09:33 2020 100.64.1.1:51718 Connection reset, restarting [0]\nSat Jan 11 17:09:39 2020 TCP connection established with [AF_INET]10.250.7.77:26902\nSat Jan 11 17:09:39 2020 10.250.7.77:26902 TCP connection established with [AF_INET]100.64.1.1:56134\nSat Jan 11 17:09:39 2020 10.250.7.77:26902 Connection reset, restarting [0]\nSat Jan 11 17:09:39 2020 100.64.1.1:56134 Connection reset, restarting [0]\nSat Jan 11 17:09:43 2020 TCP connection established with [AF_INET]10.250.7.77:10786\nSat Jan 11 17:09:43 2020 10.250.7.77:10786 TCP connection established with [AF_INET]100.64.1.1:51726\nSat Jan 11 17:09:43 2020 10.250.7.77:10786 Connection reset, restarting [0]\nSat Jan 11 17:09:43 2020 100.64.1.1:51726 Connection reset, restarting [0]\nSat Jan 11 17:09:49 2020 TCP connection established with [AF_INET]10.250.7.77:26908\nSat Jan 11 17:09:49 2020 10.250.7.77:26908 TCP connection established with [AF_INET]100.64.1.1:56140\nSat Jan 11 17:09:49 2020 10.250.7.77:26908 Connection reset, restarting [0]\nSat Jan 11 17:09:49 2020 100.64.1.1:56140 Connection reset, restarting [0]\nSat Jan 11 17:09:53 2020 TCP connection established with [AF_INET]10.250.7.77:10798\nSat Jan 11 17:09:53 2020 10.250.7.77:10798 TCP connection established with [AF_INET]100.64.1.1:51738\nSat Jan 11 17:09:53 2020 10.250.7.77:10798 Connection reset, restarting [0]\nSat Jan 11 17:09:53 2020 100.64.1.1:51738 Connection reset, restarting [0]\nSat Jan 11 17:09:59 2020 TCP connection established with [AF_INET]10.250.7.77:26932\nSat Jan 11 17:09:59 2020 10.250.7.77:26932 TCP connection established with [AF_INET]100.64.1.1:56164\nSat Jan 11 17:09:59 2020 10.250.7.77:26932 Connection reset, restarting [0]\nSat Jan 11 17:09:59 2020 100.64.1.1:56164 Connection reset, restarting [0]\nSat Jan 11 17:10:03 2020 TCP connection established with [AF_INET]10.250.7.77:10812\nSat Jan 11 17:10:03 2020 10.250.7.77:10812 TCP connection established with [AF_INET]100.64.1.1:51752\nSat Jan 11 17:10:03 2020 10.250.7.77:10812 Connection reset, restarting [0]\nSat Jan 11 17:10:03 2020 100.64.1.1:51752 Connection reset, restarting [0]\nSat Jan 11 17:10:09 2020 TCP connection established with [AF_INET]10.250.7.77:26942\nSat Jan 11 17:10:09 2020 10.250.7.77:26942 TCP connection established with [AF_INET]100.64.1.1:56174\nSat Jan 11 17:10:09 2020 10.250.7.77:26942 Connection reset, restarting [0]\nSat Jan 11 17:10:09 2020 100.64.1.1:56174 Connection reset, restarting [0]\nSat Jan 11 17:10:13 2020 TCP connection established with [AF_INET]10.250.7.77:10820\nSat Jan 11 17:10:13 2020 10.250.7.77:10820 TCP connection established with [AF_INET]100.64.1.1:51760\nSat Jan 11 17:10:13 2020 10.250.7.77:10820 Connection reset, restarting [0]\nSat Jan 11 17:10:13 2020 100.64.1.1:51760 Connection reset, restarting [0]\nSat Jan 11 17:10:19 2020 TCP connection established with [AF_INET]10.250.7.77:26954\nSat Jan 11 17:10:19 2020 10.250.7.77:26954 TCP connection established with [AF_INET]100.64.1.1:56186\nSat Jan 11 17:10:19 2020 10.250.7.77:26954 Connection reset, restarting [0]\nSat Jan 11 17:10:19 2020 100.64.1.1:56186 
Connection reset, restarting [0]\nSat Jan 11 17:10:23 2020 TCP connection established with [AF_INET]10.250.7.77:10826\nSat Jan 11 17:10:23 2020 10.250.7.77:10826 TCP connection established with [AF_INET]100.64.1.1:51766\nSat Jan 11 17:10:23 2020 10.250.7.77:10826 Connection reset, restarting [0]\nSat Jan 11 17:10:23 2020 100.64.1.1:51766 Connection reset, restarting [0]\nSat Jan 11 17:10:29 2020 TCP connection established with [AF_INET]10.250.7.77:26958\nSat Jan 11 17:10:29 2020 10.250.7.77:26958 TCP connection established with [AF_INET]100.64.1.1:56190\nSat Jan 11 17:10:29 2020 10.250.7.77:26958 Connection reset, restarting [0]\nSat Jan 11 17:10:29 2020 100.64.1.1:56190 Connection reset, restarting [0]\nSat Jan 11 17:10:33 2020 TCP connection established with [AF_INET]10.250.7.77:10836\nSat Jan 11 17:10:33 2020 10.250.7.77:10836 TCP connection established with [AF_INET]100.64.1.1:51776\nSat Jan 11 17:10:33 2020 10.250.7.77:10836 Connection reset, restarting [0]\nSat Jan 11 17:10:33 2020 100.64.1.1:51776 Connection reset, restarting [0]\nSat Jan 11 17:10:39 2020 TCP connection established with [AF_INET]10.250.7.77:26966\nSat Jan 11 17:10:39 2020 10.250.7.77:26966 TCP connection established with [AF_INET]100.64.1.1:56198\nSat Jan 11 17:10:39 2020 10.250.7.77:26966 Connection reset, restarting [0]\nSat Jan 11 17:10:39 2020 100.64.1.1:56198 Connection reset, restarting [0]\nSat Jan 11 17:10:43 2020 TCP connection established with [AF_INET]10.250.7.77:10842\nSat Jan 11 17:10:43 2020 10.250.7.77:10842 TCP connection established with [AF_INET]100.64.1.1:51782\nSat Jan 11 17:10:43 2020 10.250.7.77:10842 Connection reset, restarting [0]\nSat Jan 11 17:10:43 2020 100.64.1.1:51782 Connection reset, restarting [0]\nSat Jan 11 17:10:49 2020 TCP connection established with [AF_INET]10.250.7.77:26984\nSat Jan 11 17:10:49 2020 10.250.7.77:26984 TCP connection established with [AF_INET]100.64.1.1:56216\nSat Jan 11 17:10:49 2020 10.250.7.77:26984 Connection reset, restarting [0]\nSat Jan 11 17:10:49 2020 100.64.1.1:56216 Connection reset, restarting [0]\nSat Jan 11 17:10:53 2020 TCP connection established with [AF_INET]10.250.7.77:10856\nSat Jan 11 17:10:53 2020 10.250.7.77:10856 TCP connection established with [AF_INET]100.64.1.1:51796\nSat Jan 11 17:10:53 2020 10.250.7.77:10856 Connection reset, restarting [0]\nSat Jan 11 17:10:53 2020 100.64.1.1:51796 Connection reset, restarting [0]\nSat Jan 11 17:10:59 2020 TCP connection established with [AF_INET]10.250.7.77:26996\nSat Jan 11 17:10:59 2020 10.250.7.77:26996 TCP connection established with [AF_INET]100.64.1.1:56228\nSat Jan 11 17:10:59 2020 10.250.7.77:26996 Connection reset, restarting [0]\nSat Jan 11 17:10:59 2020 100.64.1.1:56228 Connection reset, restarting [0]\nSat Jan 11 17:11:03 2020 TCP connection established with [AF_INET]10.250.7.77:10870\nSat Jan 11 17:11:03 2020 10.250.7.77:10870 TCP connection established with [AF_INET]100.64.1.1:51810\nSat Jan 11 17:11:03 2020 10.250.7.77:10870 Connection reset, restarting [0]\nSat Jan 11 17:11:03 2020 100.64.1.1:51810 Connection reset, restarting [0]\nSat Jan 11 17:11:09 2020 TCP connection established with [AF_INET]100.64.1.1:56238\nSat Jan 11 17:11:09 2020 100.64.1.1:56238 TCP connection established with [AF_INET]10.250.7.77:27006\nSat Jan 11 17:11:09 2020 100.64.1.1:56238 Connection reset, restarting [0]\nSat Jan 11 17:11:09 2020 10.250.7.77:27006 Connection reset, restarting [0]\nSat Jan 11 17:11:13 2020 TCP connection established with [AF_INET]10.250.7.77:10878\nSat Jan 11 17:11:13 2020 
10.250.7.77:10878 TCP connection established with [AF_INET]100.64.1.1:51818\nSat Jan 11 17:11:13 2020 10.250.7.77:10878 Connection reset, restarting [0]\nSat Jan 11 17:11:13 2020 100.64.1.1:51818 Connection reset, restarting [0]\nSat Jan 11 17:11:19 2020 TCP connection established with [AF_INET]10.250.7.77:27022\nSat Jan 11 17:11:19 2020 10.250.7.77:27022 TCP connection established with [AF_INET]100.64.1.1:56254\nSat Jan 11 17:11:19 2020 10.250.7.77:27022 Connection reset, restarting [0]\nSat Jan 11 17:11:19 2020 100.64.1.1:56254 Connection reset, restarting [0]\nSat Jan 11 17:11:23 2020 TCP connection established with [AF_INET]10.250.7.77:10884\nSat Jan 11 17:11:23 2020 10.250.7.77:10884 TCP connection established with [AF_INET]100.64.1.1:51824\nSat Jan 11 17:11:23 2020 10.250.7.77:10884 Connection reset, restarting [0]\nSat Jan 11 17:11:23 2020 100.64.1.1:51824 Connection reset, restarting [0]\nSat Jan 11 17:11:29 2020 TCP connection established with [AF_INET]10.250.7.77:27026\nSat Jan 11 17:11:29 2020 10.250.7.77:27026 TCP connection established with [AF_INET]100.64.1.1:56258\nSat Jan 11 17:11:29 2020 10.250.7.77:27026 Connection reset, restarting [0]\nSat Jan 11 17:11:29 2020 100.64.1.1:56258 Connection reset, restarting [0]\nSat Jan 11 17:11:33 2020 TCP connection established with [AF_INET]10.250.7.77:10894\nSat Jan 11 17:11:33 2020 10.250.7.77:10894 TCP connection established with [AF_INET]100.64.1.1:51834\nSat Jan 11 17:11:33 2020 10.250.7.77:10894 Connection reset, restarting [0]\nSat Jan 11 17:11:33 2020 100.64.1.1:51834 Connection reset, restarting [0]\nSat Jan 11 17:11:39 2020 TCP connection established with [AF_INET]10.250.7.77:27034\nSat Jan 11 17:11:39 2020 10.250.7.77:27034 TCP connection established with [AF_INET]100.64.1.1:56266\nSat Jan 11 17:11:39 2020 10.250.7.77:27034 Connection reset, restarting [0]\nSat Jan 11 17:11:39 2020 100.64.1.1:56266 Connection reset, restarting [0]\nSat Jan 11 17:11:43 2020 TCP connection established with [AF_INET]10.250.7.77:10900\nSat Jan 11 17:11:43 2020 10.250.7.77:10900 TCP connection established with [AF_INET]100.64.1.1:51840\nSat Jan 11 17:11:43 2020 10.250.7.77:10900 Connection reset, restarting [0]\nSat Jan 11 17:11:43 2020 100.64.1.1:51840 Connection reset, restarting [0]\nSat Jan 11 17:11:49 2020 TCP connection established with [AF_INET]10.250.7.77:27042\nSat Jan 11 17:11:49 2020 10.250.7.77:27042 TCP connection established with [AF_INET]100.64.1.1:56274\nSat Jan 11 17:11:49 2020 10.250.7.77:27042 Connection reset, restarting [0]\nSat Jan 11 17:11:49 2020 100.64.1.1:56274 Connection reset, restarting [0]\nSat Jan 11 17:11:53 2020 TCP connection established with [AF_INET]10.250.7.77:10910\nSat Jan 11 17:11:53 2020 10.250.7.77:10910 TCP connection established with [AF_INET]100.64.1.1:51850\nSat Jan 11 17:11:53 2020 10.250.7.77:10910 Connection reset, restarting [0]\nSat Jan 11 17:11:53 2020 100.64.1.1:51850 Connection reset, restarting [0]\nSat Jan 11 17:11:59 2020 TCP connection established with [AF_INET]10.250.7.77:27054\nSat Jan 11 17:11:59 2020 10.250.7.77:27054 TCP connection established with [AF_INET]100.64.1.1:56286\nSat Jan 11 17:11:59 2020 10.250.7.77:27054 Connection reset, restarting [0]\nSat Jan 11 17:11:59 2020 100.64.1.1:56286 Connection reset, restarting [0]\nSat Jan 11 17:12:03 2020 TCP connection established with [AF_INET]10.250.7.77:10924\nSat Jan 11 17:12:03 2020 10.250.7.77:10924 TCP connection established with [AF_INET]100.64.1.1:51864\nSat Jan 11 17:12:03 2020 10.250.7.77:10924 Connection reset, restarting 
[0]\nSat Jan 11 17:12:03 2020 100.64.1.1:51864 Connection reset, restarting [0]\nSat Jan 11 17:12:09 2020 TCP connection established with [AF_INET]10.250.7.77:27064\nSat Jan 11 17:12:09 2020 10.250.7.77:27064 TCP connection established with [AF_INET]100.64.1.1:56296\nSat Jan 11 17:12:09 2020 10.250.7.77:27064 Connection reset, restarting [0]\nSat Jan 11 17:12:09 2020 100.64.1.1:56296 Connection reset, restarting [0]\nSat Jan 11 17:12:13 2020 TCP connection established with [AF_INET]10.250.7.77:10936\nSat Jan 11 17:12:13 2020 10.250.7.77:10936 TCP connection established with [AF_INET]100.64.1.1:51876\nSat Jan 11 17:12:13 2020 10.250.7.77:10936 Connection reset, restarting [0]\nSat Jan 11 17:12:13 2020 100.64.1.1:51876 Connection reset, restarting [0]\nSat Jan 11 17:12:19 2020 TCP connection established with [AF_INET]100.64.1.1:56308\nSat Jan 11 17:12:19 2020 100.64.1.1:56308 TCP connection established with [AF_INET]10.250.7.77:27076\nSat Jan 11 17:12:19 2020 100.64.1.1:56308 Connection reset, restarting [0]\nSat Jan 11 17:12:19 2020 10.250.7.77:27076 Connection reset, restarting [0]\nSat Jan 11 17:12:23 2020 TCP connection established with [AF_INET]10.250.7.77:10942\nSat Jan 11 17:12:23 2020 10.250.7.77:10942 TCP connection established with [AF_INET]100.64.1.1:51882\nSat Jan 11 17:12:23 2020 10.250.7.77:10942 Connection reset, restarting [0]\nSat Jan 11 17:12:23 2020 100.64.1.1:51882 Connection reset, restarting [0]\nSat Jan 11 17:12:29 2020 TCP connection established with [AF_INET]10.250.7.77:27084\nSat Jan 11 17:12:29 2020 10.250.7.77:27084 TCP connection established with [AF_INET]100.64.1.1:56316\nSat Jan 11 17:12:29 2020 10.250.7.77:27084 Connection reset, restarting [0]\nSat Jan 11 17:12:29 2020 100.64.1.1:56316 Connection reset, restarting [0]\nSat Jan 11 17:12:33 2020 TCP connection established with [AF_INET]10.250.7.77:10954\nSat Jan 11 17:12:33 2020 10.250.7.77:10954 TCP connection established with [AF_INET]100.64.1.1:51894\nSat Jan 11 17:12:33 2020 10.250.7.77:10954 Connection reset, restarting [0]\nSat Jan 11 17:12:33 2020 100.64.1.1:51894 Connection reset, restarting [0]\nSat Jan 11 17:12:39 2020 TCP connection established with [AF_INET]10.250.7.77:27092\nSat Jan 11 17:12:39 2020 10.250.7.77:27092 TCP connection established with [AF_INET]100.64.1.1:56324\nSat Jan 11 17:12:39 2020 10.250.7.77:27092 Connection reset, restarting [0]\nSat Jan 11 17:12:39 2020 100.64.1.1:56324 Connection reset, restarting [0]\nSat Jan 11 17:12:43 2020 TCP connection established with [AF_INET]10.250.7.77:10958\nSat Jan 11 17:12:43 2020 10.250.7.77:10958 TCP connection established with [AF_INET]100.64.1.1:51898\nSat Jan 11 17:12:43 2020 10.250.7.77:10958 Connection reset, restarting [0]\nSat Jan 11 17:12:43 2020 100.64.1.1:51898 Connection reset, restarting [0]\nSat Jan 11 17:12:49 2020 TCP connection established with [AF_INET]10.250.7.77:27100\nSat Jan 11 17:12:49 2020 10.250.7.77:27100 TCP connection established with [AF_INET]100.64.1.1:56332\nSat Jan 11 17:12:49 2020 10.250.7.77:27100 Connection reset, restarting [0]\nSat Jan 11 17:12:49 2020 100.64.1.1:56332 Connection reset, restarting [0]\nSat Jan 11 17:12:53 2020 TCP connection established with [AF_INET]10.250.7.77:10968\nSat Jan 11 17:12:53 2020 10.250.7.77:10968 TCP connection established with [AF_INET]100.64.1.1:51908\nSat Jan 11 17:12:53 2020 10.250.7.77:10968 Connection reset, restarting [0]\nSat Jan 11 17:12:53 2020 100.64.1.1:51908 Connection reset, restarting [0]\nSat Jan 11 17:12:59 2020 TCP connection established with 
[AF_INET]10.250.7.77:27112
[OpenVPN server log, Sat Jan 11 17:12:59 2020 through Sat Jan 11 17:29:23 2020, condensed: the same cycle repeats roughly every 4 to 6 seconds. Each cycle logs "TCP connection established with [AF_INET]10.250.7.77:<port>" and "TCP connection established with [AF_INET]100.64.1.1:<port>" (occasionally in the opposite order), each immediately followed by "Connection reset, restarting [0]" for that peer. The only deviation: a connection from 100.64.1.1:56974, established at 17:22:39, never completes the handshake and is terminated at 17:23:39 with "TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)", "TLS Error: TLS handshake failed", and "Fatal TLS error (check_tls_errors_co), restarting". The reset cycle then continues unchanged.]
Sat Jan 11 17:29:23 2020 
10.250.7.77:12042 Connection reset, restarting [0]\nSat Jan 11 17:29:23 2020 100.64.1.1:52982 Connection reset, restarting [0]\nSat Jan 11 17:29:29 2020 TCP connection established with [AF_INET]10.250.7.77:28182\nSat Jan 11 17:29:29 2020 10.250.7.77:28182 TCP connection established with [AF_INET]100.64.1.1:57414\nSat Jan 11 17:29:29 2020 10.250.7.77:28182 Connection reset, restarting [0]\nSat Jan 11 17:29:29 2020 100.64.1.1:57414 Connection reset, restarting [0]\nSat Jan 11 17:29:33 2020 TCP connection established with [AF_INET]10.250.7.77:12052\nSat Jan 11 17:29:33 2020 10.250.7.77:12052 TCP connection established with [AF_INET]100.64.1.1:52992\nSat Jan 11 17:29:33 2020 10.250.7.77:12052 Connection reset, restarting [0]\nSat Jan 11 17:29:33 2020 100.64.1.1:52992 Connection reset, restarting [0]\nSat Jan 11 17:29:39 2020 TCP connection established with [AF_INET]10.250.7.77:28190\nSat Jan 11 17:29:39 2020 10.250.7.77:28190 TCP connection established with [AF_INET]100.64.1.1:57422\nSat Jan 11 17:29:39 2020 10.250.7.77:28190 Connection reset, restarting [0]\nSat Jan 11 17:29:39 2020 100.64.1.1:57422 Connection reset, restarting [0]\nSat Jan 11 17:29:43 2020 TCP connection established with [AF_INET]10.250.7.77:12060\nSat Jan 11 17:29:43 2020 10.250.7.77:12060 TCP connection established with [AF_INET]100.64.1.1:53000\nSat Jan 11 17:29:43 2020 10.250.7.77:12060 Connection reset, restarting [0]\nSat Jan 11 17:29:43 2020 100.64.1.1:53000 Connection reset, restarting [0]\nSat Jan 11 17:29:49 2020 TCP connection established with [AF_INET]10.250.7.77:28196\nSat Jan 11 17:29:49 2020 10.250.7.77:28196 TCP connection established with [AF_INET]100.64.1.1:57428\nSat Jan 11 17:29:49 2020 10.250.7.77:28196 Connection reset, restarting [0]\nSat Jan 11 17:29:49 2020 100.64.1.1:57428 Connection reset, restarting [0]\nSat Jan 11 17:29:53 2020 TCP connection established with [AF_INET]10.250.7.77:12070\nSat Jan 11 17:29:53 2020 10.250.7.77:12070 TCP connection established with [AF_INET]100.64.1.1:53010\nSat Jan 11 17:29:53 2020 10.250.7.77:12070 Connection reset, restarting [0]\nSat Jan 11 17:29:53 2020 100.64.1.1:53010 Connection reset, restarting [0]\nSat Jan 11 17:29:59 2020 TCP connection established with [AF_INET]10.250.7.77:28212\nSat Jan 11 17:29:59 2020 10.250.7.77:28212 TCP connection established with [AF_INET]100.64.1.1:57444\nSat Jan 11 17:29:59 2020 10.250.7.77:28212 Connection reset, restarting [0]\nSat Jan 11 17:29:59 2020 100.64.1.1:57444 Connection reset, restarting [0]\nSat Jan 11 17:30:03 2020 TCP connection established with [AF_INET]10.250.7.77:12084\nSat Jan 11 17:30:03 2020 10.250.7.77:12084 TCP connection established with [AF_INET]100.64.1.1:53024\nSat Jan 11 17:30:03 2020 10.250.7.77:12084 Connection reset, restarting [0]\nSat Jan 11 17:30:03 2020 100.64.1.1:53024 Connection reset, restarting [0]\nSat Jan 11 17:30:09 2020 TCP connection established with [AF_INET]10.250.7.77:28222\nSat Jan 11 17:30:09 2020 10.250.7.77:28222 TCP connection established with [AF_INET]100.64.1.1:57454\nSat Jan 11 17:30:09 2020 10.250.7.77:28222 Connection reset, restarting [0]\nSat Jan 11 17:30:09 2020 100.64.1.1:57454 Connection reset, restarting [0]\nSat Jan 11 17:30:13 2020 TCP connection established with [AF_INET]10.250.7.77:12092\nSat Jan 11 17:30:13 2020 10.250.7.77:12092 Connection reset, restarting [0]\nSat Jan 11 17:30:13 2020 TCP connection established with [AF_INET]100.64.1.1:53032\nSat Jan 11 17:30:13 2020 100.64.1.1:53032 Connection reset, restarting [0]\nSat Jan 11 17:30:19 2020 TCP connection 
established with [AF_INET]10.250.7.77:28234\nSat Jan 11 17:30:19 2020 10.250.7.77:28234 TCP connection established with [AF_INET]100.64.1.1:57466\nSat Jan 11 17:30:19 2020 10.250.7.77:28234 Connection reset, restarting [0]\nSat Jan 11 17:30:19 2020 100.64.1.1:57466 Connection reset, restarting [0]\nSat Jan 11 17:30:23 2020 TCP connection established with [AF_INET]10.250.7.77:12100\nSat Jan 11 17:30:23 2020 10.250.7.77:12100 TCP connection established with [AF_INET]100.64.1.1:53040\nSat Jan 11 17:30:23 2020 10.250.7.77:12100 Connection reset, restarting [0]\nSat Jan 11 17:30:23 2020 100.64.1.1:53040 Connection reset, restarting [0]\nSat Jan 11 17:30:29 2020 TCP connection established with [AF_INET]10.250.7.77:28240\nSat Jan 11 17:30:29 2020 10.250.7.77:28240 TCP connection established with [AF_INET]100.64.1.1:57472\nSat Jan 11 17:30:29 2020 10.250.7.77:28240 Connection reset, restarting [0]\nSat Jan 11 17:30:29 2020 100.64.1.1:57472 Connection reset, restarting [0]\nSat Jan 11 17:30:33 2020 TCP connection established with [AF_INET]10.250.7.77:12110\nSat Jan 11 17:30:33 2020 10.250.7.77:12110 TCP connection established with [AF_INET]100.64.1.1:53050\nSat Jan 11 17:30:33 2020 10.250.7.77:12110 Connection reset, restarting [0]\nSat Jan 11 17:30:33 2020 100.64.1.1:53050 Connection reset, restarting [0]\nSat Jan 11 17:30:39 2020 TCP connection established with [AF_INET]10.250.7.77:28248\nSat Jan 11 17:30:39 2020 10.250.7.77:28248 TCP connection established with [AF_INET]100.64.1.1:57480\nSat Jan 11 17:30:39 2020 10.250.7.77:28248 Connection reset, restarting [0]\nSat Jan 11 17:30:39 2020 100.64.1.1:57480 Connection reset, restarting [0]\nSat Jan 11 17:30:43 2020 TCP connection established with [AF_INET]10.250.7.77:12114\nSat Jan 11 17:30:43 2020 10.250.7.77:12114 TCP connection established with [AF_INET]100.64.1.1:53054\nSat Jan 11 17:30:43 2020 10.250.7.77:12114 Connection reset, restarting [0]\nSat Jan 11 17:30:43 2020 100.64.1.1:53054 Connection reset, restarting [0]\nSat Jan 11 17:30:49 2020 TCP connection established with [AF_INET]10.250.7.77:28254\nSat Jan 11 17:30:49 2020 10.250.7.77:28254 Connection reset, restarting [0]\nSat Jan 11 17:30:49 2020 TCP connection established with [AF_INET]100.64.1.1:57486\nSat Jan 11 17:30:49 2020 100.64.1.1:57486 Connection reset, restarting [0]\nSat Jan 11 17:30:53 2020 TCP connection established with [AF_INET]10.250.7.77:12128\nSat Jan 11 17:30:53 2020 10.250.7.77:12128 TCP connection established with [AF_INET]100.64.1.1:53068\nSat Jan 11 17:30:53 2020 10.250.7.77:12128 Connection reset, restarting [0]\nSat Jan 11 17:30:53 2020 100.64.1.1:53068 Connection reset, restarting [0]\nSat Jan 11 17:30:59 2020 TCP connection established with [AF_INET]10.250.7.77:28266\nSat Jan 11 17:30:59 2020 10.250.7.77:28266 TCP connection established with [AF_INET]100.64.1.1:57498\nSat Jan 11 17:30:59 2020 10.250.7.77:28266 Connection reset, restarting [0]\nSat Jan 11 17:30:59 2020 100.64.1.1:57498 Connection reset, restarting [0]\nSat Jan 11 17:31:03 2020 TCP connection established with [AF_INET]10.250.7.77:12142\nSat Jan 11 17:31:03 2020 10.250.7.77:12142 TCP connection established with [AF_INET]100.64.1.1:53082\nSat Jan 11 17:31:03 2020 10.250.7.77:12142 Connection reset, restarting [0]\nSat Jan 11 17:31:03 2020 100.64.1.1:53082 Connection reset, restarting [0]\nSat Jan 11 17:31:09 2020 TCP connection established with [AF_INET]10.250.7.77:28276\nSat Jan 11 17:31:09 2020 10.250.7.77:28276 TCP connection established with [AF_INET]100.64.1.1:57508\nSat Jan 11 17:31:09 2020 
10.250.7.77:28276 Connection reset, restarting [0]\nSat Jan 11 17:31:09 2020 100.64.1.1:57508 Connection reset, restarting [0]\nSat Jan 11 17:31:13 2020 TCP connection established with [AF_INET]10.250.7.77:12152\nSat Jan 11 17:31:13 2020 10.250.7.77:12152 TCP connection established with [AF_INET]100.64.1.1:53092\nSat Jan 11 17:31:13 2020 10.250.7.77:12152 Connection reset, restarting [0]\nSat Jan 11 17:31:13 2020 100.64.1.1:53092 Connection reset, restarting [0]\nSat Jan 11 17:31:19 2020 TCP connection established with [AF_INET]10.250.7.77:28292\nSat Jan 11 17:31:19 2020 10.250.7.77:28292 TCP connection established with [AF_INET]100.64.1.1:57524\nSat Jan 11 17:31:19 2020 10.250.7.77:28292 Connection reset, restarting [0]\nSat Jan 11 17:31:19 2020 100.64.1.1:57524 Connection reset, restarting [0]\nSat Jan 11 17:31:23 2020 TCP connection established with [AF_INET]10.250.7.77:12158\nSat Jan 11 17:31:23 2020 10.250.7.77:12158 TCP connection established with [AF_INET]100.64.1.1:53098\nSat Jan 11 17:31:23 2020 10.250.7.77:12158 Connection reset, restarting [0]\nSat Jan 11 17:31:23 2020 100.64.1.1:53098 Connection reset, restarting [0]\nSat Jan 11 17:31:29 2020 TCP connection established with [AF_INET]10.250.7.77:28298\nSat Jan 11 17:31:29 2020 10.250.7.77:28298 TCP connection established with [AF_INET]100.64.1.1:57530\nSat Jan 11 17:31:29 2020 10.250.7.77:28298 Connection reset, restarting [0]\nSat Jan 11 17:31:29 2020 100.64.1.1:57530 Connection reset, restarting [0]\nSat Jan 11 17:31:33 2020 TCP connection established with [AF_INET]10.250.7.77:12168\nSat Jan 11 17:31:33 2020 10.250.7.77:12168 TCP connection established with [AF_INET]100.64.1.1:53108\nSat Jan 11 17:31:33 2020 10.250.7.77:12168 Connection reset, restarting [0]\nSat Jan 11 17:31:33 2020 100.64.1.1:53108 Connection reset, restarting [0]\nSat Jan 11 17:31:39 2020 TCP connection established with [AF_INET]10.250.7.77:28306\nSat Jan 11 17:31:39 2020 10.250.7.77:28306 TCP connection established with [AF_INET]100.64.1.1:57538\nSat Jan 11 17:31:39 2020 10.250.7.77:28306 Connection reset, restarting [0]\nSat Jan 11 17:31:39 2020 100.64.1.1:57538 Connection reset, restarting [0]\nSat Jan 11 17:31:43 2020 TCP connection established with [AF_INET]10.250.7.77:12172\nSat Jan 11 17:31:43 2020 10.250.7.77:12172 TCP connection established with [AF_INET]100.64.1.1:53112\nSat Jan 11 17:31:43 2020 10.250.7.77:12172 Connection reset, restarting [0]\nSat Jan 11 17:31:43 2020 100.64.1.1:53112 Connection reset, restarting [0]\nSat Jan 11 17:31:49 2020 TCP connection established with [AF_INET]10.250.7.77:28312\nSat Jan 11 17:31:49 2020 10.250.7.77:28312 TCP connection established with [AF_INET]100.64.1.1:57544\nSat Jan 11 17:31:49 2020 10.250.7.77:28312 Connection reset, restarting [0]\nSat Jan 11 17:31:49 2020 100.64.1.1:57544 Connection reset, restarting [0]\nSat Jan 11 17:31:53 2020 TCP connection established with [AF_INET]10.250.7.77:12182\nSat Jan 11 17:31:53 2020 10.250.7.77:12182 TCP connection established with [AF_INET]100.64.1.1:53122\nSat Jan 11 17:31:53 2020 10.250.7.77:12182 Connection reset, restarting [0]\nSat Jan 11 17:31:53 2020 100.64.1.1:53122 Connection reset, restarting [0]\nSat Jan 11 17:31:59 2020 TCP connection established with [AF_INET]10.250.7.77:28324\nSat Jan 11 17:31:59 2020 10.250.7.77:28324 TCP connection established with [AF_INET]100.64.1.1:57556\nSat Jan 11 17:31:59 2020 10.250.7.77:28324 Connection reset, restarting [0]\nSat Jan 11 17:31:59 2020 100.64.1.1:57556 Connection reset, restarting [0]\nSat Jan 11 17:32:03 2020 
TCP connection established with [AF_INET]10.250.7.77:12196\nSat Jan 11 17:32:03 2020 10.250.7.77:12196 TCP connection established with [AF_INET]100.64.1.1:53136\nSat Jan 11 17:32:03 2020 10.250.7.77:12196 Connection reset, restarting [0]\nSat Jan 11 17:32:03 2020 100.64.1.1:53136 Connection reset, restarting [0]\nSat Jan 11 17:32:09 2020 TCP connection established with [AF_INET]10.250.7.77:28334\nSat Jan 11 17:32:09 2020 10.250.7.77:28334 TCP connection established with [AF_INET]100.64.1.1:57566\nSat Jan 11 17:32:09 2020 10.250.7.77:28334 Connection reset, restarting [0]\nSat Jan 11 17:32:09 2020 100.64.1.1:57566 Connection reset, restarting [0]\nSat Jan 11 17:32:13 2020 TCP connection established with [AF_INET]10.250.7.77:12210\nSat Jan 11 17:32:13 2020 10.250.7.77:12210 TCP connection established with [AF_INET]100.64.1.1:53150\nSat Jan 11 17:32:13 2020 10.250.7.77:12210 Connection reset, restarting [0]\nSat Jan 11 17:32:13 2020 100.64.1.1:53150 Connection reset, restarting [0]\nSat Jan 11 17:32:19 2020 TCP connection established with [AF_INET]10.250.7.77:28348\nSat Jan 11 17:32:19 2020 10.250.7.77:28348 TCP connection established with [AF_INET]100.64.1.1:57580\nSat Jan 11 17:32:19 2020 10.250.7.77:28348 Connection reset, restarting [0]\nSat Jan 11 17:32:19 2020 100.64.1.1:57580 Connection reset, restarting [0]\nSat Jan 11 17:32:23 2020 TCP connection established with [AF_INET]10.250.7.77:12216\nSat Jan 11 17:32:23 2020 10.250.7.77:12216 TCP connection established with [AF_INET]100.64.1.1:53156\nSat Jan 11 17:32:23 2020 10.250.7.77:12216 Connection reset, restarting [0]\nSat Jan 11 17:32:23 2020 100.64.1.1:53156 Connection reset, restarting [0]\nSat Jan 11 17:32:29 2020 TCP connection established with [AF_INET]10.250.7.77:28356\nSat Jan 11 17:32:29 2020 10.250.7.77:28356 Connection reset, restarting [0]\nSat Jan 11 17:32:29 2020 TCP connection established with [AF_INET]100.64.1.1:57588\nSat Jan 11 17:32:29 2020 100.64.1.1:57588 Connection reset, restarting [0]\nSat Jan 11 17:32:33 2020 TCP connection established with [AF_INET]10.250.7.77:12226\nSat Jan 11 17:32:33 2020 10.250.7.77:12226 TCP connection established with [AF_INET]100.64.1.1:53166\nSat Jan 11 17:32:33 2020 10.250.7.77:12226 Connection reset, restarting [0]\nSat Jan 11 17:32:33 2020 100.64.1.1:53166 Connection reset, restarting [0]\nSat Jan 11 17:32:39 2020 TCP connection established with [AF_INET]10.250.7.77:28364\nSat Jan 11 17:32:39 2020 10.250.7.77:28364 TCP connection established with [AF_INET]100.64.1.1:57596\nSat Jan 11 17:32:39 2020 10.250.7.77:28364 Connection reset, restarting [0]\nSat Jan 11 17:32:39 2020 100.64.1.1:57596 Connection reset, restarting [0]\nSat Jan 11 17:32:43 2020 TCP connection established with [AF_INET]10.250.7.77:12240\nSat Jan 11 17:32:43 2020 10.250.7.77:12240 TCP connection established with [AF_INET]100.64.1.1:53180\nSat Jan 11 17:32:43 2020 10.250.7.77:12240 Connection reset, restarting [0]\nSat Jan 11 17:32:43 2020 100.64.1.1:53180 Connection reset, restarting [0]\nSat Jan 11 17:32:49 2020 TCP connection established with [AF_INET]10.250.7.77:28370\nSat Jan 11 17:32:49 2020 10.250.7.77:28370 TCP connection established with [AF_INET]100.64.1.1:57602\nSat Jan 11 17:32:49 2020 10.250.7.77:28370 Connection reset, restarting [0]\nSat Jan 11 17:32:49 2020 100.64.1.1:57602 Connection reset, restarting [0]\nSat Jan 11 17:32:53 2020 TCP connection established with [AF_INET]10.250.7.77:12250\nSat Jan 11 17:32:53 2020 10.250.7.77:12250 TCP connection established with [AF_INET]100.64.1.1:53190\nSat Jan 11 
17:32:53 2020 10.250.7.77:12250 Connection reset, restarting [0]\nSat Jan 11 17:32:53 2020 100.64.1.1:53190 Connection reset, restarting [0]\nSat Jan 11 17:32:59 2020 TCP connection established with [AF_INET]10.250.7.77:28382\nSat Jan 11 17:32:59 2020 10.250.7.77:28382 TCP connection established with [AF_INET]100.64.1.1:57614\nSat Jan 11 17:32:59 2020 10.250.7.77:28382 Connection reset, restarting [0]\nSat Jan 11 17:32:59 2020 100.64.1.1:57614 Connection reset, restarting [0]\nSat Jan 11 17:33:03 2020 TCP connection established with [AF_INET]10.250.7.77:12266\nSat Jan 11 17:33:03 2020 10.250.7.77:12266 TCP connection established with [AF_INET]100.64.1.1:53206\nSat Jan 11 17:33:03 2020 10.250.7.77:12266 Connection reset, restarting [0]\nSat Jan 11 17:33:03 2020 100.64.1.1:53206 Connection reset, restarting [0]\nSat Jan 11 17:33:09 2020 TCP connection established with [AF_INET]10.250.7.77:28400\nSat Jan 11 17:33:09 2020 10.250.7.77:28400 TCP connection established with [AF_INET]100.64.1.1:57632\nSat Jan 11 17:33:09 2020 10.250.7.77:28400 Connection reset, restarting [0]\nSat Jan 11 17:33:09 2020 100.64.1.1:57632 Connection reset, restarting [0]\nSat Jan 11 17:33:13 2020 TCP connection established with [AF_INET]10.250.7.77:12276\nSat Jan 11 17:33:13 2020 10.250.7.77:12276 Connection reset, restarting [0]\nSat Jan 11 17:33:13 2020 TCP connection established with [AF_INET]100.64.1.1:53216\nSat Jan 11 17:33:13 2020 100.64.1.1:53216 Connection reset, restarting [0]\nSat Jan 11 17:33:19 2020 TCP connection established with [AF_INET]10.250.7.77:28414\nSat Jan 11 17:33:19 2020 10.250.7.77:28414 TCP connection established with [AF_INET]100.64.1.1:57646\nSat Jan 11 17:33:19 2020 10.250.7.77:28414 Connection reset, restarting [0]\nSat Jan 11 17:33:19 2020 100.64.1.1:57646 Connection reset, restarting [0]\nSat Jan 11 17:33:23 2020 TCP connection established with [AF_INET]10.250.7.77:12286\nSat Jan 11 17:33:23 2020 10.250.7.77:12286 TCP connection established with [AF_INET]100.64.1.1:53226\nSat Jan 11 17:33:23 2020 10.250.7.77:12286 Connection reset, restarting [0]\nSat Jan 11 17:33:23 2020 100.64.1.1:53226 Connection reset, restarting [0]\nSat Jan 11 17:33:29 2020 TCP connection established with [AF_INET]10.250.7.77:28418\nSat Jan 11 17:33:29 2020 10.250.7.77:28418 TCP connection established with [AF_INET]100.64.1.1:57650\nSat Jan 11 17:33:29 2020 10.250.7.77:28418 Connection reset, restarting [0]\nSat Jan 11 17:33:29 2020 100.64.1.1:57650 Connection reset, restarting [0]\nSat Jan 11 17:33:33 2020 TCP connection established with [AF_INET]10.250.7.77:12298\nSat Jan 11 17:33:33 2020 10.250.7.77:12298 TCP connection established with [AF_INET]100.64.1.1:53238\nSat Jan 11 17:33:33 2020 10.250.7.77:12298 Connection reset, restarting [0]\nSat Jan 11 17:33:33 2020 100.64.1.1:53238 Connection reset, restarting [0]\nSat Jan 11 17:33:39 2020 TCP connection established with [AF_INET]10.250.7.77:28426\nSat Jan 11 17:33:39 2020 10.250.7.77:28426 TCP connection established with [AF_INET]100.64.1.1:57658\nSat Jan 11 17:33:39 2020 10.250.7.77:28426 Connection reset, restarting [0]\nSat Jan 11 17:33:39 2020 100.64.1.1:57658 Connection reset, restarting [0]\nSat Jan 11 17:33:43 2020 TCP connection established with [AF_INET]10.250.7.77:12302\nSat Jan 11 17:33:43 2020 10.250.7.77:12302 TCP connection established with [AF_INET]100.64.1.1:53242\nSat Jan 11 17:33:43 2020 10.250.7.77:12302 Connection reset, restarting [0]\nSat Jan 11 17:33:43 2020 100.64.1.1:53242 Connection reset, restarting [0]\nSat Jan 11 17:33:49 2020 TCP 
connection established with [AF_INET]10.250.7.77:28436\nSat Jan 11 17:33:49 2020 10.250.7.77:28436 TCP connection established with [AF_INET]100.64.1.1:57668\nSat Jan 11 17:33:49 2020 10.250.7.77:28436 Connection reset, restarting [0]\nSat Jan 11 17:33:49 2020 100.64.1.1:57668 Connection reset, restarting [0]\nSat Jan 11 17:33:53 2020 TCP connection established with [AF_INET]10.250.7.77:12312\nSat Jan 11 17:33:53 2020 10.250.7.77:12312 TCP connection established with [AF_INET]100.64.1.1:53252\nSat Jan 11 17:33:53 2020 10.250.7.77:12312 Connection reset, restarting [0]\nSat Jan 11 17:33:53 2020 100.64.1.1:53252 Connection reset, restarting [0]\nSat Jan 11 17:33:59 2020 TCP connection established with [AF_INET]10.250.7.77:28448\nSat Jan 11 17:33:59 2020 10.250.7.77:28448 TCP connection established with [AF_INET]100.64.1.1:57680\nSat Jan 11 17:33:59 2020 10.250.7.77:28448 Connection reset, restarting [0]\nSat Jan 11 17:33:59 2020 100.64.1.1:57680 Connection reset, restarting [0]\nSat Jan 11 17:34:03 2020 TCP connection established with [AF_INET]10.250.7.77:12330\nSat Jan 11 17:34:03 2020 10.250.7.77:12330 TCP connection established with [AF_INET]100.64.1.1:53270\nSat Jan 11 17:34:03 2020 10.250.7.77:12330 Connection reset, restarting [0]\nSat Jan 11 17:34:03 2020 100.64.1.1:53270 Connection reset, restarting [0]\nSat Jan 11 17:34:09 2020 TCP connection established with [AF_INET]10.250.7.77:28460\nSat Jan 11 17:34:09 2020 10.250.7.77:28460 TCP connection established with [AF_INET]100.64.1.1:57692\nSat Jan 11 17:34:09 2020 10.250.7.77:28460 Connection reset, restarting [0]\nSat Jan 11 17:34:09 2020 100.64.1.1:57692 Connection reset, restarting [0]\nSat Jan 11 17:34:13 2020 TCP connection established with [AF_INET]10.250.7.77:12338\nSat Jan 11 17:34:13 2020 10.250.7.77:12338 TCP connection established with [AF_INET]100.64.1.1:53278\nSat Jan 11 17:34:13 2020 10.250.7.77:12338 Connection reset, restarting [0]\nSat Jan 11 17:34:13 2020 100.64.1.1:53278 Connection reset, restarting [0]\nSat Jan 11 17:34:19 2020 TCP connection established with [AF_INET]10.250.7.77:28472\nSat Jan 11 17:34:19 2020 10.250.7.77:28472 TCP connection established with [AF_INET]100.64.1.1:57704\nSat Jan 11 17:34:19 2020 10.250.7.77:28472 Connection reset, restarting [0]\nSat Jan 11 17:34:19 2020 100.64.1.1:57704 Connection reset, restarting [0]\nSat Jan 11 17:34:23 2020 TCP connection established with [AF_INET]10.250.7.77:12344\nSat Jan 11 17:34:23 2020 10.250.7.77:12344 TCP connection established with [AF_INET]100.64.1.1:53284\nSat Jan 11 17:34:23 2020 10.250.7.77:12344 Connection reset, restarting [0]\nSat Jan 11 17:34:23 2020 100.64.1.1:53284 Connection reset, restarting [0]\nSat Jan 11 17:34:29 2020 TCP connection established with [AF_INET]10.250.7.77:28476\nSat Jan 11 17:34:29 2020 10.250.7.77:28476 TCP connection established with [AF_INET]100.64.1.1:57708\nSat Jan 11 17:34:29 2020 10.250.7.77:28476 Connection reset, restarting [0]\nSat Jan 11 17:34:29 2020 100.64.1.1:57708 Connection reset, restarting [0]\nSat Jan 11 17:34:33 2020 TCP connection established with [AF_INET]10.250.7.77:12354\nSat Jan 11 17:34:33 2020 10.250.7.77:12354 TCP connection established with [AF_INET]100.64.1.1:53294\nSat Jan 11 17:34:33 2020 10.250.7.77:12354 Connection reset, restarting [0]\nSat Jan 11 17:34:33 2020 100.64.1.1:53294 Connection reset, restarting [0]\nSat Jan 11 17:34:39 2020 TCP connection established with [AF_INET]10.250.7.77:28494\nSat Jan 11 17:34:39 2020 10.250.7.77:28494 TCP connection established with 
[AF_INET]100.64.1.1:57726\nSat Jan 11 17:34:39 2020 10.250.7.77:28494 Connection reset, restarting [0]\nSat Jan 11 17:34:39 2020 100.64.1.1:57726 Connection reset, restarting [0]\nSat Jan 11 17:34:43 2020 TCP connection established with [AF_INET]10.250.7.77:12364\nSat Jan 11 17:34:43 2020 10.250.7.77:12364 TCP connection established with [AF_INET]100.64.1.1:53304\nSat Jan 11 17:34:43 2020 10.250.7.77:12364 Connection reset, restarting [0]\nSat Jan 11 17:34:43 2020 100.64.1.1:53304 Connection reset, restarting [0]\nSat Jan 11 17:34:49 2020 TCP connection established with [AF_INET]10.250.7.77:28500\nSat Jan 11 17:34:49 2020 10.250.7.77:28500 TCP connection established with [AF_INET]100.64.1.1:57732\nSat Jan 11 17:34:49 2020 10.250.7.77:28500 Connection reset, restarting [0]\nSat Jan 11 17:34:49 2020 100.64.1.1:57732 Connection reset, restarting [0]\nSat Jan 11 17:34:53 2020 TCP connection established with [AF_INET]10.250.7.77:12374\nSat Jan 11 17:34:53 2020 10.250.7.77:12374 TCP connection established with [AF_INET]100.64.1.1:53314\nSat Jan 11 17:34:53 2020 10.250.7.77:12374 Connection reset, restarting [0]\nSat Jan 11 17:34:53 2020 100.64.1.1:53314 Connection reset, restarting [0]\nSat Jan 11 17:34:59 2020 TCP connection established with [AF_INET]10.250.7.77:28516\nSat Jan 11 17:34:59 2020 10.250.7.77:28516 TCP connection established with [AF_INET]100.64.1.1:57748\nSat Jan 11 17:34:59 2020 10.250.7.77:28516 Connection reset, restarting [0]\nSat Jan 11 17:34:59 2020 100.64.1.1:57748 Connection reset, restarting [0]\nSat Jan 11 17:35:03 2020 TCP connection established with [AF_INET]10.250.7.77:12390\nSat Jan 11 17:35:03 2020 10.250.7.77:12390 TCP connection established with [AF_INET]100.64.1.1:53330\nSat Jan 11 17:35:03 2020 10.250.7.77:12390 Connection reset, restarting [0]\nSat Jan 11 17:35:03 2020 100.64.1.1:53330 Connection reset, restarting [0]\nSat Jan 11 17:35:09 2020 TCP connection established with [AF_INET]10.250.7.77:28528\nSat Jan 11 17:35:09 2020 10.250.7.77:28528 TCP connection established with [AF_INET]100.64.1.1:57760\nSat Jan 11 17:35:09 2020 10.250.7.77:28528 Connection reset, restarting [0]\nSat Jan 11 17:35:09 2020 100.64.1.1:57760 Connection reset, restarting [0]\nSat Jan 11 17:35:13 2020 TCP connection established with [AF_INET]10.250.7.77:12398\nSat Jan 11 17:35:13 2020 10.250.7.77:12398 TCP connection established with [AF_INET]100.64.1.1:53338\nSat Jan 11 17:35:13 2020 10.250.7.77:12398 Connection reset, restarting [0]\nSat Jan 11 17:35:13 2020 100.64.1.1:53338 Connection reset, restarting [0]\nSat Jan 11 17:35:19 2020 TCP connection established with [AF_INET]10.250.7.77:28540\nSat Jan 11 17:35:19 2020 10.250.7.77:28540 TCP connection established with [AF_INET]100.64.1.1:57772\nSat Jan 11 17:35:19 2020 10.250.7.77:28540 Connection reset, restarting [0]\nSat Jan 11 17:35:19 2020 100.64.1.1:57772 Connection reset, restarting [0]\nSat Jan 11 17:35:23 2020 TCP connection established with [AF_INET]10.250.7.77:12412\nSat Jan 11 17:35:23 2020 10.250.7.77:12412 TCP connection established with [AF_INET]100.64.1.1:53352\nSat Jan 11 17:35:23 2020 10.250.7.77:12412 Connection reset, restarting [0]\nSat Jan 11 17:35:23 2020 100.64.1.1:53352 Connection reset, restarting [0]\nSat Jan 11 17:35:29 2020 TCP connection established with [AF_INET]10.250.7.77:28544\nSat Jan 11 17:35:29 2020 10.250.7.77:28544 TCP connection established with [AF_INET]100.64.1.1:57776\nSat Jan 11 17:35:29 2020 10.250.7.77:28544 Connection reset, restarting [0]\nSat Jan 11 17:35:29 2020 100.64.1.1:57776 
Connection reset, restarting [0]\nSat Jan 11 17:35:33 2020 TCP connection established with [AF_INET]10.250.7.77:12422\nSat Jan 11 17:35:33 2020 10.250.7.77:12422 TCP connection established with [AF_INET]100.64.1.1:53362\nSat Jan 11 17:35:33 2020 10.250.7.77:12422 Connection reset, restarting [0]\nSat Jan 11 17:35:33 2020 100.64.1.1:53362 Connection reset, restarting [0]\nSat Jan 11 17:35:39 2020 TCP connection established with [AF_INET]10.250.7.77:28552\nSat Jan 11 17:35:39 2020 10.250.7.77:28552 TCP connection established with [AF_INET]100.64.1.1:57784\nSat Jan 11 17:35:39 2020 10.250.7.77:28552 Connection reset, restarting [0]\nSat Jan 11 17:35:39 2020 100.64.1.1:57784 Connection reset, restarting [0]\nSat Jan 11 17:35:43 2020 TCP connection established with [AF_INET]10.250.7.77:12426\nSat Jan 11 17:35:43 2020 10.250.7.77:12426 TCP connection established with [AF_INET]100.64.1.1:53366\nSat Jan 11 17:35:43 2020 10.250.7.77:12426 Connection reset, restarting [0]\nSat Jan 11 17:35:43 2020 100.64.1.1:53366 Connection reset, restarting [0]\nSat Jan 11 17:35:49 2020 TCP connection established with [AF_INET]10.250.7.77:28558\nSat Jan 11 17:35:49 2020 10.250.7.77:28558 TCP connection established with [AF_INET]100.64.1.1:57790\nSat Jan 11 17:35:49 2020 10.250.7.77:28558 Connection reset, restarting [0]\nSat Jan 11 17:35:49 2020 100.64.1.1:57790 Connection reset, restarting [0]\nSat Jan 11 17:35:53 2020 TCP connection established with [AF_INET]10.250.7.77:12476\nSat Jan 11 17:35:53 2020 10.250.7.77:12476 TCP connection established with [AF_INET]100.64.1.1:53416\nSat Jan 11 17:35:53 2020 10.250.7.77:12476 Connection reset, restarting [0]\nSat Jan 11 17:35:53 2020 100.64.1.1:53416 Connection reset, restarting [0]\nSat Jan 11 17:35:59 2020 TCP connection established with [AF_INET]10.250.7.77:28572\nSat Jan 11 17:35:59 2020 10.250.7.77:28572 TCP connection established with [AF_INET]100.64.1.1:57804\nSat Jan 11 17:35:59 2020 10.250.7.77:28572 Connection reset, restarting [0]\nSat Jan 11 17:35:59 2020 100.64.1.1:57804 Connection reset, restarting [0]\nSat Jan 11 17:36:03 2020 TCP connection established with [AF_INET]10.250.7.77:12486\nSat Jan 11 17:36:03 2020 10.250.7.77:12486 TCP connection established with [AF_INET]100.64.1.1:53426\nSat Jan 11 17:36:03 2020 10.250.7.77:12486 Connection reset, restarting [0]\nSat Jan 11 17:36:03 2020 100.64.1.1:53426 Connection reset, restarting [0]\nSat Jan 11 17:36:09 2020 TCP connection established with [AF_INET]10.250.7.77:28584\nSat Jan 11 17:36:09 2020 10.250.7.77:28584 TCP connection established with [AF_INET]100.64.1.1:57816\nSat Jan 11 17:36:09 2020 10.250.7.77:28584 Connection reset, restarting [0]\nSat Jan 11 17:36:09 2020 100.64.1.1:57816 Connection reset, restarting [0]\nSat Jan 11 17:36:13 2020 TCP connection established with [AF_INET]10.250.7.77:12500\nSat Jan 11 17:36:13 2020 10.250.7.77:12500 TCP connection established with [AF_INET]100.64.1.1:53440\nSat Jan 11 17:36:13 2020 10.250.7.77:12500 Connection reset, restarting [0]\nSat Jan 11 17:36:13 2020 100.64.1.1:53440 Connection reset, restarting [0]\nSat Jan 11 17:36:19 2020 TCP connection established with [AF_INET]10.250.7.77:28600\nSat Jan 11 17:36:19 2020 10.250.7.77:28600 TCP connection established with [AF_INET]100.64.1.1:57832\nSat Jan 11 17:36:19 2020 10.250.7.77:28600 Connection reset, restarting [0]\nSat Jan 11 17:36:19 2020 100.64.1.1:57832 Connection reset, restarting [0]\nSat Jan 11 17:36:23 2020 TCP connection established with [AF_INET]10.250.7.77:12512\nSat Jan 11 17:36:23 2020 
10.250.7.77:12512 TCP connection established with [AF_INET]100.64.1.1:53452\nSat Jan 11 17:36:23 2020 10.250.7.77:12512 Connection reset, restarting [0]\nSat Jan 11 17:36:23 2020 100.64.1.1:53452 Connection reset, restarting [0]\nSat Jan 11 17:36:29 2020 TCP connection established with [AF_INET]10.250.7.77:28606\nSat Jan 11 17:36:29 2020 10.250.7.77:28606 TCP connection established with [AF_INET]100.64.1.1:57838\nSat Jan 11 17:36:29 2020 10.250.7.77:28606 Connection reset, restarting [0]\nSat Jan 11 17:36:29 2020 100.64.1.1:57838 Connection reset, restarting [0]\nSat Jan 11 17:36:33 2020 TCP connection established with [AF_INET]10.250.7.77:12522\nSat Jan 11 17:36:33 2020 10.250.7.77:12522 TCP connection established with [AF_INET]100.64.1.1:53462\nSat Jan 11 17:36:33 2020 10.250.7.77:12522 Connection reset, restarting [0]\nSat Jan 11 17:36:33 2020 100.64.1.1:53462 Connection reset, restarting [0]\nSat Jan 11 17:36:39 2020 TCP connection established with [AF_INET]10.250.7.77:28614\nSat Jan 11 17:36:39 2020 10.250.7.77:28614 TCP connection established with [AF_INET]100.64.1.1:57846\nSat Jan 11 17:36:39 2020 10.250.7.77:28614 Connection reset, restarting [0]\nSat Jan 11 17:36:39 2020 100.64.1.1:57846 Connection reset, restarting [0]\nSat Jan 11 17:36:43 2020 TCP connection established with [AF_INET]10.250.7.77:12526\nSat Jan 11 17:36:43 2020 10.250.7.77:12526 TCP connection established with [AF_INET]100.64.1.1:53466\nSat Jan 11 17:36:43 2020 10.250.7.77:12526 Connection reset, restarting [0]\nSat Jan 11 17:36:43 2020 100.64.1.1:53466 Connection reset, restarting [0]\nSat Jan 11 17:36:49 2020 TCP connection established with [AF_INET]10.250.7.77:28620\nSat Jan 11 17:36:49 2020 10.250.7.77:28620 TCP connection established with [AF_INET]100.64.1.1:57852\nSat Jan 11 17:36:49 2020 10.250.7.77:28620 Connection reset, restarting [0]\nSat Jan 11 17:36:49 2020 100.64.1.1:57852 Connection reset, restarting [0]\nSat Jan 11 17:36:53 2020 TCP connection established with [AF_INET]10.250.7.77:12538\nSat Jan 11 17:36:53 2020 10.250.7.77:12538 TCP connection established with [AF_INET]100.64.1.1:53478\nSat Jan 11 17:36:53 2020 10.250.7.77:12538 Connection reset, restarting [0]\nSat Jan 11 17:36:53 2020 100.64.1.1:53478 Connection reset, restarting [0]\nSat Jan 11 17:36:59 2020 TCP connection established with [AF_INET]10.250.7.77:28636\nSat Jan 11 17:36:59 2020 10.250.7.77:28636 TCP connection established with [AF_INET]100.64.1.1:57868\nSat Jan 11 17:36:59 2020 10.250.7.77:28636 Connection reset, restarting [0]\nSat Jan 11 17:36:59 2020 100.64.1.1:57868 Connection reset, restarting [0]\nSat Jan 11 17:37:03 2020 TCP connection established with [AF_INET]10.250.7.77:12548\nSat Jan 11 17:37:03 2020 10.250.7.77:12548 TCP connection established with [AF_INET]100.64.1.1:53488\nSat Jan 11 17:37:03 2020 10.250.7.77:12548 Connection reset, restarting [0]\nSat Jan 11 17:37:03 2020 100.64.1.1:53488 Connection reset, restarting [0]\nSat Jan 11 17:37:09 2020 TCP connection established with [AF_INET]10.250.7.77:28646\nSat Jan 11 17:37:09 2020 10.250.7.77:28646 TCP connection established with [AF_INET]100.64.1.1:57878\nSat Jan 11 17:37:09 2020 10.250.7.77:28646 Connection reset, restarting [0]\nSat Jan 11 17:37:09 2020 100.64.1.1:57878 Connection reset, restarting [0]\nSat Jan 11 17:37:13 2020 TCP connection established with [AF_INET]10.250.7.77:12564\nSat Jan 11 17:37:13 2020 10.250.7.77:12564 TCP connection established with [AF_INET]100.64.1.1:53504\nSat Jan 11 17:37:13 2020 10.250.7.77:12564 Connection reset, restarting 
[0]\nSat Jan 11 17:37:13 2020 100.64.1.1:53504 Connection reset, restarting [0]\nSat Jan 11 17:37:19 2020 TCP connection established with [AF_INET]10.250.7.77:28658\nSat Jan 11 17:37:19 2020 10.250.7.77:28658 TCP connection established with [AF_INET]100.64.1.1:57890\nSat Jan 11 17:37:19 2020 10.250.7.77:28658 Connection reset, restarting [0]\nSat Jan 11 17:37:19 2020 100.64.1.1:57890 Connection reset, restarting [0]\nSat Jan 11 17:37:23 2020 TCP connection established with [AF_INET]10.250.7.77:12570\nSat Jan 11 17:37:23 2020 10.250.7.77:12570 TCP connection established with [AF_INET]100.64.1.1:53510\nSat Jan 11 17:37:23 2020 10.250.7.77:12570 Connection reset, restarting [0]\nSat Jan 11 17:37:23 2020 100.64.1.1:53510 Connection reset, restarting [0]\nSat Jan 11 17:37:29 2020 TCP connection established with [AF_INET]10.250.7.77:28666\nSat Jan 11 17:37:29 2020 10.250.7.77:28666 TCP connection established with [AF_INET]100.64.1.1:57898\nSat Jan 11 17:37:29 2020 10.250.7.77:28666 Connection reset, restarting [0]\nSat Jan 11 17:37:29 2020 100.64.1.1:57898 Connection reset, restarting [0]\nSat Jan 11 17:37:33 2020 TCP connection established with [AF_INET]10.250.7.77:12580\nSat Jan 11 17:37:33 2020 10.250.7.77:12580 TCP connection established with [AF_INET]100.64.1.1:53520\nSat Jan 11 17:37:33 2020 10.250.7.77:12580 Connection reset, restarting [0]\nSat Jan 11 17:37:33 2020 100.64.1.1:53520 Connection reset, restarting [0]\nSat Jan 11 17:37:39 2020 TCP connection established with [AF_INET]100.64.1.1:57908\nSat Jan 11 17:37:39 2020 100.64.1.1:57908 Connection reset, restarting [0]\nSat Jan 11 17:37:39 2020 TCP connection established with [AF_INET]10.250.7.77:28676\nSat Jan 11 17:37:39 2020 10.250.7.77:28676 Connection reset, restarting [0]\nSat Jan 11 17:37:43 2020 TCP connection established with [AF_INET]10.250.7.77:12584\nSat Jan 11 17:37:43 2020 10.250.7.77:12584 TCP connection established with [AF_INET]100.64.1.1:53524\nSat Jan 11 17:37:43 2020 10.250.7.77:12584 Connection reset, restarting [0]\nSat Jan 11 17:37:43 2020 100.64.1.1:53524 Connection reset, restarting [0]\nSat Jan 11 17:37:49 2020 TCP connection established with [AF_INET]10.250.7.77:28682\nSat Jan 11 17:37:49 2020 10.250.7.77:28682 TCP connection established with [AF_INET]100.64.1.1:57914\nSat Jan 11 17:37:49 2020 10.250.7.77:28682 Connection reset, restarting [0]\nSat Jan 11 17:37:49 2020 100.64.1.1:57914 Connection reset, restarting [0]\nSat Jan 11 17:37:53 2020 TCP connection established with [AF_INET]10.250.7.77:12596\nSat Jan 11 17:37:53 2020 10.250.7.77:12596 TCP connection established with [AF_INET]100.64.1.1:53536\nSat Jan 11 17:37:53 2020 10.250.7.77:12596 Connection reset, restarting [0]\nSat Jan 11 17:37:53 2020 100.64.1.1:53536 Connection reset, restarting [0]\nSat Jan 11 17:37:59 2020 TCP connection established with [AF_INET]10.250.7.77:28696\nSat Jan 11 17:37:59 2020 10.250.7.77:28696 TCP connection established with [AF_INET]100.64.1.1:57928\nSat Jan 11 17:37:59 2020 10.250.7.77:28696 Connection reset, restarting [0]\nSat Jan 11 17:37:59 2020 100.64.1.1:57928 Connection reset, restarting [0]\nSat Jan 11 17:38:03 2020 TCP connection established with [AF_INET]10.250.7.77:12606\nSat Jan 11 17:38:03 2020 10.250.7.77:12606 TCP connection established with [AF_INET]100.64.1.1:53546\nSat Jan 11 17:38:03 2020 10.250.7.77:12606 Connection reset, restarting [0]\nSat Jan 11 17:38:03 2020 100.64.1.1:53546 Connection reset, restarting [0]\nSat Jan 11 17:38:09 2020 TCP connection established with [AF_INET]10.250.7.77:28706\nSat 
Jan 11 17:38:09 2020 10.250.7.77:28706 TCP connection established with [AF_INET]100.64.1.1:57938\nSat Jan 11 17:38:09 2020 10.250.7.77:28706 Connection reset, restarting [0]\nSat Jan 11 17:38:09 2020 100.64.1.1:57938 Connection reset, restarting [0]\nSat Jan 11 17:38:13 2020 TCP connection established with [AF_INET]10.250.7.77:12618\nSat Jan 11 17:38:13 2020 10.250.7.77:12618 TCP connection established with [AF_INET]100.64.1.1:53558\nSat Jan 11 17:38:13 2020 10.250.7.77:12618 Connection reset, restarting [0]\nSat Jan 11 17:38:13 2020 100.64.1.1:53558 Connection reset, restarting [0]\nSat Jan 11 17:38:19 2020 TCP connection established with [AF_INET]10.250.7.77:28718\nSat Jan 11 17:38:19 2020 10.250.7.77:28718 TCP connection established with [AF_INET]100.64.1.1:57950\nSat Jan 11 17:38:19 2020 10.250.7.77:28718 Connection reset, restarting [0]\nSat Jan 11 17:38:19 2020 100.64.1.1:57950 Connection reset, restarting [0]\nSat Jan 11 17:38:23 2020 TCP connection established with [AF_INET]10.250.7.77:12628\nSat Jan 11 17:38:23 2020 10.250.7.77:12628 TCP connection established with [AF_INET]100.64.1.1:53568\nSat Jan 11 17:38:23 2020 10.250.7.77:12628 Connection reset, restarting [0]\nSat Jan 11 17:38:23 2020 100.64.1.1:53568 Connection reset, restarting [0]\nSat Jan 11 17:38:29 2020 TCP connection established with [AF_INET]10.250.7.77:28722\nSat Jan 11 17:38:29 2020 10.250.7.77:28722 TCP connection established with [AF_INET]100.64.1.1:57954\nSat Jan 11 17:38:29 2020 10.250.7.77:28722 Connection reset, restarting [0]\nSat Jan 11 17:38:29 2020 100.64.1.1:57954 Connection reset, restarting [0]\nSat Jan 11 17:38:33 2020 TCP connection established with [AF_INET]10.250.7.77:12638\nSat Jan 11 17:38:33 2020 10.250.7.77:12638 TCP connection established with [AF_INET]100.64.1.1:53578\nSat Jan 11 17:38:33 2020 10.250.7.77:12638 Connection reset, restarting [0]\nSat Jan 11 17:38:33 2020 100.64.1.1:53578 Connection reset, restarting [0]\nSat Jan 11 17:38:39 2020 TCP connection established with [AF_INET]10.250.7.77:28730\nSat Jan 11 17:38:39 2020 10.250.7.77:28730 TCP connection established with [AF_INET]100.64.1.1:57962\nSat Jan 11 17:38:39 2020 10.250.7.77:28730 Connection reset, restarting [0]\nSat Jan 11 17:38:39 2020 100.64.1.1:57962 Connection reset, restarting [0]\nSat Jan 11 17:38:43 2020 TCP connection established with [AF_INET]10.250.7.77:12644\nSat Jan 11 17:38:43 2020 10.250.7.77:12644 TCP connection established with [AF_INET]100.64.1.1:53584\nSat Jan 11 17:38:43 2020 10.250.7.77:12644 Connection reset, restarting [0]\nSat Jan 11 17:38:43 2020 100.64.1.1:53584 Connection reset, restarting [0]\nSat Jan 11 17:38:49 2020 TCP connection established with [AF_INET]10.250.7.77:28776\nSat Jan 11 17:38:49 2020 10.250.7.77:28776 TCP connection established with [AF_INET]100.64.1.1:58008\nSat Jan 11 17:38:49 2020 10.250.7.77:28776 Connection reset, restarting [0]\nSat Jan 11 17:38:49 2020 100.64.1.1:58008 Connection reset, restarting [0]\nSat Jan 11 17:38:53 2020 TCP connection established with [AF_INET]10.250.7.77:12654\nSat Jan 11 17:38:53 2020 10.250.7.77:12654 TCP connection established with [AF_INET]100.64.1.1:53594\nSat Jan 11 17:38:53 2020 10.250.7.77:12654 Connection reset, restarting [0]\nSat Jan 11 17:38:53 2020 100.64.1.1:53594 Connection reset, restarting [0]\nSat Jan 11 17:38:59 2020 TCP connection established with [AF_INET]10.250.7.77:28788\nSat Jan 11 17:38:59 2020 10.250.7.77:28788 TCP connection established with [AF_INET]100.64.1.1:58020\nSat Jan 11 17:38:59 2020 10.250.7.77:28788 Connection 
reset, restarting [0]\nSat Jan 11 17:38:59 2020 100.64.1.1:58020 Connection reset, restarting [0]\nSat Jan 11 17:39:03 2020 TCP connection established with [AF_INET]10.250.7.77:12664\nSat Jan 11 17:39:03 2020 10.250.7.77:12664 TCP connection established with [AF_INET]100.64.1.1:53604\nSat Jan 11 17:39:03 2020 10.250.7.77:12664 Connection reset, restarting [0]\nSat Jan 11 17:39:03 2020 100.64.1.1:53604 Connection reset, restarting [0]\nSat Jan 11 17:39:09 2020 TCP connection established with [AF_INET]10.250.7.77:28802\nSat Jan 11 17:39:09 2020 10.250.7.77:28802 TCP connection established with [AF_INET]100.64.1.1:58034\nSat Jan 11 17:39:09 2020 10.250.7.77:28802 Connection reset, restarting [0]\nSat Jan 11 17:39:09 2020 100.64.1.1:58034 Connection reset, restarting [0]\nSat Jan 11 17:39:13 2020 TCP connection established with [AF_INET]10.250.7.77:12676\nSat Jan 11 17:39:13 2020 10.250.7.77:12676 TCP connection established with [AF_INET]100.64.1.1:53616\nSat Jan 11 17:39:13 2020 10.250.7.77:12676 Connection reset, restarting [0]\nSat Jan 11 17:39:13 2020 100.64.1.1:53616 Connection reset, restarting [0]\nSat Jan 11 17:39:19 2020 TCP connection established with [AF_INET]10.250.7.77:28818\nSat Jan 11 17:39:19 2020 10.250.7.77:28818 TCP connection established with [AF_INET]100.64.1.1:58050\nSat Jan 11 17:39:19 2020 10.250.7.77:28818 Connection reset, restarting [0]\nSat Jan 11 17:39:19 2020 100.64.1.1:58050 Connection reset, restarting [0]\nSat Jan 11 17:39:23 2020 TCP connection established with [AF_INET]10.250.7.77:12682\nSat Jan 11 17:39:23 2020 10.250.7.77:12682 TCP connection established with [AF_INET]100.64.1.1:53622\nSat Jan 11 17:39:23 2020 10.250.7.77:12682 Connection reset, restarting [0]\nSat Jan 11 17:39:23 2020 100.64.1.1:53622 Connection reset, restarting [0]\nSat Jan 11 17:39:29 2020 TCP connection established with [AF_INET]10.250.7.77:28822\nSat Jan 11 17:39:29 2020 10.250.7.77:28822 TCP connection established with [AF_INET]100.64.1.1:58054\nSat Jan 11 17:39:29 2020 10.250.7.77:28822 Connection reset, restarting [0]\nSat Jan 11 17:39:29 2020 100.64.1.1:58054 Connection reset, restarting [0]\nSat Jan 11 17:39:33 2020 TCP connection established with [AF_INET]10.250.7.77:12692\nSat Jan 11 17:39:33 2020 10.250.7.77:12692 TCP connection established with [AF_INET]100.64.1.1:53632\nSat Jan 11 17:39:33 2020 10.250.7.77:12692 Connection reset, restarting [0]\nSat Jan 11 17:39:33 2020 100.64.1.1:53632 Connection reset, restarting [0]\nSat Jan 11 17:39:39 2020 TCP connection established with [AF_INET]10.250.7.77:28830\nSat Jan 11 17:39:39 2020 10.250.7.77:28830 TCP connection established with [AF_INET]100.64.1.1:58062\nSat Jan 11 17:39:39 2020 10.250.7.77:28830 Connection reset, restarting [0]\nSat Jan 11 17:39:39 2020 100.64.1.1:58062 Connection reset, restarting [0]\nSat Jan 11 17:39:43 2020 TCP connection established with [AF_INET]10.250.7.77:12702\nSat Jan 11 17:39:43 2020 10.250.7.77:12702 TCP connection established with [AF_INET]100.64.1.1:53642\nSat Jan 11 17:39:43 2020 10.250.7.77:12702 Connection reset, restarting [0]\nSat Jan 11 17:39:43 2020 100.64.1.1:53642 Connection reset, restarting [0]\nSat Jan 11 17:39:49 2020 TCP connection established with [AF_INET]10.250.7.77:28838\nSat Jan 11 17:39:49 2020 10.250.7.77:28838 TCP connection established with [AF_INET]100.64.1.1:58070\nSat Jan 11 17:39:49 2020 10.250.7.77:28838 Connection reset, restarting [0]\nSat Jan 11 17:39:49 2020 100.64.1.1:58070 Connection reset, restarting [0]\nSat Jan 11 17:39:53 2020 TCP connection established 
with [AF_INET]10.250.7.77:12712\nSat Jan 11 17:39:53 2020 10.250.7.77:12712 TCP connection established with [AF_INET]100.64.1.1:53652\nSat Jan 11 17:39:53 2020 10.250.7.77:12712 Connection reset, restarting [0]\nSat Jan 11 17:39:53 2020 100.64.1.1:53652 Connection reset, restarting [0]\nSat Jan 11 17:39:59 2020 TCP connection established with [AF_INET]10.250.7.77:28854\nSat Jan 11 17:39:59 2020 10.250.7.77:28854 TCP connection established with [AF_INET]100.64.1.1:58086\nSat Jan 11 17:39:59 2020 10.250.7.77:28854 Connection reset, restarting [0]\nSat Jan 11 17:39:59 2020 100.64.1.1:58086 Connection reset, restarting [0]\nSat Jan 11 17:40:03 2020 TCP connection established with [AF_INET]10.250.7.77:12722\nSat Jan 11 17:40:03 2020 10.250.7.77:12722 TCP connection established with [AF_INET]100.64.1.1:53662\nSat Jan 11 17:40:03 2020 10.250.7.77:12722 Connection reset, restarting [0]\nSat Jan 11 17:40:03 2020 100.64.1.1:53662 Connection reset, restarting [0]\nSat Jan 11 17:40:09 2020 TCP connection established with [AF_INET]10.250.7.77:28864\nSat Jan 11 17:40:09 2020 10.250.7.77:28864 Connection reset, restarting [0]\nSat Jan 11 17:40:09 2020 TCP connection established with [AF_INET]100.64.1.1:58096\nSat Jan 11 17:40:09 2020 100.64.1.1:58096 Connection reset, restarting [0]\nSat Jan 11 17:40:13 2020 TCP connection established with [AF_INET]10.250.7.77:12734\nSat Jan 11 17:40:13 2020 10.250.7.77:12734 TCP connection established with [AF_INET]100.64.1.1:53674\nSat Jan 11 17:40:13 2020 10.250.7.77:12734 Connection reset, restarting [0]\nSat Jan 11 17:40:13 2020 100.64.1.1:53674 Connection reset, restarting [0]\nSat Jan 11 17:40:19 2020 TCP connection established with [AF_INET]10.250.7.77:28876\nSat Jan 11 17:40:19 2020 10.250.7.77:28876 TCP connection established with [AF_INET]100.64.1.1:58108\nSat Jan 11 17:40:19 2020 10.250.7.77:28876 Connection reset, restarting [0]\nSat Jan 11 17:40:19 2020 100.64.1.1:58108 Connection reset, restarting [0]\nSat Jan 11 17:40:23 2020 TCP connection established with [AF_INET]10.250.7.77:12740\nSat Jan 11 17:40:23 2020 10.250.7.77:12740 TCP connection established with [AF_INET]100.64.1.1:53680\nSat Jan 11 17:40:23 2020 10.250.7.77:12740 Connection reset, restarting [0]\nSat Jan 11 17:40:23 2020 100.64.1.1:53680 Connection reset, restarting [0]\nSat Jan 11 17:40:29 2020 TCP connection established with [AF_INET]10.250.7.77:28880\nSat Jan 11 17:40:29 2020 10.250.7.77:28880 TCP connection established with [AF_INET]100.64.1.1:58112\nSat Jan 11 17:40:29 2020 10.250.7.77:28880 Connection reset, restarting [0]\nSat Jan 11 17:40:29 2020 100.64.1.1:58112 Connection reset, restarting [0]\nSat Jan 11 17:40:33 2020 TCP connection established with [AF_INET]10.250.7.77:12752\nSat Jan 11 17:40:33 2020 10.250.7.77:12752 TCP connection established with [AF_INET]100.64.1.1:53692\nSat Jan 11 17:40:33 2020 10.250.7.77:12752 Connection reset, restarting [0]\nSat Jan 11 17:40:33 2020 100.64.1.1:53692 Connection reset, restarting [0]\nSat Jan 11 17:40:39 2020 TCP connection established with [AF_INET]10.250.7.77:28888\nSat Jan 11 17:40:39 2020 10.250.7.77:28888 TCP connection established with [AF_INET]100.64.1.1:58120\nSat Jan 11 17:40:39 2020 10.250.7.77:28888 Connection reset, restarting [0]\nSat Jan 11 17:40:39 2020 100.64.1.1:58120 Connection reset, restarting [0]\nSat Jan 11 17:40:43 2020 TCP connection established with [AF_INET]10.250.7.77:12756\nSat Jan 11 17:40:43 2020 10.250.7.77:12756 TCP connection established with [AF_INET]100.64.1.1:53696\nSat Jan 11 17:40:43 2020 
10.250.7.77:12756 Connection reset, restarting [0]\nSat Jan 11 17:40:43 2020 100.64.1.1:53696 Connection reset, restarting [0]\nSat Jan 11 17:40:49 2020 TCP connection established with [AF_INET]10.250.7.77:28896\nSat Jan 11 17:40:49 2020 10.250.7.77:28896 TCP connection established with [AF_INET]100.64.1.1:58128\nSat Jan 11 17:40:49 2020 10.250.7.77:28896 Connection reset, restarting [0]\nSat Jan 11 17:40:49 2020 100.64.1.1:58128 Connection reset, restarting [0]\nSat Jan 11 17:40:53 2020 TCP connection established with [AF_INET]10.250.7.77:12770\nSat Jan 11 17:40:53 2020 10.250.7.77:12770 TCP connection established with [AF_INET]100.64.1.1:53710\nSat Jan 11 17:40:53 2020 10.250.7.77:12770 Connection reset, restarting [0]\nSat Jan 11 17:40:53 2020 100.64.1.1:53710 Connection reset, restarting [0]\nSat Jan 11 17:40:59 2020 TCP connection established with [AF_INET]10.250.7.77:28908\nSat Jan 11 17:40:59 2020 10.250.7.77:28908 TCP connection established with [AF_INET]100.64.1.1:58140\nSat Jan 11 17:40:59 2020 10.250.7.77:28908 Connection reset, restarting [0]\nSat Jan 11 17:40:59 2020 100.64.1.1:58140 Connection reset, restarting [0]\nSat Jan 11 17:41:03 2020 TCP connection established with [AF_INET]10.250.7.77:12780\nSat Jan 11 17:41:03 2020 10.250.7.77:12780 TCP connection established with [AF_INET]100.64.1.1:53720\nSat Jan 11 17:41:03 2020 10.250.7.77:12780 Connection reset, restarting [0]\nSat Jan 11 17:41:03 2020 100.64.1.1:53720 Connection reset, restarting [0]\nSat Jan 11 17:41:09 2020 TCP connection established with [AF_INET]10.250.7.77:28918\nSat Jan 11 17:41:09 2020 10.250.7.77:28918 TCP connection established with [AF_INET]100.64.1.1:58150\nSat Jan 11 17:41:09 2020 10.250.7.77:28918 Connection reset, restarting [0]\nSat Jan 11 17:41:09 2020 100.64.1.1:58150 Connection reset, restarting [0]\nSat Jan 11 17:41:13 2020 TCP connection established with [AF_INET]10.250.7.77:12792\nSat Jan 11 17:41:13 2020 10.250.7.77:12792 TCP connection established with [AF_INET]100.64.1.1:53732\nSat Jan 11 17:41:13 2020 10.250.7.77:12792 Connection reset, restarting [0]\nSat Jan 11 17:41:13 2020 100.64.1.1:53732 Connection reset, restarting [0]\nSat Jan 11 17:41:19 2020 TCP connection established with [AF_INET]10.250.7.77:28934\nSat Jan 11 17:41:19 2020 10.250.7.77:28934 TCP connection established with [AF_INET]100.64.1.1:58166\nSat Jan 11 17:41:19 2020 10.250.7.77:28934 Connection reset, restarting [0]\nSat Jan 11 17:41:19 2020 100.64.1.1:58166 Connection reset, restarting [0]\nSat Jan 11 17:41:23 2020 TCP connection established with [AF_INET]10.250.7.77:12798\nSat Jan 11 17:41:23 2020 10.250.7.77:12798 TCP connection established with [AF_INET]100.64.1.1:53738\nSat Jan 11 17:41:23 2020 10.250.7.77:12798 Connection reset, restarting [0]\nSat Jan 11 17:41:23 2020 100.64.1.1:53738 Connection reset, restarting [0]\nSat Jan 11 17:41:29 2020 TCP connection established with [AF_INET]10.250.7.77:28938\nSat Jan 11 17:41:29 2020 10.250.7.77:28938 TCP connection established with [AF_INET]100.64.1.1:58170\nSat Jan 11 17:41:29 2020 10.250.7.77:28938 Connection reset, restarting [0]\nSat Jan 11 17:41:29 2020 100.64.1.1:58170 Connection reset, restarting [0]\nSat Jan 11 17:41:33 2020 TCP connection established with [AF_INET]10.250.7.77:12810\nSat Jan 11 17:41:33 2020 10.250.7.77:12810 TCP connection established with [AF_INET]100.64.1.1:53750\nSat Jan 11 17:41:33 2020 10.250.7.77:12810 Connection reset, restarting [0]\nSat Jan 11 17:41:33 2020 100.64.1.1:53750 Connection reset, restarting [0]\nSat Jan 11 17:41:39 2020 
TCP connection established with [AF_INET]10.250.7.77:28948\nSat Jan 11 17:41:39 2020 10.250.7.77:28948 Connection reset, restarting [0]\n[the openvpn log then repeats the same pair of entries for both peers (10.250.7.77 and 100.64.1.1, with rotating source ports) every few seconds from Sat Jan 11 17:41:39 2020 through Sat Jan 11 17:59:43 2020: "TCP connection established with [AF_INET]<peer>:<port>" immediately followed by "<peer>:<port> Connection reset, restarting [0]"; the only distinct entries in this window are the vpn-seed peer-info lines at 17:59:10]\nSat Jan 11 17:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_VER=2.4.6\nSat Jan 11 17:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_PLAT=linux\nSat Jan 11 17:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_PROTO=2\nSat Jan 11 17:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_LZ4=1\nSat Jan 11 17:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_LZ4v2=1\nSat Jan 11 17:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_LZO=1\nSat Jan 11 17:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_COMP_STUB=1\nSat Jan 11 17:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_COMP_STUBv2=1\nSat Jan 11 17:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_TCPNL=1\nSat Jan 11 17:59:43 2020 TCP connection established with [AF_INET]10.250.7.77:13954\nSat Jan 11 17:59:43 2020 10.250.7.77:13954 TCP connection established with [AF_INET]100.64.1.1:54894\nSat Jan 11 17:59:43 2020 
10.250.7.77:13954 Connection reset, restarting [0]\nSat Jan 11 17:59:43 2020 100.64.1.1:54894 Connection reset, restarting [0]\nSat Jan 11 17:59:49 2020 TCP connection established with [AF_INET]10.250.7.77:30090\nSat Jan 11 17:59:49 2020 10.250.7.77:30090 TCP connection established with [AF_INET]100.64.1.1:59322\nSat Jan 11 17:59:49 2020 10.250.7.77:30090 Connection reset, restarting [0]\nSat Jan 11 17:59:49 2020 100.64.1.1:59322 Connection reset, restarting [0]\nSat Jan 11 17:59:53 2020 TCP connection established with [AF_INET]10.250.7.77:13960\nSat Jan 11 17:59:53 2020 10.250.7.77:13960 TCP connection established with [AF_INET]100.64.1.1:54900\nSat Jan 11 17:59:53 2020 10.250.7.77:13960 Connection reset, restarting [0]\nSat Jan 11 17:59:53 2020 100.64.1.1:54900 Connection reset, restarting [0]\nSat Jan 11 17:59:59 2020 TCP connection established with [AF_INET]10.250.7.77:30106\nSat Jan 11 17:59:59 2020 10.250.7.77:30106 TCP connection established with [AF_INET]100.64.1.1:59338\nSat Jan 11 17:59:59 2020 10.250.7.77:30106 Connection reset, restarting [0]\nSat Jan 11 17:59:59 2020 100.64.1.1:59338 Connection reset, restarting [0]\nSat Jan 11 18:00:03 2020 TCP connection established with [AF_INET]10.250.7.77:13976\nSat Jan 11 18:00:03 2020 10.250.7.77:13976 TCP connection established with [AF_INET]100.64.1.1:54916\nSat Jan 11 18:00:03 2020 10.250.7.77:13976 Connection reset, restarting [0]\nSat Jan 11 18:00:03 2020 100.64.1.1:54916 Connection reset, restarting [0]\nSat Jan 11 18:00:09 2020 TCP connection established with [AF_INET]10.250.7.77:30118\nSat Jan 11 18:00:09 2020 10.250.7.77:30118 TCP connection established with [AF_INET]100.64.1.1:59350\nSat Jan 11 18:00:09 2020 10.250.7.77:30118 Connection reset, restarting [0]\nSat Jan 11 18:00:09 2020 100.64.1.1:59350 Connection reset, restarting [0]\nSat Jan 11 18:00:13 2020 TCP connection established with [AF_INET]10.250.7.77:13986\nSat Jan 11 18:00:13 2020 10.250.7.77:13986 TCP connection established with [AF_INET]100.64.1.1:54926\nSat Jan 11 18:00:13 2020 10.250.7.77:13986 Connection reset, restarting [0]\nSat Jan 11 18:00:13 2020 100.64.1.1:54926 Connection reset, restarting [0]\nSat Jan 11 18:00:19 2020 TCP connection established with [AF_INET]10.250.7.77:30132\nSat Jan 11 18:00:19 2020 10.250.7.77:30132 TCP connection established with [AF_INET]100.64.1.1:59364\nSat Jan 11 18:00:19 2020 10.250.7.77:30132 Connection reset, restarting [0]\nSat Jan 11 18:00:19 2020 100.64.1.1:59364 Connection reset, restarting [0]\nSat Jan 11 18:00:23 2020 TCP connection established with [AF_INET]10.250.7.77:13996\nSat Jan 11 18:00:23 2020 10.250.7.77:13996 TCP connection established with [AF_INET]100.64.1.1:54936\nSat Jan 11 18:00:23 2020 10.250.7.77:13996 Connection reset, restarting [0]\nSat Jan 11 18:00:23 2020 100.64.1.1:54936 Connection reset, restarting [0]\nSat Jan 11 18:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_VER=2.4.6\nSat Jan 11 18:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_PLAT=linux\nSat Jan 11 18:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_PROTO=2\nSat Jan 11 18:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_LZ4=1\nSat Jan 11 18:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_LZ4v2=1\nSat Jan 11 18:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_LZO=1\nSat Jan 11 18:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_COMP_STUB=1\nSat Jan 11 18:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_COMP_STUBv2=1\nSat Jan 11 18:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_TCPNL=1\nSat Jan 11 18:00:29 2020 TCP 
connection established with [AF_INET]10.250.7.77:30136\nSat Jan 11 18:00:29 2020 10.250.7.77:30136 TCP connection established with [AF_INET]100.64.1.1:59368\nSat Jan 11 18:00:29 2020 10.250.7.77:30136 Connection reset, restarting [0]\nSat Jan 11 18:00:29 2020 100.64.1.1:59368 Connection reset, restarting [0]\nSat Jan 11 18:00:33 2020 TCP connection established with [AF_INET]10.250.7.77:14002\nSat Jan 11 18:00:33 2020 10.250.7.77:14002 TCP connection established with [AF_INET]100.64.1.1:54942\nSat Jan 11 18:00:33 2020 10.250.7.77:14002 Connection reset, restarting [0]\nSat Jan 11 18:00:33 2020 100.64.1.1:54942 Connection reset, restarting [0]\nSat Jan 11 18:00:39 2020 TCP connection established with [AF_INET]10.250.7.77:30144\nSat Jan 11 18:00:39 2020 10.250.7.77:30144 TCP connection established with [AF_INET]100.64.1.1:59376\nSat Jan 11 18:00:39 2020 10.250.7.77:30144 Connection reset, restarting [0]\nSat Jan 11 18:00:39 2020 100.64.1.1:59376 Connection reset, restarting [0]\nSat Jan 11 18:00:43 2020 TCP connection established with [AF_INET]10.250.7.77:14010\nSat Jan 11 18:00:43 2020 10.250.7.77:14010 TCP connection established with [AF_INET]100.64.1.1:54950\nSat Jan 11 18:00:43 2020 10.250.7.77:14010 Connection reset, restarting [0]\nSat Jan 11 18:00:43 2020 100.64.1.1:54950 Connection reset, restarting [0]\nSat Jan 11 18:00:49 2020 TCP connection established with [AF_INET]10.250.7.77:30150\nSat Jan 11 18:00:49 2020 10.250.7.77:30150 TCP connection established with [AF_INET]100.64.1.1:59382\nSat Jan 11 18:00:49 2020 10.250.7.77:30150 Connection reset, restarting [0]\nSat Jan 11 18:00:49 2020 100.64.1.1:59382 Connection reset, restarting [0]\nSat Jan 11 18:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_VER=2.4.6\nSat Jan 11 18:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_PLAT=linux\nSat Jan 11 18:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_PROTO=2\nSat Jan 11 18:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZ4=1\nSat Jan 11 18:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZ4v2=1\nSat Jan 11 18:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZO=1\nSat Jan 11 18:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_COMP_STUB=1\nSat Jan 11 18:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_COMP_STUBv2=1\nSat Jan 11 18:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_TCPNL=1\nSat Jan 11 18:00:53 2020 TCP connection established with [AF_INET]10.250.7.77:14020\nSat Jan 11 18:00:53 2020 10.250.7.77:14020 TCP connection established with [AF_INET]100.64.1.1:54960\nSat Jan 11 18:00:53 2020 10.250.7.77:14020 Connection reset, restarting [0]\nSat Jan 11 18:00:53 2020 100.64.1.1:54960 Connection reset, restarting [0]\nSat Jan 11 18:00:59 2020 TCP connection established with [AF_INET]100.64.1.1:59394\nSat Jan 11 18:00:59 2020 100.64.1.1:59394 TCP connection established with [AF_INET]10.250.7.77:30162\nSat Jan 11 18:00:59 2020 100.64.1.1:59394 Connection reset, restarting [0]\nSat Jan 11 18:00:59 2020 10.250.7.77:30162 Connection reset, restarting [0]\nSat Jan 11 18:01:03 2020 TCP connection established with [AF_INET]10.250.7.77:14034\nSat Jan 11 18:01:03 2020 10.250.7.77:14034 TCP connection established with [AF_INET]100.64.1.1:54974\nSat Jan 11 18:01:03 2020 10.250.7.77:14034 Connection reset, restarting [0]\nSat Jan 11 18:01:03 2020 100.64.1.1:54974 Connection reset, restarting [0]\nSat Jan 11 18:01:09 2020 TCP connection established with [AF_INET]10.250.7.77:30172\nSat Jan 11 18:01:09 2020 10.250.7.77:30172 TCP connection established with 
[AF_INET]100.64.1.1:59404\nSat Jan 11 18:01:09 2020 10.250.7.77:30172 Connection reset, restarting [0]\nSat Jan 11 18:01:09 2020 100.64.1.1:59404 Connection reset, restarting [0]\nSat Jan 11 18:01:13 2020 TCP connection established with [AF_INET]10.250.7.77:14044\nSat Jan 11 18:01:13 2020 10.250.7.77:14044 TCP connection established with [AF_INET]100.64.1.1:54984\nSat Jan 11 18:01:13 2020 10.250.7.77:14044 Connection reset, restarting [0]\nSat Jan 11 18:01:13 2020 100.64.1.1:54984 Connection reset, restarting [0]\nSat Jan 11 18:01:13 2020 TCP connection established with [AF_INET]100.64.1.1:54984\nSat Jan 11 18:01:19 2020 TCP connection established with [AF_INET]10.250.7.77:30190\nSat Jan 11 18:01:19 2020 10.250.7.77:30190 TCP connection established with [AF_INET]100.64.1.1:59422\nSat Jan 11 18:01:19 2020 10.250.7.77:30190 Connection reset, restarting [0]\nSat Jan 11 18:01:19 2020 100.64.1.1:59422 Connection reset, restarting [0]\nSat Jan 11 18:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_VER=2.4.6\nSat Jan 11 18:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_PLAT=linux\nSat Jan 11 18:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_PROTO=2\nSat Jan 11 18:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZ4=1\nSat Jan 11 18:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZ4v2=1\nSat Jan 11 18:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZO=1\nSat Jan 11 18:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_COMP_STUB=1\nSat Jan 11 18:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_COMP_STUBv2=1\nSat Jan 11 18:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_TCPNL=1\nSat Jan 11 18:01:23 2020 TCP connection established with [AF_INET]10.250.7.77:14054\nSat Jan 11 18:01:23 2020 10.250.7.77:14054 TCP connection established with [AF_INET]100.64.1.1:54994\nSat Jan 11 18:01:23 2020 10.250.7.77:14054 Connection reset, restarting [0]\nSat Jan 11 18:01:23 2020 100.64.1.1:54994 Connection reset, restarting [0]\nSat Jan 11 18:01:29 2020 TCP connection established with [AF_INET]10.250.7.77:30194\nSat Jan 11 18:01:29 2020 10.250.7.77:30194 TCP connection established with [AF_INET]100.64.1.1:59426\nSat Jan 11 18:01:29 2020 10.250.7.77:30194 Connection reset, restarting [0]\nSat Jan 11 18:01:29 2020 100.64.1.1:59426 Connection reset, restarting [0]\nSat Jan 11 18:01:33 2020 TCP connection established with [AF_INET]10.250.7.77:14060\nSat Jan 11 18:01:33 2020 10.250.7.77:14060 TCP connection established with [AF_INET]100.64.1.1:55000\nSat Jan 11 18:01:33 2020 10.250.7.77:14060 Connection reset, restarting [0]\nSat Jan 11 18:01:33 2020 100.64.1.1:55000 Connection reset, restarting [0]\nSat Jan 11 18:01:39 2020 TCP connection established with [AF_INET]10.250.7.77:30202\nSat Jan 11 18:01:39 2020 10.250.7.77:30202 TCP connection established with [AF_INET]100.64.1.1:59434\nSat Jan 11 18:01:39 2020 10.250.7.77:30202 Connection reset, restarting [0]\nSat Jan 11 18:01:39 2020 100.64.1.1:59434 Connection reset, restarting [0]\nSat Jan 11 18:01:43 2020 TCP connection established with [AF_INET]10.250.7.77:14068\nSat Jan 11 18:01:43 2020 10.250.7.77:14068 TCP connection established with [AF_INET]100.64.1.1:55008\nSat Jan 11 18:01:43 2020 10.250.7.77:14068 Connection reset, restarting [0]\nSat Jan 11 18:01:43 2020 100.64.1.1:55008 Connection reset, restarting [0]\nSat Jan 11 18:01:49 2020 TCP connection established with [AF_INET]10.250.7.77:30208\nSat Jan 11 18:01:49 2020 10.250.7.77:30208 TCP connection established with [AF_INET]100.64.1.1:59440\nSat Jan 11 18:01:49 2020 
10.250.7.77:30208 Connection reset, restarting [0]\nSat Jan 11 18:01:49 2020 100.64.1.1:59440 Connection reset, restarting [0]\nSat Jan 11 18:01:53 2020 TCP connection established with [AF_INET]10.250.7.77:14074\nSat Jan 11 18:01:53 2020 10.250.7.77:14074 TCP connection established with [AF_INET]100.64.1.1:55014\nSat Jan 11 18:01:53 2020 10.250.7.77:14074 Connection reset, restarting [0]\nSat Jan 11 18:01:53 2020 100.64.1.1:55014 Connection reset, restarting [0]\nSat Jan 11 18:01:59 2020 TCP connection established with [AF_INET]10.250.7.77:30220\nSat Jan 11 18:01:59 2020 10.250.7.77:30220 TCP connection established with [AF_INET]100.64.1.1:59452\nSat Jan 11 18:01:59 2020 10.250.7.77:30220 Connection reset, restarting [0]\nSat Jan 11 18:01:59 2020 100.64.1.1:59452 Connection reset, restarting [0]\nSat Jan 11 18:02:03 2020 TCP connection established with [AF_INET]10.250.7.77:14090\nSat Jan 11 18:02:03 2020 10.250.7.77:14090 TCP connection established with [AF_INET]100.64.1.1:55030\nSat Jan 11 18:02:03 2020 10.250.7.77:14090 Connection reset, restarting [0]\nSat Jan 11 18:02:03 2020 100.64.1.1:55030 Connection reset, restarting [0]\nSat Jan 11 18:02:09 2020 TCP connection established with [AF_INET]10.250.7.77:30232\nSat Jan 11 18:02:09 2020 10.250.7.77:30232 TCP connection established with [AF_INET]100.64.1.1:59464\nSat Jan 11 18:02:09 2020 10.250.7.77:30232 Connection reset, restarting [0]\nSat Jan 11 18:02:09 2020 100.64.1.1:59464 Connection reset, restarting [0]\nSat Jan 11 18:02:13 2020 100.64.1.1:54984 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)\nSat Jan 11 18:02:13 2020 100.64.1.1:54984 TLS Error: TLS handshake failed\nSat Jan 11 18:02:13 2020 100.64.1.1:54984 Fatal TLS error (check_tls_errors_co), restarting\nSat Jan 11 18:02:13 2020 TCP connection established with [AF_INET]10.250.7.77:14102\nSat Jan 11 18:02:13 2020 10.250.7.77:14102 TCP connection established with [AF_INET]100.64.1.1:55042\nSat Jan 11 18:02:13 2020 10.250.7.77:14102 Connection reset, restarting [0]\nSat Jan 11 18:02:13 2020 100.64.1.1:55042 Connection reset, restarting [0]\nSat Jan 11 18:02:19 2020 TCP connection established with [AF_INET]10.250.7.77:30244\nSat Jan 11 18:02:19 2020 10.250.7.77:30244 TCP connection established with [AF_INET]100.64.1.1:59476\nSat Jan 11 18:02:19 2020 10.250.7.77:30244 Connection reset, restarting [0]\nSat Jan 11 18:02:19 2020 100.64.1.1:59476 Connection reset, restarting [0]\nSat Jan 11 18:02:23 2020 TCP connection established with [AF_INET]10.250.7.77:14112\nSat Jan 11 18:02:23 2020 10.250.7.77:14112 TCP connection established with [AF_INET]100.64.1.1:55052\nSat Jan 11 18:02:23 2020 10.250.7.77:14112 Connection reset, restarting [0]\nSat Jan 11 18:02:23 2020 100.64.1.1:55052 Connection reset, restarting [0]\nSat Jan 11 18:02:29 2020 TCP connection established with [AF_INET]10.250.7.77:30252\nSat Jan 11 18:02:29 2020 10.250.7.77:30252 TCP connection established with [AF_INET]100.64.1.1:59484\nSat Jan 11 18:02:29 2020 10.250.7.77:30252 Connection reset, restarting [0]\nSat Jan 11 18:02:29 2020 100.64.1.1:59484 Connection reset, restarting [0]\nSat Jan 11 18:02:33 2020 TCP connection established with [AF_INET]10.250.7.77:14118\nSat Jan 11 18:02:33 2020 10.250.7.77:14118 TCP connection established with [AF_INET]100.64.1.1:55058\nSat Jan 11 18:02:33 2020 10.250.7.77:14118 Connection reset, restarting [0]\nSat Jan 11 18:02:33 2020 100.64.1.1:55058 Connection reset, restarting [0]\nSat Jan 11 18:02:39 2020 TCP connection established with 
[AF_INET]10.250.7.77:30260\nSat Jan 11 18:02:39 2020 10.250.7.77:30260 TCP connection established with [AF_INET]100.64.1.1:59492\nSat Jan 11 18:02:39 2020 10.250.7.77:30260 Connection reset, restarting [0]\nSat Jan 11 18:02:39 2020 100.64.1.1:59492 Connection reset, restarting [0]\nSat Jan 11 18:02:43 2020 TCP connection established with [AF_INET]10.250.7.77:14126\nSat Jan 11 18:02:43 2020 10.250.7.77:14126 TCP connection established with [AF_INET]100.64.1.1:55066\nSat Jan 11 18:02:43 2020 10.250.7.77:14126 Connection reset, restarting [0]\nSat Jan 11 18:02:43 2020 100.64.1.1:55066 Connection reset, restarting [0]\nSat Jan 11 18:02:49 2020 TCP connection established with [AF_INET]10.250.7.77:30266\nSat Jan 11 18:02:49 2020 10.250.7.77:30266 TCP connection established with [AF_INET]100.64.1.1:59498\nSat Jan 11 18:02:49 2020 10.250.7.77:30266 Connection reset, restarting [0]\nSat Jan 11 18:02:49 2020 100.64.1.1:59498 Connection reset, restarting [0]\nSat Jan 11 18:02:53 2020 TCP connection established with [AF_INET]10.250.7.77:14132\nSat Jan 11 18:02:53 2020 10.250.7.77:14132 TCP connection established with [AF_INET]100.64.1.1:55072\nSat Jan 11 18:02:53 2020 10.250.7.77:14132 Connection reset, restarting [0]\nSat Jan 11 18:02:53 2020 100.64.1.1:55072 Connection reset, restarting [0]\nSat Jan 11 18:02:59 2020 TCP connection established with [AF_INET]10.250.7.77:30278\nSat Jan 11 18:02:59 2020 10.250.7.77:30278 TCP connection established with [AF_INET]100.64.1.1:59510\nSat Jan 11 18:02:59 2020 10.250.7.77:30278 Connection reset, restarting [0]\nSat Jan 11 18:02:59 2020 100.64.1.1:59510 Connection reset, restarting [0]\nSat Jan 11 18:03:03 2020 TCP connection established with [AF_INET]10.250.7.77:14148\nSat Jan 11 18:03:03 2020 10.250.7.77:14148 TCP connection established with [AF_INET]100.64.1.1:55088\nSat Jan 11 18:03:03 2020 10.250.7.77:14148 Connection reset, restarting [0]\nSat Jan 11 18:03:03 2020 100.64.1.1:55088 Connection reset, restarting [0]\nSat Jan 11 18:03:09 2020 TCP connection established with [AF_INET]10.250.7.77:30290\nSat Jan 11 18:03:09 2020 10.250.7.77:30290 TCP connection established with [AF_INET]100.64.1.1:59522\nSat Jan 11 18:03:09 2020 10.250.7.77:30290 Connection reset, restarting [0]\nSat Jan 11 18:03:09 2020 100.64.1.1:59522 Connection reset, restarting [0]\nSat Jan 11 18:03:13 2020 TCP connection established with [AF_INET]10.250.7.77:14156\nSat Jan 11 18:03:13 2020 10.250.7.77:14156 TCP connection established with [AF_INET]100.64.1.1:55096\nSat Jan 11 18:03:13 2020 10.250.7.77:14156 Connection reset, restarting [0]\nSat Jan 11 18:03:13 2020 100.64.1.1:55096 Connection reset, restarting [0]\nSat Jan 11 18:03:19 2020 TCP connection established with [AF_INET]10.250.7.77:30302\nSat Jan 11 18:03:19 2020 10.250.7.77:30302 TCP connection established with [AF_INET]100.64.1.1:59534\nSat Jan 11 18:03:19 2020 10.250.7.77:30302 Connection reset, restarting [0]\nSat Jan 11 18:03:19 2020 100.64.1.1:59534 Connection reset, restarting [0]\nSat Jan 11 18:03:23 2020 TCP connection established with [AF_INET]10.250.7.77:14170\nSat Jan 11 18:03:23 2020 10.250.7.77:14170 TCP connection established with [AF_INET]100.64.1.1:55110\nSat Jan 11 18:03:23 2020 10.250.7.77:14170 Connection reset, restarting [0]\nSat Jan 11 18:03:23 2020 100.64.1.1:55110 Connection reset, restarting [0]\nSat Jan 11 18:03:29 2020 TCP connection established with [AF_INET]10.250.7.77:30306\nSat Jan 11 18:03:29 2020 10.250.7.77:30306 TCP connection established with [AF_INET]100.64.1.1:59538\nSat Jan 11 18:03:29 2020 
10.250.7.77:30306 Connection reset, restarting [0]\nSat Jan 11 18:03:29 2020 100.64.1.1:59538 Connection reset, restarting [0]\nSat Jan 11 18:03:33 2020 TCP connection established with [AF_INET]10.250.7.77:14176\nSat Jan 11 18:03:33 2020 10.250.7.77:14176 TCP connection established with [AF_INET]100.64.1.1:55116\nSat Jan 11 18:03:33 2020 10.250.7.77:14176 Connection reset, restarting [0]\nSat Jan 11 18:03:33 2020 100.64.1.1:55116 Connection reset, restarting [0]\nSat Jan 11 18:03:39 2020 TCP connection established with [AF_INET]10.250.7.77:30314\nSat Jan 11 18:03:39 2020 10.250.7.77:30314 TCP connection established with [AF_INET]100.64.1.1:59546\nSat Jan 11 18:03:39 2020 10.250.7.77:30314 Connection reset, restarting [0]\nSat Jan 11 18:03:39 2020 100.64.1.1:59546 Connection reset, restarting [0]\nSat Jan 11 18:03:43 2020 TCP connection established with [AF_INET]10.250.7.77:14184\nSat Jan 11 18:03:43 2020 10.250.7.77:14184 TCP connection established with [AF_INET]100.64.1.1:55124\nSat Jan 11 18:03:43 2020 10.250.7.77:14184 Connection reset, restarting [0]\nSat Jan 11 18:03:43 2020 100.64.1.1:55124 Connection reset, restarting [0]\nSat Jan 11 18:03:49 2020 TCP connection established with [AF_INET]10.250.7.77:30324\nSat Jan 11 18:03:49 2020 10.250.7.77:30324 TCP connection established with [AF_INET]100.64.1.1:59556\nSat Jan 11 18:03:49 2020 10.250.7.77:30324 Connection reset, restarting [0]\nSat Jan 11 18:03:49 2020 100.64.1.1:59556 Connection reset, restarting [0]\nSat Jan 11 18:03:53 2020 TCP connection established with [AF_INET]10.250.7.77:14192\nSat Jan 11 18:03:53 2020 10.250.7.77:14192 TCP connection established with [AF_INET]100.64.1.1:55132\nSat Jan 11 18:03:53 2020 10.250.7.77:14192 Connection reset, restarting [0]\nSat Jan 11 18:03:53 2020 100.64.1.1:55132 Connection reset, restarting [0]\nSat Jan 11 18:03:59 2020 TCP connection established with [AF_INET]10.250.7.77:30336\nSat Jan 11 18:03:59 2020 10.250.7.77:30336 TCP connection established with [AF_INET]100.64.1.1:59568\nSat Jan 11 18:03:59 2020 10.250.7.77:30336 Connection reset, restarting [0]\nSat Jan 11 18:03:59 2020 100.64.1.1:59568 Connection reset, restarting [0]\nSat Jan 11 18:04:03 2020 TCP connection established with [AF_INET]10.250.7.77:14206\nSat Jan 11 18:04:03 2020 10.250.7.77:14206 TCP connection established with [AF_INET]100.64.1.1:55146\nSat Jan 11 18:04:03 2020 10.250.7.77:14206 Connection reset, restarting [0]\nSat Jan 11 18:04:03 2020 100.64.1.1:55146 Connection reset, restarting [0]\nSat Jan 11 18:04:09 2020 TCP connection established with [AF_INET]10.250.7.77:30348\nSat Jan 11 18:04:09 2020 10.250.7.77:30348 TCP connection established with [AF_INET]100.64.1.1:59580\nSat Jan 11 18:04:09 2020 10.250.7.77:30348 Connection reset, restarting [0]\nSat Jan 11 18:04:09 2020 100.64.1.1:59580 Connection reset, restarting [0]\nSat Jan 11 18:04:13 2020 TCP connection established with [AF_INET]10.250.7.77:14214\nSat Jan 11 18:04:13 2020 10.250.7.77:14214 TCP connection established with [AF_INET]100.64.1.1:55154\nSat Jan 11 18:04:13 2020 10.250.7.77:14214 Connection reset, restarting [0]\nSat Jan 11 18:04:13 2020 100.64.1.1:55154 Connection reset, restarting [0]\nSat Jan 11 18:04:19 2020 TCP connection established with [AF_INET]10.250.7.77:30360\nSat Jan 11 18:04:19 2020 10.250.7.77:30360 TCP connection established with [AF_INET]100.64.1.1:59592\nSat Jan 11 18:04:19 2020 10.250.7.77:30360 Connection reset, restarting [0]\nSat Jan 11 18:04:19 2020 100.64.1.1:59592 Connection reset, restarting [0]\nSat Jan 11 18:04:23 2020 
TCP connection established with [AF_INET]10.250.7.77:14224\nSat Jan 11 18:04:23 2020 10.250.7.77:14224 TCP connection established with [AF_INET]100.64.1.1:55164\nSat Jan 11 18:04:23 2020 10.250.7.77:14224 Connection reset, restarting [0]\nSat Jan 11 18:04:23 2020 100.64.1.1:55164 Connection reset, restarting [0]\nSat Jan 11 18:04:29 2020 TCP connection established with [AF_INET]10.250.7.77:30364\nSat Jan 11 18:04:29 2020 10.250.7.77:30364 TCP connection established with [AF_INET]100.64.1.1:59596\nSat Jan 11 18:04:29 2020 10.250.7.77:30364 Connection reset, restarting [0]\nSat Jan 11 18:04:29 2020 100.64.1.1:59596 Connection reset, restarting [0]\nSat Jan 11 18:04:33 2020 TCP connection established with [AF_INET]10.250.7.77:14230\nSat Jan 11 18:04:33 2020 10.250.7.77:14230 TCP connection established with [AF_INET]100.64.1.1:55170\nSat Jan 11 18:04:33 2020 10.250.7.77:14230 Connection reset, restarting [0]\nSat Jan 11 18:04:33 2020 100.64.1.1:55170 Connection reset, restarting [0]\nSat Jan 11 18:04:39 2020 TCP connection established with [AF_INET]10.250.7.77:30372\nSat Jan 11 18:04:39 2020 10.250.7.77:30372 TCP connection established with [AF_INET]100.64.1.1:59604\nSat Jan 11 18:04:39 2020 10.250.7.77:30372 Connection reset, restarting [0]\nSat Jan 11 18:04:39 2020 100.64.1.1:59604 Connection reset, restarting [0]\nSat Jan 11 18:04:43 2020 TCP connection established with [AF_INET]10.250.7.77:14242\nSat Jan 11 18:04:43 2020 10.250.7.77:14242 TCP connection established with [AF_INET]100.64.1.1:55182\nSat Jan 11 18:04:43 2020 10.250.7.77:14242 Connection reset, restarting [0]\nSat Jan 11 18:04:43 2020 100.64.1.1:55182 Connection reset, restarting [0]\nSat Jan 11 18:04:49 2020 TCP connection established with [AF_INET]10.250.7.77:30378\nSat Jan 11 18:04:49 2020 10.250.7.77:30378 TCP connection established with [AF_INET]100.64.1.1:59610\nSat Jan 11 18:04:49 2020 10.250.7.77:30378 Connection reset, restarting [0]\nSat Jan 11 18:04:49 2020 100.64.1.1:59610 Connection reset, restarting [0]\nSat Jan 11 18:04:53 2020 TCP connection established with [AF_INET]10.250.7.77:14250\nSat Jan 11 18:04:53 2020 10.250.7.77:14250 TCP connection established with [AF_INET]100.64.1.1:55190\nSat Jan 11 18:04:53 2020 10.250.7.77:14250 Connection reset, restarting [0]\nSat Jan 11 18:04:53 2020 100.64.1.1:55190 Connection reset, restarting [0]\nSat Jan 11 18:04:59 2020 TCP connection established with [AF_INET]10.250.7.77:30396\nSat Jan 11 18:04:59 2020 10.250.7.77:30396 TCP connection established with [AF_INET]100.64.1.1:59628\nSat Jan 11 18:04:59 2020 10.250.7.77:30396 Connection reset, restarting [0]\nSat Jan 11 18:04:59 2020 100.64.1.1:59628 Connection reset, restarting [0]\nSat Jan 11 18:05:03 2020 TCP connection established with [AF_INET]10.250.7.77:14264\nSat Jan 11 18:05:03 2020 10.250.7.77:14264 TCP connection established with [AF_INET]100.64.1.1:55204\nSat Jan 11 18:05:03 2020 10.250.7.77:14264 Connection reset, restarting [0]\nSat Jan 11 18:05:03 2020 100.64.1.1:55204 Connection reset, restarting [0]\nSat Jan 11 18:05:09 2020 TCP connection established with [AF_INET]10.250.7.77:30406\nSat Jan 11 18:05:09 2020 10.250.7.77:30406 Connection reset, restarting [0]\nSat Jan 11 18:05:09 2020 TCP connection established with [AF_INET]100.64.1.1:59638\nSat Jan 11 18:05:09 2020 100.64.1.1:59638 Connection reset, restarting [0]\nSat Jan 11 18:05:13 2020 TCP connection established with [AF_INET]10.250.7.77:14272\nSat Jan 11 18:05:13 2020 10.250.7.77:14272 TCP connection established with [AF_INET]100.64.1.1:55212\nSat Jan 11 
18:05:13 2020 10.250.7.77:14272 Connection reset, restarting [0]\nSat Jan 11 18:05:13 2020 100.64.1.1:55212 Connection reset, restarting [0]\nSat Jan 11 18:05:19 2020 TCP connection established with [AF_INET]10.250.7.77:30418\nSat Jan 11 18:05:19 2020 10.250.7.77:30418 TCP connection established with [AF_INET]100.64.1.1:59650\nSat Jan 11 18:05:19 2020 10.250.7.77:30418 Connection reset, restarting [0]\nSat Jan 11 18:05:19 2020 100.64.1.1:59650 Connection reset, restarting [0]\nSat Jan 11 18:05:23 2020 TCP connection established with [AF_INET]10.250.7.77:14282\nSat Jan 11 18:05:23 2020 10.250.7.77:14282 TCP connection established with [AF_INET]100.64.1.1:55222\nSat Jan 11 18:05:23 2020 10.250.7.77:14282 Connection reset, restarting [0]\nSat Jan 11 18:05:23 2020 100.64.1.1:55222 Connection reset, restarting [0]\nSat Jan 11 18:05:29 2020 TCP connection established with [AF_INET]10.250.7.77:30422\nSat Jan 11 18:05:29 2020 10.250.7.77:30422 TCP connection established with [AF_INET]100.64.1.1:59654\nSat Jan 11 18:05:29 2020 10.250.7.77:30422 Connection reset, restarting [0]\nSat Jan 11 18:05:29 2020 100.64.1.1:59654 Connection reset, restarting [0]\nSat Jan 11 18:05:33 2020 TCP connection established with [AF_INET]10.250.7.77:14288\nSat Jan 11 18:05:33 2020 10.250.7.77:14288 TCP connection established with [AF_INET]100.64.1.1:55228\nSat Jan 11 18:05:33 2020 10.250.7.77:14288 Connection reset, restarting [0]\nSat Jan 11 18:05:33 2020 100.64.1.1:55228 Connection reset, restarting [0]\nSat Jan 11 18:05:39 2020 TCP connection established with [AF_INET]10.250.7.77:30430\nSat Jan 11 18:05:39 2020 10.250.7.77:30430 TCP connection established with [AF_INET]100.64.1.1:59662\nSat Jan 11 18:05:39 2020 10.250.7.77:30430 Connection reset, restarting [0]\nSat Jan 11 18:05:39 2020 100.64.1.1:59662 Connection reset, restarting [0]\nSat Jan 11 18:05:43 2020 TCP connection established with [AF_INET]10.250.7.77:14296\nSat Jan 11 18:05:43 2020 10.250.7.77:14296 TCP connection established with [AF_INET]100.64.1.1:55236\nSat Jan 11 18:05:43 2020 10.250.7.77:14296 Connection reset, restarting [0]\nSat Jan 11 18:05:43 2020 100.64.1.1:55236 Connection reset, restarting [0]\nSat Jan 11 18:05:49 2020 TCP connection established with [AF_INET]10.250.7.77:30436\nSat Jan 11 18:05:49 2020 10.250.7.77:30436 TCP connection established with [AF_INET]100.64.1.1:59668\nSat Jan 11 18:05:49 2020 10.250.7.77:30436 Connection reset, restarting [0]\nSat Jan 11 18:05:49 2020 100.64.1.1:59668 Connection reset, restarting [0]\nSat Jan 11 18:05:53 2020 TCP connection established with [AF_INET]10.250.7.77:14338\nSat Jan 11 18:05:53 2020 10.250.7.77:14338 TCP connection established with [AF_INET]100.64.1.1:55278\nSat Jan 11 18:05:53 2020 10.250.7.77:14338 Connection reset, restarting [0]\nSat Jan 11 18:05:53 2020 100.64.1.1:55278 Connection reset, restarting [0]\nSat Jan 11 18:05:59 2020 TCP connection established with [AF_INET]10.250.7.77:30450\nSat Jan 11 18:05:59 2020 10.250.7.77:30450 TCP connection established with [AF_INET]100.64.1.1:59682\nSat Jan 11 18:05:59 2020 10.250.7.77:30450 Connection reset, restarting [0]\nSat Jan 11 18:05:59 2020 100.64.1.1:59682 Connection reset, restarting [0]\nSat Jan 11 18:06:03 2020 TCP connection established with [AF_INET]10.250.7.77:14356\nSat Jan 11 18:06:03 2020 10.250.7.77:14356 Connection reset, restarting [0]\nSat Jan 11 18:06:03 2020 TCP connection established with [AF_INET]100.64.1.1:55296\nSat Jan 11 18:06:03 2020 100.64.1.1:55296 Connection reset, restarting [0]\nSat Jan 11 18:06:09 2020 TCP 
connection established with [AF_INET]10.250.7.77:30460\nSat Jan 11 18:06:09 2020 10.250.7.77:30460 TCP connection established with [AF_INET]100.64.1.1:59692\nSat Jan 11 18:06:09 2020 10.250.7.77:30460 Connection reset, restarting [0]\nSat Jan 11 18:06:09 2020 100.64.1.1:59692 Connection reset, restarting [0]\nSat Jan 11 18:06:13 2020 TCP connection established with [AF_INET]10.250.7.77:14366\nSat Jan 11 18:06:13 2020 10.250.7.77:14366 TCP connection established with [AF_INET]100.64.1.1:55306\nSat Jan 11 18:06:13 2020 10.250.7.77:14366 Connection reset, restarting [0]\nSat Jan 11 18:06:13 2020 100.64.1.1:55306 Connection reset, restarting [0]\nSat Jan 11 18:06:19 2020 TCP connection established with [AF_INET]10.250.7.77:30476\nSat Jan 11 18:06:19 2020 10.250.7.77:30476 TCP connection established with [AF_INET]100.64.1.1:59708\nSat Jan 11 18:06:19 2020 10.250.7.77:30476 Connection reset, restarting [0]\nSat Jan 11 18:06:19 2020 100.64.1.1:59708 Connection reset, restarting [0]\nSat Jan 11 18:06:23 2020 TCP connection established with [AF_INET]10.250.7.77:14376\nSat Jan 11 18:06:23 2020 10.250.7.77:14376 TCP connection established with [AF_INET]100.64.1.1:55316\nSat Jan 11 18:06:23 2020 10.250.7.77:14376 Connection reset, restarting [0]\nSat Jan 11 18:06:23 2020 100.64.1.1:55316 Connection reset, restarting [0]\nSat Jan 11 18:06:29 2020 TCP connection established with [AF_INET]10.250.7.77:30480\nSat Jan 11 18:06:29 2020 10.250.7.77:30480 TCP connection established with [AF_INET]100.64.1.1:59712\nSat Jan 11 18:06:29 2020 10.250.7.77:30480 Connection reset, restarting [0]\nSat Jan 11 18:06:29 2020 100.64.1.1:59712 Connection reset, restarting [0]\nSat Jan 11 18:06:33 2020 TCP connection established with [AF_INET]10.250.7.77:14382\nSat Jan 11 18:06:33 2020 10.250.7.77:14382 TCP connection established with [AF_INET]100.64.1.1:55322\nSat Jan 11 18:06:33 2020 10.250.7.77:14382 Connection reset, restarting [0]\nSat Jan 11 18:06:33 2020 100.64.1.1:55322 Connection reset, restarting [0]\nSat Jan 11 18:06:39 2020 TCP connection established with [AF_INET]10.250.7.77:30488\nSat Jan 11 18:06:39 2020 10.250.7.77:30488 TCP connection established with [AF_INET]100.64.1.1:59720\nSat Jan 11 18:06:39 2020 10.250.7.77:30488 Connection reset, restarting [0]\nSat Jan 11 18:06:39 2020 100.64.1.1:59720 Connection reset, restarting [0]\nSat Jan 11 18:06:43 2020 TCP connection established with [AF_INET]10.250.7.77:14392\nSat Jan 11 18:06:43 2020 10.250.7.77:14392 TCP connection established with [AF_INET]100.64.1.1:55332\nSat Jan 11 18:06:43 2020 10.250.7.77:14392 Connection reset, restarting [0]\nSat Jan 11 18:06:43 2020 100.64.1.1:55332 Connection reset, restarting [0]\nSat Jan 11 18:06:49 2020 TCP connection established with [AF_INET]10.250.7.77:30496\nSat Jan 11 18:06:49 2020 10.250.7.77:30496 TCP connection established with [AF_INET]100.64.1.1:59728\nSat Jan 11 18:06:49 2020 10.250.7.77:30496 Connection reset, restarting [0]\nSat Jan 11 18:06:49 2020 100.64.1.1:59728 Connection reset, restarting [0]\nSat Jan 11 18:06:53 2020 TCP connection established with [AF_INET]10.250.7.77:14398\nSat Jan 11 18:06:53 2020 10.250.7.77:14398 TCP connection established with [AF_INET]100.64.1.1:55338\nSat Jan 11 18:06:53 2020 10.250.7.77:14398 Connection reset, restarting [0]\nSat Jan 11 18:06:53 2020 100.64.1.1:55338 Connection reset, restarting [0]\nSat Jan 11 18:06:59 2020 TCP connection established with [AF_INET]10.250.7.77:30508\nSat Jan 11 18:06:59 2020 10.250.7.77:30508 TCP connection established with 
[AF_INET]100.64.1.1:59740\nSat Jan 11 18:06:59 2020 10.250.7.77:30508 Connection reset, restarting [0]\nSat Jan 11 18:06:59 2020 100.64.1.1:59740 Connection reset, restarting [0]\nSat Jan 11 18:07:03 2020 TCP connection established with [AF_INET]10.250.7.77:14418\nSat Jan 11 18:07:03 2020 10.250.7.77:14418 TCP connection established with [AF_INET]100.64.1.1:55358\nSat Jan 11 18:07:03 2020 10.250.7.77:14418 Connection reset, restarting [0]\nSat Jan 11 18:07:03 2020 100.64.1.1:55358 Connection reset, restarting [0]\nSat Jan 11 18:07:09 2020 TCP connection established with [AF_INET]10.250.7.77:30518\nSat Jan 11 18:07:09 2020 10.250.7.77:30518 TCP connection established with [AF_INET]100.64.1.1:59750\nSat Jan 11 18:07:09 2020 10.250.7.77:30518 Connection reset, restarting [0]\nSat Jan 11 18:07:09 2020 100.64.1.1:59750 Connection reset, restarting [0]\nSat Jan 11 18:07:13 2020 TCP connection established with [AF_INET]10.250.7.77:14438\nSat Jan 11 18:07:13 2020 10.250.7.77:14438 TCP connection established with [AF_INET]100.64.1.1:55378\nSat Jan 11 18:07:13 2020 10.250.7.77:14438 Connection reset, restarting [0]\nSat Jan 11 18:07:13 2020 100.64.1.1:55378 Connection reset, restarting [0]\nSat Jan 11 18:07:19 2020 TCP connection established with [AF_INET]10.250.7.77:30530\nSat Jan 11 18:07:19 2020 10.250.7.77:30530 TCP connection established with [AF_INET]100.64.1.1:59762\nSat Jan 11 18:07:19 2020 10.250.7.77:30530 Connection reset, restarting [0]\nSat Jan 11 18:07:19 2020 100.64.1.1:59762 Connection reset, restarting [0]\nSat Jan 11 18:07:23 2020 TCP connection established with [AF_INET]10.250.7.77:14448\nSat Jan 11 18:07:23 2020 10.250.7.77:14448 TCP connection established with [AF_INET]100.64.1.1:55388\nSat Jan 11 18:07:23 2020 10.250.7.77:14448 Connection reset, restarting [0]\nSat Jan 11 18:07:23 2020 100.64.1.1:55388 Connection reset, restarting [0]\nSat Jan 11 18:07:29 2020 TCP connection established with [AF_INET]10.250.7.77:30538\nSat Jan 11 18:07:29 2020 10.250.7.77:30538 TCP connection established with [AF_INET]100.64.1.1:59770\nSat Jan 11 18:07:29 2020 10.250.7.77:30538 Connection reset, restarting [0]\nSat Jan 11 18:07:29 2020 100.64.1.1:59770 Connection reset, restarting [0]\nSat Jan 11 18:07:33 2020 TCP connection established with [AF_INET]10.250.7.77:14454\nSat Jan 11 18:07:33 2020 10.250.7.77:14454 TCP connection established with [AF_INET]100.64.1.1:55394\nSat Jan 11 18:07:33 2020 10.250.7.77:14454 Connection reset, restarting [0]\nSat Jan 11 18:07:33 2020 100.64.1.1:55394 Connection reset, restarting [0]\nSat Jan 11 18:07:39 2020 TCP connection established with [AF_INET]10.250.7.77:30546\nSat Jan 11 18:07:39 2020 10.250.7.77:30546 TCP connection established with [AF_INET]100.64.1.1:59778\nSat Jan 11 18:07:39 2020 10.250.7.77:30546 Connection reset, restarting [0]\nSat Jan 11 18:07:39 2020 100.64.1.1:59778 Connection reset, restarting [0]\nSat Jan 11 18:07:43 2020 TCP connection established with [AF_INET]10.250.7.77:14464\nSat Jan 11 18:07:43 2020 10.250.7.77:14464 TCP connection established with [AF_INET]100.64.1.1:55404\nSat Jan 11 18:07:43 2020 10.250.7.77:14464 Connection reset, restarting [0]\nSat Jan 11 18:07:43 2020 100.64.1.1:55404 Connection reset, restarting [0]\nSat Jan 11 18:07:49 2020 TCP connection established with [AF_INET]10.250.7.77:30554\nSat Jan 11 18:07:49 2020 10.250.7.77:30554 TCP connection established with [AF_INET]100.64.1.1:59786\nSat Jan 11 18:07:49 2020 10.250.7.77:30554 Connection reset, restarting [0]\nSat Jan 11 18:07:49 2020 100.64.1.1:59786 
Connection reset, restarting [0]\nSat Jan 11 18:07:53 2020 TCP connection established with [AF_INET]10.250.7.77:14470\nSat Jan 11 18:07:53 2020 10.250.7.77:14470 TCP connection established with [AF_INET]100.64.1.1:55410\nSat Jan 11 18:07:53 2020 10.250.7.77:14470 Connection reset, restarting [0]\nSat Jan 11 18:07:53 2020 100.64.1.1:55410 Connection reset, restarting [0]\nSat Jan 11 18:07:59 2020 TCP connection established with [AF_INET]10.250.7.77:30566\nSat Jan 11 18:07:59 2020 10.250.7.77:30566 TCP connection established with [AF_INET]100.64.1.1:59798\nSat Jan 11 18:07:59 2020 10.250.7.77:30566 Connection reset, restarting [0]\nSat Jan 11 18:07:59 2020 100.64.1.1:59798 Connection reset, restarting [0]\nSat Jan 11 18:08:03 2020 TCP connection established with [AF_INET]10.250.7.77:14484\nSat Jan 11 18:08:03 2020 10.250.7.77:14484 TCP connection established with [AF_INET]100.64.1.1:55424\nSat Jan 11 18:08:03 2020 10.250.7.77:14484 Connection reset, restarting [0]\nSat Jan 11 18:08:03 2020 100.64.1.1:55424 Connection reset, restarting [0]\nSat Jan 11 18:08:09 2020 TCP connection established with [AF_INET]100.64.1.1:59808\nSat Jan 11 18:08:09 2020 100.64.1.1:59808 Connection reset, restarting [0]\nSat Jan 11 18:08:09 2020 TCP connection established with [AF_INET]10.250.7.77:30576\nSat Jan 11 18:08:09 2020 10.250.7.77:30576 Connection reset, restarting [0]\nSat Jan 11 18:08:13 2020 TCP connection established with [AF_INET]10.250.7.77:14492\nSat Jan 11 18:08:13 2020 10.250.7.77:14492 TCP connection established with [AF_INET]100.64.1.1:55432\nSat Jan 11 18:08:13 2020 10.250.7.77:14492 Connection reset, restarting [0]\nSat Jan 11 18:08:13 2020 100.64.1.1:55432 Connection reset, restarting [0]\nSat Jan 11 18:08:19 2020 TCP connection established with [AF_INET]10.250.7.77:30588\nSat Jan 11 18:08:19 2020 10.250.7.77:30588 TCP connection established with [AF_INET]100.64.1.1:59820\nSat Jan 11 18:08:19 2020 10.250.7.77:30588 Connection reset, restarting [0]\nSat Jan 11 18:08:19 2020 100.64.1.1:59820 Connection reset, restarting [0]\nSat Jan 11 18:08:23 2020 TCP connection established with [AF_INET]10.250.7.77:14506\nSat Jan 11 18:08:23 2020 10.250.7.77:14506 TCP connection established with [AF_INET]100.64.1.1:55446\nSat Jan 11 18:08:23 2020 10.250.7.77:14506 Connection reset, restarting [0]\nSat Jan 11 18:08:23 2020 100.64.1.1:55446 Connection reset, restarting [0]\nSat Jan 11 18:08:29 2020 TCP connection established with [AF_INET]10.250.7.77:30592\nSat Jan 11 18:08:29 2020 10.250.7.77:30592 TCP connection established with [AF_INET]100.64.1.1:59824\nSat Jan 11 18:08:29 2020 10.250.7.77:30592 Connection reset, restarting [0]\nSat Jan 11 18:08:29 2020 100.64.1.1:59824 Connection reset, restarting [0]\nSat Jan 11 18:08:33 2020 TCP connection established with [AF_INET]10.250.7.77:14514\nSat Jan 11 18:08:33 2020 10.250.7.77:14514 TCP connection established with [AF_INET]100.64.1.1:55454\nSat Jan 11 18:08:33 2020 10.250.7.77:14514 Connection reset, restarting [0]\nSat Jan 11 18:08:33 2020 100.64.1.1:55454 Connection reset, restarting [0]\nSat Jan 11 18:08:39 2020 TCP connection established with [AF_INET]10.250.7.77:30600\nSat Jan 11 18:08:39 2020 10.250.7.77:30600 TCP connection established with [AF_INET]100.64.1.1:59832\nSat Jan 11 18:08:39 2020 10.250.7.77:30600 Connection reset, restarting [0]\nSat Jan 11 18:08:39 2020 100.64.1.1:59832 Connection reset, restarting [0]\nSat Jan 11 18:08:43 2020 TCP connection established with [AF_INET]10.250.7.77:14522\nSat Jan 11 18:08:43 2020 10.250.7.77:14522 TCP 
connection established with [AF_INET]100.64.1.1:55462\nSat Jan 11 18:08:43 2020 10.250.7.77:14522 Connection reset, restarting [0]\nSat Jan 11 18:08:43 2020 100.64.1.1:55462 Connection reset, restarting [0]\nSat Jan 11 18:08:49 2020 TCP connection established with [AF_INET]10.250.7.77:30646\nSat Jan 11 18:08:49 2020 10.250.7.77:30646 TCP connection established with [AF_INET]100.64.1.1:59878\nSat Jan 11 18:08:49 2020 10.250.7.77:30646 Connection reset, restarting [0]\nSat Jan 11 18:08:49 2020 100.64.1.1:59878 Connection reset, restarting [0]\nSat Jan 11 18:08:53 2020 TCP connection established with [AF_INET]10.250.7.77:14528\nSat Jan 11 18:08:53 2020 10.250.7.77:14528 TCP connection established with [AF_INET]100.64.1.1:55468\nSat Jan 11 18:08:53 2020 10.250.7.77:14528 Connection reset, restarting [0]\nSat Jan 11 18:08:53 2020 100.64.1.1:55468 Connection reset, restarting [0]\nSat Jan 11 18:08:59 2020 TCP connection established with [AF_INET]10.250.7.77:30658\nSat Jan 11 18:08:59 2020 10.250.7.77:30658 TCP connection established with [AF_INET]100.64.1.1:59890\nSat Jan 11 18:08:59 2020 10.250.7.77:30658 Connection reset, restarting [0]\nSat Jan 11 18:08:59 2020 100.64.1.1:59890 Connection reset, restarting [0]\nSat Jan 11 18:09:03 2020 TCP connection established with [AF_INET]10.250.7.77:14542\nSat Jan 11 18:09:03 2020 10.250.7.77:14542 TCP connection established with [AF_INET]100.64.1.1:55482\nSat Jan 11 18:09:03 2020 10.250.7.77:14542 Connection reset, restarting [0]\nSat Jan 11 18:09:03 2020 100.64.1.1:55482 Connection reset, restarting [0]\nSat Jan 11 18:09:09 2020 TCP connection established with [AF_INET]10.250.7.77:30670\nSat Jan 11 18:09:09 2020 10.250.7.77:30670 TCP connection established with [AF_INET]100.64.1.1:59902\nSat Jan 11 18:09:09 2020 10.250.7.77:30670 Connection reset, restarting [0]\nSat Jan 11 18:09:09 2020 100.64.1.1:59902 Connection reset, restarting [0]\nSat Jan 11 18:09:13 2020 TCP connection established with [AF_INET]100.64.1.1:55490\nSat Jan 11 18:09:13 2020 100.64.1.1:55490 Connection reset, restarting [0]\nSat Jan 11 18:09:13 2020 TCP connection established with [AF_INET]10.250.7.77:14550\nSat Jan 11 18:09:13 2020 10.250.7.77:14550 Connection reset, restarting [0]\nSat Jan 11 18:09:19 2020 TCP connection established with [AF_INET]10.250.7.77:30682\nSat Jan 11 18:09:19 2020 10.250.7.77:30682 TCP connection established with [AF_INET]100.64.1.1:59914\nSat Jan 11 18:09:19 2020 10.250.7.77:30682 Connection reset, restarting [0]\nSat Jan 11 18:09:19 2020 100.64.1.1:59914 Connection reset, restarting [0]\nSat Jan 11 18:09:23 2020 TCP connection established with [AF_INET]10.250.7.77:14560\nSat Jan 11 18:09:23 2020 10.250.7.77:14560 TCP connection established with [AF_INET]100.64.1.1:55500\nSat Jan 11 18:09:23 2020 10.250.7.77:14560 Connection reset, restarting [0]\nSat Jan 11 18:09:23 2020 100.64.1.1:55500 Connection reset, restarting [0]\nSat Jan 11 18:09:29 2020 TCP connection established with [AF_INET]10.250.7.77:30686\nSat Jan 11 18:09:29 2020 10.250.7.77:30686 TCP connection established with [AF_INET]100.64.1.1:59918\nSat Jan 11 18:09:29 2020 10.250.7.77:30686 Connection reset, restarting [0]\nSat Jan 11 18:09:29 2020 100.64.1.1:59918 Connection reset, restarting [0]\nSat Jan 11 18:09:33 2020 TCP connection established with [AF_INET]10.250.7.77:14568\nSat Jan 11 18:09:33 2020 10.250.7.77:14568 TCP connection established with [AF_INET]100.64.1.1:55508\nSat Jan 11 18:09:33 2020 10.250.7.77:14568 Connection reset, restarting [0]\nSat Jan 11 18:09:33 2020 
100.64.1.1:55508 Connection reset, restarting [0]\nSat Jan 11 18:09:39 2020 TCP connection established with [AF_INET]10.250.7.77:30696\nSat Jan 11 18:09:39 2020 10.250.7.77:30696 TCP connection established with [AF_INET]100.64.1.1:59928\nSat Jan 11 18:09:39 2020 10.250.7.77:30696 Connection reset, restarting [0]\nSat Jan 11 18:09:39 2020 100.64.1.1:59928 Connection reset, restarting [0]\nSat Jan 11 18:09:43 2020 TCP connection established with [AF_INET]10.250.7.77:14580\nSat Jan 11 18:09:43 2020 10.250.7.77:14580 TCP connection established with [AF_INET]100.64.1.1:55520\nSat Jan 11 18:09:43 2020 10.250.7.77:14580 Connection reset, restarting [0]\nSat Jan 11 18:09:43 2020 100.64.1.1:55520 Connection reset, restarting [0]\nSat Jan 11 18:09:49 2020 TCP connection established with [AF_INET]10.250.7.77:30702\nSat Jan 11 18:09:49 2020 10.250.7.77:30702 TCP connection established with [AF_INET]100.64.1.1:59934\nSat Jan 11 18:09:49 2020 10.250.7.77:30702 Connection reset, restarting [0]\nSat Jan 11 18:09:49 2020 100.64.1.1:59934 Connection reset, restarting [0]\nSat Jan 11 18:09:53 2020 TCP connection established with [AF_INET]10.250.7.77:14586\nSat Jan 11 18:09:53 2020 10.250.7.77:14586 TCP connection established with [AF_INET]100.64.1.1:55526\nSat Jan 11 18:09:53 2020 10.250.7.77:14586 Connection reset, restarting [0]\nSat Jan 11 18:09:53 2020 100.64.1.1:55526 Connection reset, restarting [0]\nSat Jan 11 18:09:59 2020 TCP connection established with [AF_INET]10.250.7.77:30724\nSat Jan 11 18:09:59 2020 10.250.7.77:30724 TCP connection established with [AF_INET]100.64.1.1:59956\nSat Jan 11 18:09:59 2020 10.250.7.77:30724 Connection reset, restarting [0]\nSat Jan 11 18:09:59 2020 100.64.1.1:59956 Connection reset, restarting [0]\nSat Jan 11 18:10:03 2020 TCP connection established with [AF_INET]10.250.7.77:14600\nSat Jan 11 18:10:03 2020 10.250.7.77:14600 TCP connection established with [AF_INET]100.64.1.1:55540\nSat Jan 11 18:10:03 2020 10.250.7.77:14600 Connection reset, restarting [0]\nSat Jan 11 18:10:03 2020 100.64.1.1:55540 Connection reset, restarting [0]\nSat Jan 11 18:10:09 2020 TCP connection established with [AF_INET]10.250.7.77:30734\nSat Jan 11 18:10:09 2020 10.250.7.77:30734 TCP connection established with [AF_INET]100.64.1.1:59966\nSat Jan 11 18:10:09 2020 10.250.7.77:30734 Connection reset, restarting [0]\nSat Jan 11 18:10:09 2020 100.64.1.1:59966 Connection reset, restarting [0]\nSat Jan 11 18:10:13 2020 TCP connection established with [AF_INET]10.250.7.77:14608\nSat Jan 11 18:10:13 2020 10.250.7.77:14608 TCP connection established with [AF_INET]100.64.1.1:55548\nSat Jan 11 18:10:13 2020 10.250.7.77:14608 Connection reset, restarting [0]\nSat Jan 11 18:10:13 2020 100.64.1.1:55548 Connection reset, restarting [0]\nSat Jan 11 18:10:19 2020 TCP connection established with [AF_INET]10.250.7.77:30752\nSat Jan 11 18:10:19 2020 10.250.7.77:30752 TCP connection established with [AF_INET]100.64.1.1:59984\nSat Jan 11 18:10:19 2020 10.250.7.77:30752 Connection reset, restarting [0]\nSat Jan 11 18:10:19 2020 100.64.1.1:59984 Connection reset, restarting [0]\nSat Jan 11 18:10:23 2020 TCP connection established with [AF_INET]10.250.7.77:14618\nSat Jan 11 18:10:23 2020 10.250.7.77:14618 TCP connection established with [AF_INET]100.64.1.1:55558\nSat Jan 11 18:10:23 2020 10.250.7.77:14618 Connection reset, restarting [0]\nSat Jan 11 18:10:23 2020 100.64.1.1:55558 Connection reset, restarting [0]\nSat Jan 11 18:10:29 2020 TCP connection established with [AF_INET]10.250.7.77:30756\nSat Jan 11 
18:10:29 2020 10.250.7.77:30756 TCP connection established with [AF_INET]100.64.1.1:59988\nSat Jan 11 18:10:29 2020 10.250.7.77:30756 Connection reset, restarting [0]\nSat Jan 11 18:10:29 2020 100.64.1.1:59988 Connection reset, restarting [0]\n[... openvpn log truncated: the same establish/reset cycle repeats every few seconds from Sat Jan 11 18:10:33 2020 through Sat Jan 11 18:28:39 2020 -- a TCP connection is established with [AF_INET]10.250.7.77:<port> and with [AF_INET]100.64.1.1:<port>, and both endpoints immediately log "Connection reset, restarting [0]"; only the timestamps and ephemeral ports differ between cycles ...]\nSat Jan 11 18:28:43 2020 TCP connection established with [AF_INET]10.250.7.77:15802\nSat Jan 11 18:28:43 2020 10.250.7.77:15802 TCP connection established with [AF_INET]100.64.1.1:56742\nSat Jan 11 18:28:43 2020 10.250.7.77:15802 Connection reset, restarting [0]\nSat Jan 11 18:28:43 2020 100.64.1.1:56742 
Connection reset, restarting [0]\nSat Jan 11 18:28:49 2020 TCP connection established with [AF_INET]10.250.7.77:31940\nSat Jan 11 18:28:49 2020 10.250.7.77:31940 TCP connection established with [AF_INET]100.64.1.1:61172\nSat Jan 11 18:28:49 2020 10.250.7.77:31940 Connection reset, restarting [0]\nSat Jan 11 18:28:49 2020 100.64.1.1:61172 Connection reset, restarting [0]\nSat Jan 11 18:28:53 2020 TCP connection established with [AF_INET]10.250.7.77:15808\nSat Jan 11 18:28:53 2020 10.250.7.77:15808 TCP connection established with [AF_INET]100.64.1.1:56748\nSat Jan 11 18:28:53 2020 10.250.7.77:15808 Connection reset, restarting [0]\nSat Jan 11 18:28:53 2020 100.64.1.1:56748 Connection reset, restarting [0]\nSat Jan 11 18:28:59 2020 TCP connection established with [AF_INET]10.250.7.77:31952\nSat Jan 11 18:28:59 2020 10.250.7.77:31952 TCP connection established with [AF_INET]100.64.1.1:61184\nSat Jan 11 18:28:59 2020 10.250.7.77:31952 Connection reset, restarting [0]\nSat Jan 11 18:28:59 2020 100.64.1.1:61184 Connection reset, restarting [0]\nSat Jan 11 18:29:03 2020 TCP connection established with [AF_INET]10.250.7.77:15826\nSat Jan 11 18:29:03 2020 10.250.7.77:15826 TCP connection established with [AF_INET]100.64.1.1:56766\nSat Jan 11 18:29:03 2020 10.250.7.77:15826 Connection reset, restarting [0]\nSat Jan 11 18:29:03 2020 100.64.1.1:56766 Connection reset, restarting [0]\nSat Jan 11 18:29:09 2020 TCP connection established with [AF_INET]100.64.1.1:61200\nSat Jan 11 18:29:09 2020 100.64.1.1:61200 TCP connection established with [AF_INET]10.250.7.77:31968\nSat Jan 11 18:29:09 2020 100.64.1.1:61200 Connection reset, restarting [0]\nSat Jan 11 18:29:09 2020 10.250.7.77:31968 Connection reset, restarting [0]\nSat Jan 11 18:29:13 2020 TCP connection established with [AF_INET]10.250.7.77:15836\nSat Jan 11 18:29:13 2020 10.250.7.77:15836 TCP connection established with [AF_INET]100.64.1.1:56776\nSat Jan 11 18:29:13 2020 10.250.7.77:15836 Connection reset, restarting [0]\nSat Jan 11 18:29:13 2020 100.64.1.1:56776 Connection reset, restarting [0]\nSat Jan 11 18:29:19 2020 TCP connection established with [AF_INET]100.64.1.1:61220\nSat Jan 11 18:29:19 2020 100.64.1.1:61220 TCP connection established with [AF_INET]10.250.7.77:31988\nSat Jan 11 18:29:19 2020 100.64.1.1:61220 Connection reset, restarting [0]\nSat Jan 11 18:29:19 2020 10.250.7.77:31988 Connection reset, restarting [0]\nSat Jan 11 18:29:23 2020 TCP connection established with [AF_INET]10.250.7.77:15846\nSat Jan 11 18:29:23 2020 10.250.7.77:15846 TCP connection established with [AF_INET]100.64.1.1:56786\nSat Jan 11 18:29:23 2020 10.250.7.77:15846 Connection reset, restarting [0]\nSat Jan 11 18:29:23 2020 100.64.1.1:56786 Connection reset, restarting [0]\nSat Jan 11 18:29:29 2020 TCP connection established with [AF_INET]10.250.7.77:31992\nSat Jan 11 18:29:29 2020 10.250.7.77:31992 TCP connection established with [AF_INET]100.64.1.1:61224\nSat Jan 11 18:29:29 2020 10.250.7.77:31992 Connection reset, restarting [0]\nSat Jan 11 18:29:29 2020 100.64.1.1:61224 Connection reset, restarting [0]\nSat Jan 11 18:29:33 2020 TCP connection established with [AF_INET]10.250.7.77:15852\nSat Jan 11 18:29:33 2020 10.250.7.77:15852 TCP connection established with [AF_INET]100.64.1.1:56792\nSat Jan 11 18:29:33 2020 10.250.7.77:15852 Connection reset, restarting [0]\nSat Jan 11 18:29:33 2020 100.64.1.1:56792 Connection reset, restarting [0]\nSat Jan 11 18:29:39 2020 TCP connection established with [AF_INET]10.250.7.77:32000\nSat Jan 11 18:29:39 2020 
10.250.7.77:32000 TCP connection established with [AF_INET]100.64.1.1:61232\nSat Jan 11 18:29:39 2020 10.250.7.77:32000 Connection reset, restarting [0]\nSat Jan 11 18:29:39 2020 100.64.1.1:61232 Connection reset, restarting [0]\nSat Jan 11 18:29:43 2020 TCP connection established with [AF_INET]10.250.7.77:15864\nSat Jan 11 18:29:43 2020 10.250.7.77:15864 TCP connection established with [AF_INET]100.64.1.1:56804\nSat Jan 11 18:29:43 2020 10.250.7.77:15864 Connection reset, restarting [0]\nSat Jan 11 18:29:43 2020 100.64.1.1:56804 Connection reset, restarting [0]\nSat Jan 11 18:29:49 2020 TCP connection established with [AF_INET]10.250.7.77:32006\nSat Jan 11 18:29:49 2020 10.250.7.77:32006 TCP connection established with [AF_INET]100.64.1.1:61238\nSat Jan 11 18:29:49 2020 10.250.7.77:32006 Connection reset, restarting [0]\nSat Jan 11 18:29:49 2020 100.64.1.1:61238 Connection reset, restarting [0]\nSat Jan 11 18:29:53 2020 TCP connection established with [AF_INET]10.250.7.77:15870\nSat Jan 11 18:29:53 2020 10.250.7.77:15870 TCP connection established with [AF_INET]100.64.1.1:56810\nSat Jan 11 18:29:53 2020 10.250.7.77:15870 Connection reset, restarting [0]\nSat Jan 11 18:29:53 2020 100.64.1.1:56810 Connection reset, restarting [0]\nSat Jan 11 18:29:59 2020 TCP connection established with [AF_INET]10.250.7.77:32022\nSat Jan 11 18:29:59 2020 10.250.7.77:32022 TCP connection established with [AF_INET]100.64.1.1:61254\nSat Jan 11 18:29:59 2020 10.250.7.77:32022 Connection reset, restarting [0]\nSat Jan 11 18:29:59 2020 100.64.1.1:61254 Connection reset, restarting [0]\nSat Jan 11 18:30:03 2020 TCP connection established with [AF_INET]10.250.7.77:15886\nSat Jan 11 18:30:03 2020 10.250.7.77:15886 TCP connection established with [AF_INET]100.64.1.1:56826\nSat Jan 11 18:30:03 2020 10.250.7.77:15886 Connection reset, restarting [0]\nSat Jan 11 18:30:03 2020 100.64.1.1:56826 Connection reset, restarting [0]\nSat Jan 11 18:30:09 2020 TCP connection established with [AF_INET]10.250.7.77:32034\nSat Jan 11 18:30:09 2020 10.250.7.77:32034 TCP connection established with [AF_INET]100.64.1.1:61266\nSat Jan 11 18:30:09 2020 10.250.7.77:32034 Connection reset, restarting [0]\nSat Jan 11 18:30:09 2020 100.64.1.1:61266 Connection reset, restarting [0]\nSat Jan 11 18:30:13 2020 TCP connection established with [AF_INET]10.250.7.77:15894\nSat Jan 11 18:30:13 2020 10.250.7.77:15894 TCP connection established with [AF_INET]100.64.1.1:56834\nSat Jan 11 18:30:13 2020 10.250.7.77:15894 Connection reset, restarting [0]\nSat Jan 11 18:30:13 2020 100.64.1.1:56834 Connection reset, restarting [0]\nSat Jan 11 18:30:19 2020 TCP connection established with [AF_INET]10.250.7.77:32046\nSat Jan 11 18:30:19 2020 10.250.7.77:32046 TCP connection established with [AF_INET]100.64.1.1:61278\nSat Jan 11 18:30:19 2020 10.250.7.77:32046 Connection reset, restarting [0]\nSat Jan 11 18:30:19 2020 100.64.1.1:61278 Connection reset, restarting [0]\nSat Jan 11 18:30:23 2020 TCP connection established with [AF_INET]10.250.7.77:15904\nSat Jan 11 18:30:23 2020 10.250.7.77:15904 TCP connection established with [AF_INET]100.64.1.1:56844\nSat Jan 11 18:30:23 2020 10.250.7.77:15904 Connection reset, restarting [0]\nSat Jan 11 18:30:23 2020 100.64.1.1:56844 Connection reset, restarting [0]\nSat Jan 11 18:30:29 2020 TCP connection established with [AF_INET]10.250.7.77:32050\nSat Jan 11 18:30:29 2020 10.250.7.77:32050 TCP connection established with [AF_INET]100.64.1.1:61282\nSat Jan 11 18:30:29 2020 10.250.7.77:32050 Connection reset, restarting 
[0]\nSat Jan 11 18:30:29 2020 100.64.1.1:61282 Connection reset, restarting [0]\nSat Jan 11 18:30:33 2020 TCP connection established with [AF_INET]10.250.7.77:15912\nSat Jan 11 18:30:33 2020 10.250.7.77:15912 TCP connection established with [AF_INET]100.64.1.1:56852\nSat Jan 11 18:30:33 2020 10.250.7.77:15912 Connection reset, restarting [0]\nSat Jan 11 18:30:33 2020 100.64.1.1:56852 Connection reset, restarting [0]\nSat Jan 11 18:30:39 2020 TCP connection established with [AF_INET]10.250.7.77:32058\nSat Jan 11 18:30:39 2020 10.250.7.77:32058 TCP connection established with [AF_INET]100.64.1.1:61290\nSat Jan 11 18:30:39 2020 10.250.7.77:32058 Connection reset, restarting [0]\nSat Jan 11 18:30:39 2020 100.64.1.1:61290 Connection reset, restarting [0]\nSat Jan 11 18:30:43 2020 TCP connection established with [AF_INET]10.250.7.77:15920\nSat Jan 11 18:30:43 2020 10.250.7.77:15920 TCP connection established with [AF_INET]100.64.1.1:56860\nSat Jan 11 18:30:43 2020 10.250.7.77:15920 Connection reset, restarting [0]\nSat Jan 11 18:30:43 2020 100.64.1.1:56860 Connection reset, restarting [0]\nSat Jan 11 18:30:49 2020 TCP connection established with [AF_INET]10.250.7.77:32064\nSat Jan 11 18:30:49 2020 10.250.7.77:32064 TCP connection established with [AF_INET]100.64.1.1:61296\nSat Jan 11 18:30:49 2020 10.250.7.77:32064 Connection reset, restarting [0]\nSat Jan 11 18:30:49 2020 100.64.1.1:61296 Connection reset, restarting [0]\nSat Jan 11 18:30:53 2020 TCP connection established with [AF_INET]10.250.7.77:15930\nSat Jan 11 18:30:53 2020 10.250.7.77:15930 TCP connection established with [AF_INET]100.64.1.1:56870\nSat Jan 11 18:30:53 2020 10.250.7.77:15930 Connection reset, restarting [0]\nSat Jan 11 18:30:53 2020 100.64.1.1:56870 Connection reset, restarting [0]\nSat Jan 11 18:30:59 2020 TCP connection established with [AF_INET]10.250.7.77:32076\nSat Jan 11 18:30:59 2020 10.250.7.77:32076 TCP connection established with [AF_INET]100.64.1.1:61308\nSat Jan 11 18:30:59 2020 10.250.7.77:32076 Connection reset, restarting [0]\nSat Jan 11 18:30:59 2020 100.64.1.1:61308 Connection reset, restarting [0]\nSat Jan 11 18:31:03 2020 TCP connection established with [AF_INET]100.64.1.1:56886\nSat Jan 11 18:31:03 2020 100.64.1.1:56886 TCP connection established with [AF_INET]10.250.7.77:15946\nSat Jan 11 18:31:03 2020 100.64.1.1:56886 Connection reset, restarting [0]\nSat Jan 11 18:31:03 2020 10.250.7.77:15946 Connection reset, restarting [0]\nSat Jan 11 18:31:09 2020 TCP connection established with [AF_INET]10.250.7.77:32088\nSat Jan 11 18:31:09 2020 10.250.7.77:32088 TCP connection established with [AF_INET]100.64.1.1:61320\nSat Jan 11 18:31:09 2020 10.250.7.77:32088 Connection reset, restarting [0]\nSat Jan 11 18:31:09 2020 100.64.1.1:61320 Connection reset, restarting [0]\nSat Jan 11 18:31:13 2020 TCP connection established with [AF_INET]10.250.7.77:15954\nSat Jan 11 18:31:13 2020 10.250.7.77:15954 TCP connection established with [AF_INET]100.64.1.1:56894\nSat Jan 11 18:31:13 2020 10.250.7.77:15954 Connection reset, restarting [0]\nSat Jan 11 18:31:13 2020 100.64.1.1:56894 Connection reset, restarting [0]\nSat Jan 11 18:31:19 2020 TCP connection established with [AF_INET]10.250.7.77:32104\nSat Jan 11 18:31:19 2020 10.250.7.77:32104 TCP connection established with [AF_INET]100.64.1.1:61336\nSat Jan 11 18:31:19 2020 10.250.7.77:32104 Connection reset, restarting [0]\nSat Jan 11 18:31:19 2020 100.64.1.1:61336 Connection reset, restarting [0]\nSat Jan 11 18:31:23 2020 TCP connection established with 
[AF_INET]10.250.7.77:15964\nSat Jan 11 18:31:23 2020 10.250.7.77:15964 TCP connection established with [AF_INET]100.64.1.1:56904\nSat Jan 11 18:31:23 2020 10.250.7.77:15964 Connection reset, restarting [0]\nSat Jan 11 18:31:23 2020 100.64.1.1:56904 Connection reset, restarting [0]\nSat Jan 11 18:31:29 2020 TCP connection established with [AF_INET]10.250.7.77:32108\nSat Jan 11 18:31:29 2020 10.250.7.77:32108 TCP connection established with [AF_INET]100.64.1.1:61340\nSat Jan 11 18:31:29 2020 10.250.7.77:32108 Connection reset, restarting [0]\nSat Jan 11 18:31:29 2020 100.64.1.1:61340 Connection reset, restarting [0]\nSat Jan 11 18:31:33 2020 TCP connection established with [AF_INET]10.250.7.77:15970\nSat Jan 11 18:31:33 2020 10.250.7.77:15970 TCP connection established with [AF_INET]100.64.1.1:56910\nSat Jan 11 18:31:33 2020 10.250.7.77:15970 Connection reset, restarting [0]\nSat Jan 11 18:31:33 2020 100.64.1.1:56910 Connection reset, restarting [0]\nSat Jan 11 18:31:39 2020 TCP connection established with [AF_INET]10.250.7.77:32116\nSat Jan 11 18:31:39 2020 10.250.7.77:32116 TCP connection established with [AF_INET]100.64.1.1:61348\nSat Jan 11 18:31:39 2020 10.250.7.77:32116 Connection reset, restarting [0]\nSat Jan 11 18:31:39 2020 100.64.1.1:61348 Connection reset, restarting [0]\nSat Jan 11 18:31:43 2020 TCP connection established with [AF_INET]10.250.7.77:15978\nSat Jan 11 18:31:43 2020 10.250.7.77:15978 TCP connection established with [AF_INET]100.64.1.1:56918\nSat Jan 11 18:31:43 2020 10.250.7.77:15978 Connection reset, restarting [0]\nSat Jan 11 18:31:43 2020 100.64.1.1:56918 Connection reset, restarting [0]\nSat Jan 11 18:31:49 2020 TCP connection established with [AF_INET]10.250.7.77:32122\nSat Jan 11 18:31:49 2020 10.250.7.77:32122 TCP connection established with [AF_INET]100.64.1.1:61354\nSat Jan 11 18:31:49 2020 10.250.7.77:32122 Connection reset, restarting [0]\nSat Jan 11 18:31:49 2020 100.64.1.1:61354 Connection reset, restarting [0]\nSat Jan 11 18:31:53 2020 TCP connection established with [AF_INET]10.250.7.77:15986\nSat Jan 11 18:31:53 2020 10.250.7.77:15986 TCP connection established with [AF_INET]100.64.1.1:56926\nSat Jan 11 18:31:53 2020 10.250.7.77:15986 Connection reset, restarting [0]\nSat Jan 11 18:31:53 2020 100.64.1.1:56926 Connection reset, restarting [0]\nSat Jan 11 18:31:59 2020 TCP connection established with [AF_INET]10.250.7.77:32134\nSat Jan 11 18:31:59 2020 10.250.7.77:32134 TCP connection established with [AF_INET]100.64.1.1:61366\nSat Jan 11 18:31:59 2020 10.250.7.77:32134 Connection reset, restarting [0]\nSat Jan 11 18:31:59 2020 100.64.1.1:61366 Connection reset, restarting [0]\nSat Jan 11 18:32:03 2020 TCP connection established with [AF_INET]10.250.7.77:16000\nSat Jan 11 18:32:03 2020 10.250.7.77:16000 TCP connection established with [AF_INET]100.64.1.1:56940\nSat Jan 11 18:32:03 2020 10.250.7.77:16000 Connection reset, restarting [0]\nSat Jan 11 18:32:03 2020 100.64.1.1:56940 Connection reset, restarting [0]\nSat Jan 11 18:32:09 2020 TCP connection established with [AF_INET]10.250.7.77:32146\nSat Jan 11 18:32:09 2020 10.250.7.77:32146 TCP connection established with [AF_INET]100.64.1.1:61378\nSat Jan 11 18:32:09 2020 10.250.7.77:32146 Connection reset, restarting [0]\nSat Jan 11 18:32:09 2020 100.64.1.1:61378 Connection reset, restarting [0]\nSat Jan 11 18:32:13 2020 TCP connection established with [AF_INET]10.250.7.77:16012\nSat Jan 11 18:32:13 2020 10.250.7.77:16012 TCP connection established with [AF_INET]100.64.1.1:56952\nSat Jan 11 18:32:13 2020 
10.250.7.77:16012 Connection reset, restarting [0]\nSat Jan 11 18:32:13 2020 100.64.1.1:56952 Connection reset, restarting [0]\nSat Jan 11 18:32:19 2020 TCP connection established with [AF_INET]10.250.7.77:32158\nSat Jan 11 18:32:19 2020 10.250.7.77:32158 TCP connection established with [AF_INET]100.64.1.1:61390\nSat Jan 11 18:32:19 2020 10.250.7.77:32158 Connection reset, restarting [0]\nSat Jan 11 18:32:19 2020 100.64.1.1:61390 Connection reset, restarting [0]\nSat Jan 11 18:32:23 2020 TCP connection established with [AF_INET]10.250.7.77:16022\nSat Jan 11 18:32:23 2020 10.250.7.77:16022 TCP connection established with [AF_INET]100.64.1.1:56962\nSat Jan 11 18:32:23 2020 10.250.7.77:16022 Connection reset, restarting [0]\nSat Jan 11 18:32:23 2020 100.64.1.1:56962 Connection reset, restarting [0]\nSat Jan 11 18:32:29 2020 TCP connection established with [AF_INET]10.250.7.77:32166\nSat Jan 11 18:32:29 2020 10.250.7.77:32166 TCP connection established with [AF_INET]100.64.1.1:61398\nSat Jan 11 18:32:29 2020 10.250.7.77:32166 Connection reset, restarting [0]\nSat Jan 11 18:32:29 2020 100.64.1.1:61398 Connection reset, restarting [0]\nSat Jan 11 18:32:33 2020 TCP connection established with [AF_INET]10.250.7.77:16028\nSat Jan 11 18:32:33 2020 10.250.7.77:16028 TCP connection established with [AF_INET]100.64.1.1:56968\nSat Jan 11 18:32:33 2020 10.250.7.77:16028 Connection reset, restarting [0]\nSat Jan 11 18:32:33 2020 100.64.1.1:56968 Connection reset, restarting [0]\nSat Jan 11 18:32:39 2020 TCP connection established with [AF_INET]10.250.7.77:32174\nSat Jan 11 18:32:39 2020 10.250.7.77:32174 TCP connection established with [AF_INET]100.64.1.1:61406\nSat Jan 11 18:32:39 2020 10.250.7.77:32174 Connection reset, restarting [0]\nSat Jan 11 18:32:39 2020 100.64.1.1:61406 Connection reset, restarting [0]\nSat Jan 11 18:32:43 2020 TCP connection established with [AF_INET]10.250.7.77:16036\nSat Jan 11 18:32:43 2020 10.250.7.77:16036 TCP connection established with [AF_INET]100.64.1.1:56976\nSat Jan 11 18:32:43 2020 10.250.7.77:16036 Connection reset, restarting [0]\nSat Jan 11 18:32:43 2020 100.64.1.1:56976 Connection reset, restarting [0]\nSat Jan 11 18:32:49 2020 TCP connection established with [AF_INET]10.250.7.77:32180\nSat Jan 11 18:32:49 2020 10.250.7.77:32180 TCP connection established with [AF_INET]100.64.1.1:61412\nSat Jan 11 18:32:49 2020 10.250.7.77:32180 Connection reset, restarting [0]\nSat Jan 11 18:32:49 2020 100.64.1.1:61412 Connection reset, restarting [0]\nSat Jan 11 18:32:53 2020 TCP connection established with [AF_INET]10.250.7.77:16044\nSat Jan 11 18:32:53 2020 10.250.7.77:16044 TCP connection established with [AF_INET]100.64.1.1:56984\nSat Jan 11 18:32:53 2020 10.250.7.77:16044 Connection reset, restarting [0]\nSat Jan 11 18:32:53 2020 100.64.1.1:56984 Connection reset, restarting [0]\nSat Jan 11 18:32:59 2020 TCP connection established with [AF_INET]10.250.7.77:32194\nSat Jan 11 18:32:59 2020 10.250.7.77:32194 TCP connection established with [AF_INET]100.64.1.1:61426\nSat Jan 11 18:32:59 2020 10.250.7.77:32194 Connection reset, restarting [0]\nSat Jan 11 18:32:59 2020 100.64.1.1:61426 Connection reset, restarting [0]\nSat Jan 11 18:33:03 2020 TCP connection established with [AF_INET]10.250.7.77:16058\nSat Jan 11 18:33:03 2020 10.250.7.77:16058 TCP connection established with [AF_INET]100.64.1.1:56998\nSat Jan 11 18:33:03 2020 10.250.7.77:16058 Connection reset, restarting [0]\nSat Jan 11 18:33:03 2020 100.64.1.1:56998 Connection reset, restarting [0]\nSat Jan 11 18:33:09 2020 
TCP connection established with [AF_INET]10.250.7.77:32204\nSat Jan 11 18:33:09 2020 10.250.7.77:32204 TCP connection established with [AF_INET]100.64.1.1:61436\nSat Jan 11 18:33:09 2020 10.250.7.77:32204 Connection reset, restarting [0]\nSat Jan 11 18:33:09 2020 100.64.1.1:61436 Connection reset, restarting [0]\nSat Jan 11 18:33:13 2020 TCP connection established with [AF_INET]10.250.7.77:16066\nSat Jan 11 18:33:13 2020 10.250.7.77:16066 TCP connection established with [AF_INET]100.64.1.1:57006\nSat Jan 11 18:33:13 2020 10.250.7.77:16066 Connection reset, restarting [0]\nSat Jan 11 18:33:13 2020 100.64.1.1:57006 Connection reset, restarting [0]\nSat Jan 11 18:33:19 2020 TCP connection established with [AF_INET]10.250.7.77:32220\nSat Jan 11 18:33:19 2020 10.250.7.77:32220 TCP connection established with [AF_INET]100.64.1.1:61452\nSat Jan 11 18:33:19 2020 10.250.7.77:32220 Connection reset, restarting [0]\nSat Jan 11 18:33:19 2020 100.64.1.1:61452 Connection reset, restarting [0]\nSat Jan 11 18:33:23 2020 TCP connection established with [AF_INET]10.250.7.77:16080\nSat Jan 11 18:33:23 2020 10.250.7.77:16080 TCP connection established with [AF_INET]100.64.1.1:57020\nSat Jan 11 18:33:23 2020 10.250.7.77:16080 Connection reset, restarting [0]\nSat Jan 11 18:33:23 2020 100.64.1.1:57020 Connection reset, restarting [0]\nSat Jan 11 18:33:29 2020 TCP connection established with [AF_INET]10.250.7.77:32224\nSat Jan 11 18:33:29 2020 10.250.7.77:32224 TCP connection established with [AF_INET]100.64.1.1:61456\nSat Jan 11 18:33:29 2020 10.250.7.77:32224 Connection reset, restarting [0]\nSat Jan 11 18:33:29 2020 100.64.1.1:61456 Connection reset, restarting [0]\nSat Jan 11 18:33:33 2020 TCP connection established with [AF_INET]10.250.7.77:16086\nSat Jan 11 18:33:33 2020 10.250.7.77:16086 TCP connection established with [AF_INET]100.64.1.1:57026\nSat Jan 11 18:33:33 2020 10.250.7.77:16086 Connection reset, restarting [0]\nSat Jan 11 18:33:33 2020 100.64.1.1:57026 Connection reset, restarting [0]\nSat Jan 11 18:33:39 2020 TCP connection established with [AF_INET]10.250.7.77:32232\nSat Jan 11 18:33:39 2020 10.250.7.77:32232 TCP connection established with [AF_INET]100.64.1.1:61464\nSat Jan 11 18:33:39 2020 10.250.7.77:32232 Connection reset, restarting [0]\nSat Jan 11 18:33:39 2020 100.64.1.1:61464 Connection reset, restarting [0]\nSat Jan 11 18:33:43 2020 TCP connection established with [AF_INET]10.250.7.77:16094\nSat Jan 11 18:33:43 2020 10.250.7.77:16094 TCP connection established with [AF_INET]100.64.1.1:57034\nSat Jan 11 18:33:43 2020 10.250.7.77:16094 Connection reset, restarting [0]\nSat Jan 11 18:33:43 2020 100.64.1.1:57034 Connection reset, restarting [0]\nSat Jan 11 18:33:49 2020 TCP connection established with [AF_INET]10.250.7.77:32242\nSat Jan 11 18:33:49 2020 10.250.7.77:32242 TCP connection established with [AF_INET]100.64.1.1:61474\nSat Jan 11 18:33:49 2020 10.250.7.77:32242 Connection reset, restarting [0]\nSat Jan 11 18:33:49 2020 100.64.1.1:61474 Connection reset, restarting [0]\nSat Jan 11 18:33:53 2020 TCP connection established with [AF_INET]10.250.7.77:16102\nSat Jan 11 18:33:53 2020 10.250.7.77:16102 TCP connection established with [AF_INET]100.64.1.1:57042\nSat Jan 11 18:33:53 2020 10.250.7.77:16102 Connection reset, restarting [0]\nSat Jan 11 18:33:53 2020 100.64.1.1:57042 Connection reset, restarting [0]\nSat Jan 11 18:33:59 2020 TCP connection established with [AF_INET]10.250.7.77:32256\nSat Jan 11 18:33:59 2020 10.250.7.77:32256 TCP connection established with 
[AF_INET]100.64.1.1:61488\nSat Jan 11 18:33:59 2020 10.250.7.77:32256 Connection reset, restarting [0]\nSat Jan 11 18:33:59 2020 100.64.1.1:61488 Connection reset, restarting [0]\nSat Jan 11 18:34:03 2020 TCP connection established with [AF_INET]10.250.7.77:16116\nSat Jan 11 18:34:03 2020 10.250.7.77:16116 TCP connection established with [AF_INET]100.64.1.1:57056\nSat Jan 11 18:34:03 2020 10.250.7.77:16116 Connection reset, restarting [0]\nSat Jan 11 18:34:03 2020 100.64.1.1:57056 Connection reset, restarting [0]\nSat Jan 11 18:34:09 2020 TCP connection established with [AF_INET]10.250.7.77:32266\nSat Jan 11 18:34:09 2020 10.250.7.77:32266 TCP connection established with [AF_INET]100.64.1.1:61498\nSat Jan 11 18:34:09 2020 10.250.7.77:32266 Connection reset, restarting [0]\nSat Jan 11 18:34:09 2020 100.64.1.1:61498 Connection reset, restarting [0]\nSat Jan 11 18:34:13 2020 TCP connection established with [AF_INET]10.250.7.77:16124\nSat Jan 11 18:34:13 2020 10.250.7.77:16124 TCP connection established with [AF_INET]100.64.1.1:57064\nSat Jan 11 18:34:13 2020 10.250.7.77:16124 Connection reset, restarting [0]\nSat Jan 11 18:34:13 2020 100.64.1.1:57064 Connection reset, restarting [0]\nSat Jan 11 18:34:19 2020 TCP connection established with [AF_INET]10.250.7.77:32278\nSat Jan 11 18:34:19 2020 10.250.7.77:32278 TCP connection established with [AF_INET]100.64.1.1:61510\nSat Jan 11 18:34:19 2020 10.250.7.77:32278 Connection reset, restarting [0]\nSat Jan 11 18:34:19 2020 100.64.1.1:61510 Connection reset, restarting [0]\nSat Jan 11 18:34:23 2020 TCP connection established with [AF_INET]10.250.7.77:16144\nSat Jan 11 18:34:23 2020 10.250.7.77:16144 TCP connection established with [AF_INET]100.64.1.1:57084\nSat Jan 11 18:34:23 2020 10.250.7.77:16144 Connection reset, restarting [0]\nSat Jan 11 18:34:23 2020 100.64.1.1:57084 Connection reset, restarting [0]\nSat Jan 11 18:34:29 2020 TCP connection established with [AF_INET]10.250.7.77:32282\nSat Jan 11 18:34:29 2020 10.250.7.77:32282 TCP connection established with [AF_INET]100.64.1.1:61514\nSat Jan 11 18:34:29 2020 10.250.7.77:32282 Connection reset, restarting [0]\nSat Jan 11 18:34:29 2020 100.64.1.1:61514 Connection reset, restarting [0]\nSat Jan 11 18:34:33 2020 TCP connection established with [AF_INET]10.250.7.77:16150\nSat Jan 11 18:34:33 2020 10.250.7.77:16150 TCP connection established with [AF_INET]100.64.1.1:57090\nSat Jan 11 18:34:33 2020 10.250.7.77:16150 Connection reset, restarting [0]\nSat Jan 11 18:34:33 2020 100.64.1.1:57090 Connection reset, restarting [0]\nSat Jan 11 18:34:39 2020 TCP connection established with [AF_INET]10.250.7.77:32290\nSat Jan 11 18:34:39 2020 10.250.7.77:32290 TCP connection established with [AF_INET]100.64.1.1:61522\nSat Jan 11 18:34:39 2020 10.250.7.77:32290 Connection reset, restarting [0]\nSat Jan 11 18:34:39 2020 100.64.1.1:61522 Connection reset, restarting [0]\nSat Jan 11 18:34:43 2020 TCP connection established with [AF_INET]10.250.7.77:16164\nSat Jan 11 18:34:43 2020 10.250.7.77:16164 TCP connection established with [AF_INET]100.64.1.1:57104\nSat Jan 11 18:34:43 2020 10.250.7.77:16164 Connection reset, restarting [0]\nSat Jan 11 18:34:43 2020 100.64.1.1:57104 Connection reset, restarting [0]\nSat Jan 11 18:34:49 2020 TCP connection established with [AF_INET]10.250.7.77:32298\nSat Jan 11 18:34:49 2020 10.250.7.77:32298 TCP connection established with [AF_INET]100.64.1.1:61530\nSat Jan 11 18:34:49 2020 10.250.7.77:32298 Connection reset, restarting [0]\nSat Jan 11 18:34:49 2020 100.64.1.1:61530 
Connection reset, restarting [0]\nSat Jan 11 18:34:53 2020 TCP connection established with [AF_INET]10.250.7.77:16170\nSat Jan 11 18:34:53 2020 10.250.7.77:16170 TCP connection established with [AF_INET]100.64.1.1:57110\nSat Jan 11 18:34:53 2020 10.250.7.77:16170 Connection reset, restarting [0]\nSat Jan 11 18:34:53 2020 100.64.1.1:57110 Connection reset, restarting [0]\nSat Jan 11 18:34:59 2020 TCP connection established with [AF_INET]10.250.7.77:32314\nSat Jan 11 18:34:59 2020 10.250.7.77:32314 TCP connection established with [AF_INET]100.64.1.1:61546\nSat Jan 11 18:34:59 2020 10.250.7.77:32314 Connection reset, restarting [0]\nSat Jan 11 18:34:59 2020 100.64.1.1:61546 Connection reset, restarting [0]\nSat Jan 11 18:35:03 2020 TCP connection established with [AF_INET]10.250.7.77:16184\nSat Jan 11 18:35:03 2020 10.250.7.77:16184 TCP connection established with [AF_INET]100.64.1.1:57124\nSat Jan 11 18:35:03 2020 10.250.7.77:16184 Connection reset, restarting [0]\nSat Jan 11 18:35:03 2020 100.64.1.1:57124 Connection reset, restarting [0]\nSat Jan 11 18:35:09 2020 TCP connection established with [AF_INET]10.250.7.77:32324\nSat Jan 11 18:35:09 2020 10.250.7.77:32324 TCP connection established with [AF_INET]100.64.1.1:61556\nSat Jan 11 18:35:09 2020 10.250.7.77:32324 Connection reset, restarting [0]\nSat Jan 11 18:35:09 2020 100.64.1.1:61556 Connection reset, restarting [0]\nSat Jan 11 18:35:13 2020 TCP connection established with [AF_INET]10.250.7.77:16202\nSat Jan 11 18:35:13 2020 10.250.7.77:16202 TCP connection established with [AF_INET]100.64.1.1:57142\nSat Jan 11 18:35:13 2020 10.250.7.77:16202 Connection reset, restarting [0]\nSat Jan 11 18:35:13 2020 100.64.1.1:57142 Connection reset, restarting [0]\nSat Jan 11 18:35:19 2020 TCP connection established with [AF_INET]10.250.7.77:32336\nSat Jan 11 18:35:19 2020 10.250.7.77:32336 TCP connection established with [AF_INET]100.64.1.1:61568\nSat Jan 11 18:35:19 2020 10.250.7.77:32336 Connection reset, restarting [0]\nSat Jan 11 18:35:19 2020 100.64.1.1:61568 Connection reset, restarting [0]\nSat Jan 11 18:35:23 2020 TCP connection established with [AF_INET]10.250.7.77:16212\nSat Jan 11 18:35:23 2020 10.250.7.77:16212 TCP connection established with [AF_INET]100.64.1.1:57152\nSat Jan 11 18:35:23 2020 10.250.7.77:16212 Connection reset, restarting [0]\nSat Jan 11 18:35:23 2020 100.64.1.1:57152 Connection reset, restarting [0]\nSat Jan 11 18:35:29 2020 TCP connection established with [AF_INET]10.250.7.77:32340\nSat Jan 11 18:35:29 2020 10.250.7.77:32340 TCP connection established with [AF_INET]100.64.1.1:61572\nSat Jan 11 18:35:29 2020 10.250.7.77:32340 Connection reset, restarting [0]\nSat Jan 11 18:35:29 2020 100.64.1.1:61572 Connection reset, restarting [0]\nSat Jan 11 18:35:33 2020 TCP connection established with [AF_INET]10.250.7.77:16218\nSat Jan 11 18:35:33 2020 10.250.7.77:16218 TCP connection established with [AF_INET]100.64.1.1:57158\nSat Jan 11 18:35:33 2020 10.250.7.77:16218 Connection reset, restarting [0]\nSat Jan 11 18:35:33 2020 100.64.1.1:57158 Connection reset, restarting [0]\nSat Jan 11 18:35:39 2020 TCP connection established with [AF_INET]10.250.7.77:32356\nSat Jan 11 18:35:39 2020 10.250.7.77:32356 TCP connection established with [AF_INET]100.64.1.1:61588\nSat Jan 11 18:35:39 2020 10.250.7.77:32356 Connection reset, restarting [0]\nSat Jan 11 18:35:39 2020 100.64.1.1:61588 Connection reset, restarting [0]\nSat Jan 11 18:35:43 2020 TCP connection established with [AF_INET]10.250.7.77:16228\nSat Jan 11 18:35:43 2020 
10.250.7.77:16228 TCP connection established with [AF_INET]100.64.1.1:57168\nSat Jan 11 18:35:43 2020 10.250.7.77:16228 Connection reset, restarting [0]\nSat Jan 11 18:35:43 2020 100.64.1.1:57168 Connection reset, restarting [0]\nSat Jan 11 18:35:49 2020 TCP connection established with [AF_INET]10.250.7.77:32364\nSat Jan 11 18:35:49 2020 10.250.7.77:32364 TCP connection established with [AF_INET]100.64.1.1:61596\nSat Jan 11 18:35:49 2020 10.250.7.77:32364 Connection reset, restarting [0]\nSat Jan 11 18:35:49 2020 100.64.1.1:61596 Connection reset, restarting [0]\nSat Jan 11 18:35:53 2020 TCP connection established with [AF_INET]10.250.7.77:16272\nSat Jan 11 18:35:53 2020 10.250.7.77:16272 TCP connection established with [AF_INET]100.64.1.1:57212\nSat Jan 11 18:35:53 2020 10.250.7.77:16272 Connection reset, restarting [0]\nSat Jan 11 18:35:53 2020 100.64.1.1:57212 Connection reset, restarting [0]\nSat Jan 11 18:35:59 2020 TCP connection established with [AF_INET]10.250.7.77:32376\nSat Jan 11 18:35:59 2020 10.250.7.77:32376 TCP connection established with [AF_INET]100.64.1.1:61608\nSat Jan 11 18:35:59 2020 10.250.7.77:32376 Connection reset, restarting [0]\nSat Jan 11 18:35:59 2020 100.64.1.1:61608 Connection reset, restarting [0]\nSat Jan 11 18:36:03 2020 TCP connection established with [AF_INET]10.250.7.77:16286\nSat Jan 11 18:36:03 2020 10.250.7.77:16286 TCP connection established with [AF_INET]100.64.1.1:57226\nSat Jan 11 18:36:03 2020 10.250.7.77:16286 Connection reset, restarting [0]\nSat Jan 11 18:36:03 2020 100.64.1.1:57226 Connection reset, restarting [0]\nSat Jan 11 18:36:09 2020 TCP connection established with [AF_INET]10.250.7.77:32386\nSat Jan 11 18:36:09 2020 10.250.7.77:32386 TCP connection established with [AF_INET]100.64.1.1:61618\nSat Jan 11 18:36:09 2020 10.250.7.77:32386 Connection reset, restarting [0]\nSat Jan 11 18:36:09 2020 100.64.1.1:61618 Connection reset, restarting [0]\nSat Jan 11 18:36:13 2020 TCP connection established with [AF_INET]10.250.7.77:16296\nSat Jan 11 18:36:13 2020 10.250.7.77:16296 TCP connection established with [AF_INET]100.64.1.1:57236\nSat Jan 11 18:36:13 2020 10.250.7.77:16296 Connection reset, restarting [0]\nSat Jan 11 18:36:13 2020 100.64.1.1:57236 Connection reset, restarting [0]\nSat Jan 11 18:36:19 2020 TCP connection established with [AF_INET]10.250.7.77:32402\nSat Jan 11 18:36:19 2020 10.250.7.77:32402 TCP connection established with [AF_INET]100.64.1.1:61634\nSat Jan 11 18:36:19 2020 10.250.7.77:32402 Connection reset, restarting [0]\nSat Jan 11 18:36:19 2020 100.64.1.1:61634 Connection reset, restarting [0]\nSat Jan 11 18:36:23 2020 TCP connection established with [AF_INET]10.250.7.77:16306\nSat Jan 11 18:36:23 2020 10.250.7.77:16306 TCP connection established with [AF_INET]100.64.1.1:57246\nSat Jan 11 18:36:23 2020 10.250.7.77:16306 Connection reset, restarting [0]\nSat Jan 11 18:36:23 2020 100.64.1.1:57246 Connection reset, restarting [0]\nSat Jan 11 18:36:29 2020 TCP connection established with [AF_INET]10.250.7.77:32406\nSat Jan 11 18:36:29 2020 10.250.7.77:32406 TCP connection established with [AF_INET]100.64.1.1:61638\nSat Jan 11 18:36:29 2020 10.250.7.77:32406 Connection reset, restarting [0]\nSat Jan 11 18:36:29 2020 100.64.1.1:61638 Connection reset, restarting [0]\nSat Jan 11 18:36:33 2020 TCP connection established with [AF_INET]10.250.7.77:16314\nSat Jan 11 18:36:33 2020 10.250.7.77:16314 TCP connection established with [AF_INET]100.64.1.1:57254\nSat Jan 11 18:36:33 2020 10.250.7.77:16314 Connection reset, restarting 
[0]\nSat Jan 11 18:36:33 2020 100.64.1.1:57254 Connection reset, restarting [0]\nSat Jan 11 18:36:39 2020 TCP connection established with [AF_INET]10.250.7.77:32414\nSat Jan 11 18:36:39 2020 10.250.7.77:32414 TCP connection established with [AF_INET]100.64.1.1:61646\nSat Jan 11 18:36:39 2020 10.250.7.77:32414 Connection reset, restarting [0]\nSat Jan 11 18:36:39 2020 100.64.1.1:61646 Connection reset, restarting [0]\nSat Jan 11 18:36:43 2020 TCP connection established with [AF_INET]10.250.7.77:16322\nSat Jan 11 18:36:43 2020 10.250.7.77:16322 TCP connection established with [AF_INET]100.64.1.1:57262\nSat Jan 11 18:36:43 2020 10.250.7.77:16322 Connection reset, restarting [0]\nSat Jan 11 18:36:43 2020 100.64.1.1:57262 Connection reset, restarting [0]\nSat Jan 11 18:36:49 2020 TCP connection established with [AF_INET]10.250.7.77:32422\nSat Jan 11 18:36:49 2020 10.250.7.77:32422 TCP connection established with [AF_INET]100.64.1.1:61654\nSat Jan 11 18:36:49 2020 10.250.7.77:32422 Connection reset, restarting [0]\nSat Jan 11 18:36:49 2020 100.64.1.1:61654 Connection reset, restarting [0]\nSat Jan 11 18:36:53 2020 TCP connection established with [AF_INET]10.250.7.77:16328\nSat Jan 11 18:36:53 2020 10.250.7.77:16328 TCP connection established with [AF_INET]100.64.1.1:57268\nSat Jan 11 18:36:53 2020 10.250.7.77:16328 Connection reset, restarting [0]\nSat Jan 11 18:36:53 2020 100.64.1.1:57268 Connection reset, restarting [0]\nSat Jan 11 18:36:59 2020 TCP connection established with [AF_INET]10.250.7.77:32434\nSat Jan 11 18:36:59 2020 10.250.7.77:32434 TCP connection established with [AF_INET]100.64.1.1:61666\nSat Jan 11 18:36:59 2020 10.250.7.77:32434 Connection reset, restarting [0]\nSat Jan 11 18:36:59 2020 100.64.1.1:61666 Connection reset, restarting [0]\nSat Jan 11 18:37:03 2020 TCP connection established with [AF_INET]10.250.7.77:16342\nSat Jan 11 18:37:03 2020 10.250.7.77:16342 TCP connection established with [AF_INET]100.64.1.1:57282\nSat Jan 11 18:37:03 2020 10.250.7.77:16342 Connection reset, restarting [0]\nSat Jan 11 18:37:03 2020 100.64.1.1:57282 Connection reset, restarting [0]\nSat Jan 11 18:37:09 2020 TCP connection established with [AF_INET]10.250.7.77:32444\nSat Jan 11 18:37:09 2020 10.250.7.77:32444 TCP connection established with [AF_INET]100.64.1.1:61676\nSat Jan 11 18:37:09 2020 10.250.7.77:32444 Connection reset, restarting [0]\nSat Jan 11 18:37:09 2020 100.64.1.1:61676 Connection reset, restarting [0]\nSat Jan 11 18:37:13 2020 TCP connection established with [AF_INET]10.250.7.77:16354\nSat Jan 11 18:37:13 2020 10.250.7.77:16354 TCP connection established with [AF_INET]100.64.1.1:57294\nSat Jan 11 18:37:13 2020 10.250.7.77:16354 Connection reset, restarting [0]\nSat Jan 11 18:37:13 2020 100.64.1.1:57294 Connection reset, restarting [0]\nSat Jan 11 18:37:19 2020 TCP connection established with [AF_INET]10.250.7.77:32456\nSat Jan 11 18:37:19 2020 10.250.7.77:32456 TCP connection established with [AF_INET]100.64.1.1:61688\nSat Jan 11 18:37:19 2020 10.250.7.77:32456 Connection reset, restarting [0]\nSat Jan 11 18:37:19 2020 100.64.1.1:61688 Connection reset, restarting [0]\nSat Jan 11 18:37:23 2020 TCP connection established with [AF_INET]10.250.7.77:16364\nSat Jan 11 18:37:23 2020 10.250.7.77:16364 TCP connection established with [AF_INET]100.64.1.1:57304\nSat Jan 11 18:37:23 2020 10.250.7.77:16364 Connection reset, restarting [0]\nSat Jan 11 18:37:23 2020 100.64.1.1:57304 Connection reset, restarting [0]\nSat Jan 11 18:37:29 2020 TCP connection established with 
[AF_INET]10.250.7.77:32464\nSat Jan 11 18:37:29 2020 10.250.7.77:32464 TCP connection established with [AF_INET]100.64.1.1:61696\nSat Jan 11 18:37:29 2020 10.250.7.77:32464 Connection reset, restarting [0]\nSat Jan 11 18:37:29 2020 100.64.1.1:61696 Connection reset, restarting [0]\nSat Jan 11 18:37:33 2020 TCP connection established with [AF_INET]10.250.7.77:16372\nSat Jan 11 18:37:33 2020 10.250.7.77:16372 TCP connection established with [AF_INET]100.64.1.1:57312\nSat Jan 11 18:37:33 2020 10.250.7.77:16372 Connection reset, restarting [0]\nSat Jan 11 18:37:33 2020 100.64.1.1:57312 Connection reset, restarting [0]\nSat Jan 11 18:37:39 2020 TCP connection established with [AF_INET]10.250.7.77:32474\nSat Jan 11 18:37:39 2020 10.250.7.77:32474 TCP connection established with [AF_INET]100.64.1.1:61706\nSat Jan 11 18:37:39 2020 10.250.7.77:32474 Connection reset, restarting [0]\nSat Jan 11 18:37:39 2020 100.64.1.1:61706 Connection reset, restarting [0]\nSat Jan 11 18:37:43 2020 TCP connection established with [AF_INET]100.64.1.1:57320\nSat Jan 11 18:37:43 2020 100.64.1.1:57320 TCP connection established with [AF_INET]10.250.7.77:16380\nSat Jan 11 18:37:43 2020 100.64.1.1:57320 Connection reset, restarting [0]\nSat Jan 11 18:37:43 2020 10.250.7.77:16380 Connection reset, restarting [0]\nSat Jan 11 18:37:49 2020 TCP connection established with [AF_INET]10.250.7.77:32480\nSat Jan 11 18:37:49 2020 10.250.7.77:32480 TCP connection established with [AF_INET]100.64.1.1:61712\nSat Jan 11 18:37:49 2020 10.250.7.77:32480 Connection reset, restarting [0]\nSat Jan 11 18:37:49 2020 100.64.1.1:61712 Connection reset, restarting [0]\nSat Jan 11 18:37:53 2020 TCP connection established with [AF_INET]10.250.7.77:16386\nSat Jan 11 18:37:53 2020 10.250.7.77:16386 TCP connection established with [AF_INET]100.64.1.1:57326\nSat Jan 11 18:37:53 2020 10.250.7.77:16386 Connection reset, restarting [0]\nSat Jan 11 18:37:53 2020 100.64.1.1:57326 Connection reset, restarting [0]\nSat Jan 11 18:37:59 2020 TCP connection established with [AF_INET]10.250.7.77:32492\nSat Jan 11 18:37:59 2020 10.250.7.77:32492 TCP connection established with [AF_INET]100.64.1.1:61724\nSat Jan 11 18:37:59 2020 10.250.7.77:32492 Connection reset, restarting [0]\nSat Jan 11 18:37:59 2020 100.64.1.1:61724 Connection reset, restarting [0]\nSat Jan 11 18:38:03 2020 TCP connection established with [AF_INET]10.250.7.77:16400\nSat Jan 11 18:38:03 2020 10.250.7.77:16400 TCP connection established with [AF_INET]100.64.1.1:57340\nSat Jan 11 18:38:03 2020 10.250.7.77:16400 Connection reset, restarting [0]\nSat Jan 11 18:38:03 2020 100.64.1.1:57340 Connection reset, restarting [0]\nSat Jan 11 18:38:09 2020 TCP connection established with [AF_INET]10.250.7.77:32502\nSat Jan 11 18:38:09 2020 10.250.7.77:32502 TCP connection established with [AF_INET]100.64.1.1:61734\nSat Jan 11 18:38:09 2020 10.250.7.77:32502 Connection reset, restarting [0]\nSat Jan 11 18:38:09 2020 100.64.1.1:61734 Connection reset, restarting [0]\nSat Jan 11 18:38:13 2020 TCP connection established with [AF_INET]10.250.7.77:16408\nSat Jan 11 18:38:13 2020 10.250.7.77:16408 TCP connection established with [AF_INET]100.64.1.1:57348\nSat Jan 11 18:38:13 2020 10.250.7.77:16408 Connection reset, restarting [0]\nSat Jan 11 18:38:13 2020 100.64.1.1:57348 Connection reset, restarting [0]\nSat Jan 11 18:38:19 2020 TCP connection established with [AF_INET]10.250.7.77:32514\nSat Jan 11 18:38:19 2020 10.250.7.77:32514 TCP connection established with [AF_INET]100.64.1.1:61746\nSat Jan 11 18:38:19 2020 
10.250.7.77:32514 Connection reset, restarting [0]\nSat Jan 11 18:38:19 2020 100.64.1.1:61746 Connection reset, restarting [0]\nSat Jan 11 18:38:23 2020 TCP connection established with [AF_INET]10.250.7.77:16422\nSat Jan 11 18:38:23 2020 10.250.7.77:16422 Connection reset, restarting [0]\nSat Jan 11 18:38:23 2020 TCP connection established with [AF_INET]100.64.1.1:57362\nSat Jan 11 18:38:23 2020 100.64.1.1:57362 Connection reset, restarting [0]\nSat Jan 11 18:38:29 2020 TCP connection established with [AF_INET]10.250.7.77:32518\nSat Jan 11 18:38:29 2020 10.250.7.77:32518 TCP connection established with [AF_INET]100.64.1.1:61750\nSat Jan 11 18:38:29 2020 10.250.7.77:32518 Connection reset, restarting [0]\nSat Jan 11 18:38:29 2020 100.64.1.1:61750 Connection reset, restarting [0]\nSat Jan 11 18:38:33 2020 TCP connection established with [AF_INET]10.250.7.77:16430\nSat Jan 11 18:38:33 2020 10.250.7.77:16430 TCP connection established with [AF_INET]100.64.1.1:57370\nSat Jan 11 18:38:33 2020 10.250.7.77:16430 Connection reset, restarting [0]\nSat Jan 11 18:38:33 2020 100.64.1.1:57370 Connection reset, restarting [0]\nSat Jan 11 18:38:39 2020 TCP connection established with [AF_INET]10.250.7.77:32528\nSat Jan 11 18:38:39 2020 10.250.7.77:32528 TCP connection established with [AF_INET]100.64.1.1:61760\nSat Jan 11 18:38:39 2020 10.250.7.77:32528 Connection reset, restarting [0]\nSat Jan 11 18:38:39 2020 100.64.1.1:61760 Connection reset, restarting [0]\nSat Jan 11 18:38:43 2020 TCP connection established with [AF_INET]10.250.7.77:16438\nSat Jan 11 18:38:43 2020 10.250.7.77:16438 TCP connection established with [AF_INET]100.64.1.1:57378\nSat Jan 11 18:38:43 2020 10.250.7.77:16438 Connection reset, restarting [0]\nSat Jan 11 18:38:43 2020 100.64.1.1:57378 Connection reset, restarting [0]\nSat Jan 11 18:38:49 2020 TCP connection established with [AF_INET]10.250.7.77:32572\nSat Jan 11 18:38:49 2020 10.250.7.77:32572 TCP connection established with [AF_INET]100.64.1.1:61804\nSat Jan 11 18:38:49 2020 10.250.7.77:32572 Connection reset, restarting [0]\nSat Jan 11 18:38:49 2020 100.64.1.1:61804 Connection reset, restarting [0]\nSat Jan 11 18:38:53 2020 TCP connection established with [AF_INET]10.250.7.77:16444\nSat Jan 11 18:38:53 2020 10.250.7.77:16444 TCP connection established with [AF_INET]100.64.1.1:57384\nSat Jan 11 18:38:53 2020 10.250.7.77:16444 Connection reset, restarting [0]\nSat Jan 11 18:38:53 2020 100.64.1.1:57384 Connection reset, restarting [0]\nSat Jan 11 18:38:59 2020 TCP connection established with [AF_INET]10.250.7.77:32584\nSat Jan 11 18:38:59 2020 10.250.7.77:32584 TCP connection established with [AF_INET]100.64.1.1:61816\nSat Jan 11 18:38:59 2020 10.250.7.77:32584 Connection reset, restarting [0]\nSat Jan 11 18:38:59 2020 100.64.1.1:61816 Connection reset, restarting [0]\nSat Jan 11 18:39:03 2020 TCP connection established with [AF_INET]10.250.7.77:16458\nSat Jan 11 18:39:03 2020 10.250.7.77:16458 TCP connection established with [AF_INET]100.64.1.1:57398\nSat Jan 11 18:39:03 2020 10.250.7.77:16458 Connection reset, restarting [0]\nSat Jan 11 18:39:03 2020 100.64.1.1:57398 Connection reset, restarting [0]\nSat Jan 11 18:39:09 2020 TCP connection established with [AF_INET]10.250.7.77:32596\nSat Jan 11 18:39:09 2020 10.250.7.77:32596 TCP connection established with [AF_INET]100.64.1.1:61828\nSat Jan 11 18:39:09 2020 10.250.7.77:32596 Connection reset, restarting [0]\nSat Jan 11 18:39:09 2020 100.64.1.1:61828 Connection reset, restarting [0]\nSat Jan 11 18:39:13 2020 TCP connection 
established with [AF_INET]10.250.7.77:16466\nSat Jan 11 18:39:13 2020 10.250.7.77:16466 TCP connection established with [AF_INET]100.64.1.1:57406\nSat Jan 11 18:39:13 2020 10.250.7.77:16466 Connection reset, restarting [0]\nSat Jan 11 18:39:13 2020 100.64.1.1:57406 Connection reset, restarting [0]\nSat Jan 11 18:39:19 2020 TCP connection established with [AF_INET]10.250.7.77:32608\nSat Jan 11 18:39:19 2020 10.250.7.77:32608 TCP connection established with [AF_INET]100.64.1.1:61840\nSat Jan 11 18:39:19 2020 10.250.7.77:32608 Connection reset, restarting [0]\nSat Jan 11 18:39:19 2020 100.64.1.1:61840 Connection reset, restarting [0]\nSat Jan 11 18:39:23 2020 TCP connection established with [AF_INET]10.250.7.77:16478\nSat Jan 11 18:39:23 2020 10.250.7.77:16478 TCP connection established with [AF_INET]100.64.1.1:57418\nSat Jan 11 18:39:23 2020 10.250.7.77:16478 Connection reset, restarting [0]\nSat Jan 11 18:39:23 2020 100.64.1.1:57418 Connection reset, restarting [0]\nSat Jan 11 18:39:29 2020 TCP connection established with [AF_INET]10.250.7.77:32614\nSat Jan 11 18:39:29 2020 10.250.7.77:32614 TCP connection established with [AF_INET]100.64.1.1:61846\nSat Jan 11 18:39:29 2020 10.250.7.77:32614 Connection reset, restarting [0]\nSat Jan 11 18:39:29 2020 100.64.1.1:61846 Connection reset, restarting [0]\nSat Jan 11 18:39:33 2020 TCP connection established with [AF_INET]10.250.7.77:16484\nSat Jan 11 18:39:33 2020 10.250.7.77:16484 TCP connection established with [AF_INET]100.64.1.1:57424\nSat Jan 11 18:39:33 2020 10.250.7.77:16484 Connection reset, restarting [0]\nSat Jan 11 18:39:33 2020 100.64.1.1:57424 Connection reset, restarting [0]\nSat Jan 11 18:39:39 2020 TCP connection established with [AF_INET]10.250.7.77:32622\nSat Jan 11 18:39:39 2020 10.250.7.77:32622 TCP connection established with [AF_INET]100.64.1.1:61854\nSat Jan 11 18:39:39 2020 10.250.7.77:32622 Connection reset, restarting [0]\nSat Jan 11 18:39:39 2020 100.64.1.1:61854 Connection reset, restarting [0]\nSat Jan 11 18:39:43 2020 TCP connection established with [AF_INET]10.250.7.77:16496\nSat Jan 11 18:39:43 2020 10.250.7.77:16496 TCP connection established with [AF_INET]100.64.1.1:57436\nSat Jan 11 18:39:43 2020 10.250.7.77:16496 Connection reset, restarting [0]\nSat Jan 11 18:39:43 2020 100.64.1.1:57436 Connection reset, restarting [0]\nSat Jan 11 18:39:49 2020 TCP connection established with [AF_INET]10.250.7.77:32628\nSat Jan 11 18:39:49 2020 10.250.7.77:32628 TCP connection established with [AF_INET]100.64.1.1:61860\nSat Jan 11 18:39:49 2020 10.250.7.77:32628 Connection reset, restarting [0]\nSat Jan 11 18:39:49 2020 100.64.1.1:61860 Connection reset, restarting [0]\nSat Jan 11 18:39:53 2020 TCP connection established with [AF_INET]10.250.7.77:16502\nSat Jan 11 18:39:53 2020 10.250.7.77:16502 TCP connection established with [AF_INET]100.64.1.1:57442\nSat Jan 11 18:39:53 2020 10.250.7.77:16502 Connection reset, restarting [0]\nSat Jan 11 18:39:53 2020 100.64.1.1:57442 Connection reset, restarting [0]\nSat Jan 11 18:39:59 2020 TCP connection established with [AF_INET]10.250.7.77:32644\nSat Jan 11 18:39:59 2020 10.250.7.77:32644 TCP connection established with [AF_INET]100.64.1.1:61876\nSat Jan 11 18:39:59 2020 10.250.7.77:32644 Connection reset, restarting [0]\nSat Jan 11 18:39:59 2020 100.64.1.1:61876 Connection reset, restarting [0]\nSat Jan 11 18:40:03 2020 TCP connection established with [AF_INET]10.250.7.77:16516\nSat Jan 11 18:40:03 2020 10.250.7.77:16516 TCP connection established with [AF_INET]100.64.1.1:57456\nSat Jan 
11 18:40:03 2020 10.250.7.77:16516 Connection reset, restarting [0]\nSat Jan 11 18:40:03 2020 100.64.1.1:57456 Connection reset, restarting [0]\nSat Jan 11 18:40:09 2020 TCP connection established with [AF_INET]10.250.7.77:32654\nSat Jan 11 18:40:09 2020 10.250.7.77:32654 TCP connection established with [AF_INET]100.64.1.1:61886\nSat Jan 11 18:40:09 2020 10.250.7.77:32654 Connection reset, restarting [0]\nSat Jan 11 18:40:09 2020 100.64.1.1:61886 Connection reset, restarting [0]\nSat Jan 11 18:40:13 2020 TCP connection established with [AF_INET]10.250.7.77:16524\nSat Jan 11 18:40:13 2020 10.250.7.77:16524 TCP connection established with [AF_INET]100.64.1.1:57464\nSat Jan 11 18:40:13 2020 10.250.7.77:16524 Connection reset, restarting [0]\nSat Jan 11 18:40:13 2020 100.64.1.1:57464 Connection reset, restarting [0]\nSat Jan 11 18:40:19 2020 TCP connection established with [AF_INET]10.250.7.77:32666\nSat Jan 11 18:40:19 2020 10.250.7.77:32666 TCP connection established with [AF_INET]100.64.1.1:61898\nSat Jan 11 18:40:19 2020 10.250.7.77:32666 Connection reset, restarting [0]\nSat Jan 11 18:40:19 2020 100.64.1.1:61898 Connection reset, restarting [0]\nSat Jan 11 18:40:23 2020 TCP connection established with [AF_INET]10.250.7.77:16536\nSat Jan 11 18:40:23 2020 10.250.7.77:16536 TCP connection established with [AF_INET]100.64.1.1:57476\nSat Jan 11 18:40:23 2020 10.250.7.77:16536 Connection reset, restarting [0]\nSat Jan 11 18:40:23 2020 100.64.1.1:57476 Connection reset, restarting [0]\nSat Jan 11 18:40:29 2020 TCP connection established with [AF_INET]10.250.7.77:32672\nSat Jan 11 18:40:29 2020 10.250.7.77:32672 TCP connection established with [AF_INET]100.64.1.1:61904\nSat Jan 11 18:40:29 2020 10.250.7.77:32672 Connection reset, restarting [0]\nSat Jan 11 18:40:29 2020 100.64.1.1:61904 Connection reset, restarting [0]\nSat Jan 11 18:40:33 2020 TCP connection established with [AF_INET]10.250.7.77:16542\nSat Jan 11 18:40:33 2020 10.250.7.77:16542 TCP connection established with [AF_INET]100.64.1.1:57482\nSat Jan 11 18:40:33 2020 10.250.7.77:16542 Connection reset, restarting [0]\nSat Jan 11 18:40:33 2020 100.64.1.1:57482 Connection reset, restarting [0]\nSat Jan 11 18:40:39 2020 TCP connection established with [AF_INET]10.250.7.77:32680\nSat Jan 11 18:40:39 2020 10.250.7.77:32680 TCP connection established with [AF_INET]100.64.1.1:61912\nSat Jan 11 18:40:39 2020 10.250.7.77:32680 Connection reset, restarting [0]\nSat Jan 11 18:40:39 2020 100.64.1.1:61912 Connection reset, restarting [0]\nSat Jan 11 18:40:43 2020 TCP connection established with [AF_INET]10.250.7.77:16550\nSat Jan 11 18:40:43 2020 10.250.7.77:16550 TCP connection established with [AF_INET]100.64.1.1:57490\nSat Jan 11 18:40:43 2020 10.250.7.77:16550 Connection reset, restarting [0]\nSat Jan 11 18:40:43 2020 100.64.1.1:57490 Connection reset, restarting [0]\nSat Jan 11 18:40:49 2020 TCP connection established with [AF_INET]10.250.7.77:32686\nSat Jan 11 18:40:49 2020 10.250.7.77:32686 TCP connection established with [AF_INET]100.64.1.1:61918\nSat Jan 11 18:40:49 2020 10.250.7.77:32686 Connection reset, restarting [0]\nSat Jan 11 18:40:49 2020 100.64.1.1:61918 Connection reset, restarting [0]\nSat Jan 11 18:40:53 2020 TCP connection established with [AF_INET]10.250.7.77:16560\nSat Jan 11 18:40:53 2020 10.250.7.77:16560 TCP connection established with [AF_INET]100.64.1.1:57500\nSat Jan 11 18:40:53 2020 10.250.7.77:16560 Connection reset, restarting [0]\nSat Jan 11 18:40:53 2020 100.64.1.1:57500 Connection reset, restarting [0]\nSat Jan 
11 18:40:59 2020 TCP connection established with [AF_INET]10.250.7.77:32698\nSat Jan 11 18:40:59 2020 10.250.7.77:32698 TCP connection established with [AF_INET]100.64.1.1:61930\nSat Jan 11 18:40:59 2020 10.250.7.77:32698 Connection reset, restarting [0]\nSat Jan 11 18:40:59 2020 100.64.1.1:61930 Connection reset, restarting [0]\nSat Jan 11 18:41:03 2020 TCP connection established with [AF_INET]10.250.7.77:16574\nSat Jan 11 18:41:03 2020 10.250.7.77:16574 TCP connection established with [AF_INET]100.64.1.1:57514\nSat Jan 11 18:41:03 2020 10.250.7.77:16574 Connection reset, restarting [0]\nSat Jan 11 18:41:03 2020 100.64.1.1:57514 Connection reset, restarting [0]\nSat Jan 11 18:41:09 2020 TCP connection established with [AF_INET]10.250.7.77:32708\nSat Jan 11 18:41:09 2020 10.250.7.77:32708 TCP connection established with [AF_INET]100.64.1.1:61940\nSat Jan 11 18:41:09 2020 10.250.7.77:32708 Connection reset, restarting [0]\nSat Jan 11 18:41:09 2020 100.64.1.1:61940 Connection reset, restarting [0]\nSat Jan 11 18:41:13 2020 TCP connection established with [AF_INET]10.250.7.77:16584\nSat Jan 11 18:41:13 2020 10.250.7.77:16584 TCP connection established with [AF_INET]100.64.1.1:57524\nSat Jan 11 18:41:13 2020 10.250.7.77:16584 Connection reset, restarting [0]\nSat Jan 11 18:41:13 2020 100.64.1.1:57524 Connection reset, restarting [0]\nSat Jan 11 18:41:19 2020 TCP connection established with [AF_INET]10.250.7.77:32724\nSat Jan 11 18:41:19 2020 10.250.7.77:32724 TCP connection established with [AF_INET]100.64.1.1:61956\nSat Jan 11 18:41:19 2020 10.250.7.77:32724 Connection reset, restarting [0]\nSat Jan 11 18:41:19 2020 100.64.1.1:61956 Connection reset, restarting [0]\nSat Jan 11 18:41:23 2020 TCP connection established with [AF_INET]10.250.7.77:16594\nSat Jan 11 18:41:23 2020 10.250.7.77:16594 TCP connection established with [AF_INET]100.64.1.1:57534\nSat Jan 11 18:41:23 2020 10.250.7.77:16594 Connection reset, restarting [0]\nSat Jan 11 18:41:23 2020 100.64.1.1:57534 Connection reset, restarting [0]\nSat Jan 11 18:41:29 2020 TCP connection established with [AF_INET]10.250.7.77:32730\nSat Jan 11 18:41:29 2020 10.250.7.77:32730 TCP connection established with [AF_INET]100.64.1.1:61962\nSat Jan 11 18:41:29 2020 10.250.7.77:32730 Connection reset, restarting [0]\nSat Jan 11 18:41:29 2020 100.64.1.1:61962 Connection reset, restarting [0]\nSat Jan 11 18:41:33 2020 TCP connection established with [AF_INET]10.250.7.77:16600\nSat Jan 11 18:41:33 2020 10.250.7.77:16600 TCP connection established with [AF_INET]100.64.1.1:57540\nSat Jan 11 18:41:33 2020 10.250.7.77:16600 Connection reset, restarting [0]\nSat Jan 11 18:41:33 2020 100.64.1.1:57540 Connection reset, restarting [0]\nSat Jan 11 18:41:39 2020 TCP connection established with [AF_INET]10.250.7.77:32738\nSat Jan 11 18:41:39 2020 10.250.7.77:32738 TCP connection established with [AF_INET]100.64.1.1:61970\nSat Jan 11 18:41:39 2020 10.250.7.77:32738 Connection reset, restarting [0]\nSat Jan 11 18:41:39 2020 100.64.1.1:61970 Connection reset, restarting [0]\nSat Jan 11 18:41:43 2020 TCP connection established with [AF_INET]10.250.7.77:16608\nSat Jan 11 18:41:43 2020 10.250.7.77:16608 TCP connection established with [AF_INET]100.64.1.1:57548\nSat Jan 11 18:41:43 2020 10.250.7.77:16608 Connection reset, restarting [0]\nSat Jan 11 18:41:43 2020 100.64.1.1:57548 Connection reset, restarting [0]\nSat Jan 11 18:41:49 2020 TCP connection established with [AF_INET]10.250.7.77:32744\nSat Jan 11 18:41:49 2020 10.250.7.77:32744 TCP connection established with 
[AF_INET]100.64.1.1:61976\nSat Jan 11 18:41:49 2020 10.250.7.77:32744 Connection reset, restarting [0]\nSat Jan 11 18:41:49 2020 100.64.1.1:61976 Connection reset, restarting [0]\nSat Jan 11 18:41:53 2020 TCP connection established with [AF_INET]10.250.7.77:16614\nSat Jan 11 18:41:53 2020 10.250.7.77:16614 TCP connection established with [AF_INET]100.64.1.1:57554\nSat Jan 11 18:41:53 2020 10.250.7.77:16614 Connection reset, restarting [0]\nSat Jan 11 18:41:53 2020 100.64.1.1:57554 Connection reset, restarting [0]\nSat Jan 11 18:41:59 2020 TCP connection established with [AF_INET]10.250.7.77:32756\nSat Jan 11 18:41:59 2020 10.250.7.77:32756 TCP connection established with [AF_INET]100.64.1.1:61988\nSat Jan 11 18:41:59 2020 10.250.7.77:32756 Connection reset, restarting [0]\nSat Jan 11 18:41:59 2020 100.64.1.1:61988 Connection reset, restarting [0]\nSat Jan 11 18:42:03 2020 TCP connection established with [AF_INET]10.250.7.77:16634\nSat Jan 11 18:42:03 2020 10.250.7.77:16634 TCP connection established with [AF_INET]100.64.1.1:57574\nSat Jan 11 18:42:03 2020 10.250.7.77:16634 Connection reset, restarting [0]\nSat Jan 11 18:42:03 2020 100.64.1.1:57574 Connection reset, restarting [0]\nSat Jan 11 18:42:09 2020 TCP connection established with [AF_INET]10.250.7.77:32772\nSat Jan 11 18:42:09 2020 10.250.7.77:32772 TCP connection established with [AF_INET]100.64.1.1:62004\nSat Jan 11 18:42:09 2020 10.250.7.77:32772 Connection reset, restarting [0]\nSat Jan 11 18:42:09 2020 100.64.1.1:62004 Connection reset, restarting [0]\nSat Jan 11 18:42:13 2020 TCP connection established with [AF_INET]10.250.7.77:16648\nSat Jan 11 18:42:13 2020 10.250.7.77:16648 TCP connection established with [AF_INET]100.64.1.1:57588\nSat Jan 11 18:42:13 2020 10.250.7.77:16648 Connection reset, restarting [0]\nSat Jan 11 18:42:13 2020 100.64.1.1:57588 Connection reset, restarting [0]\nSat Jan 11 18:42:19 2020 TCP connection established with [AF_INET]10.250.7.77:32786\nSat Jan 11 18:42:19 2020 10.250.7.77:32786 TCP connection established with [AF_INET]100.64.1.1:62018\nSat Jan 11 18:42:19 2020 10.250.7.77:32786 Connection reset, restarting [0]\nSat Jan 11 18:42:19 2020 100.64.1.1:62018 Connection reset, restarting [0]\nSat Jan 11 18:42:23 2020 TCP connection established with [AF_INET]10.250.7.77:16658\nSat Jan 11 18:42:23 2020 10.250.7.77:16658 TCP connection established with [AF_INET]100.64.1.1:57598\nSat Jan 11 18:42:23 2020 10.250.7.77:16658 Connection reset, restarting [0]\nSat Jan 11 18:42:23 2020 100.64.1.1:57598 Connection reset, restarting [0]\nSat Jan 11 18:42:29 2020 TCP connection established with [AF_INET]10.250.7.77:32794\nSat Jan 11 18:42:29 2020 10.250.7.77:32794 TCP connection established with [AF_INET]100.64.1.1:62026\nSat Jan 11 18:42:29 2020 10.250.7.77:32794 Connection reset, restarting [0]\nSat Jan 11 18:42:29 2020 100.64.1.1:62026 Connection reset, restarting [0]\nSat Jan 11 18:42:33 2020 TCP connection established with [AF_INET]10.250.7.77:16664\nSat Jan 11 18:42:33 2020 10.250.7.77:16664 TCP connection established with [AF_INET]100.64.1.1:57604\nSat Jan 11 18:42:33 2020 10.250.7.77:16664 Connection reset, restarting [0]\nSat Jan 11 18:42:33 2020 100.64.1.1:57604 Connection reset, restarting [0]\nSat Jan 11 18:42:39 2020 TCP connection established with [AF_INET]10.250.7.77:32802\nSat Jan 11 18:42:39 2020 10.250.7.77:32802 TCP connection established with [AF_INET]100.64.1.1:62034\nSat Jan 11 18:42:39 2020 10.250.7.77:32802 Connection reset, restarting [0]\nSat Jan 11 18:42:39 2020 100.64.1.1:62034 
Connection reset, restarting [0]\nSat Jan 11 18:42:43 2020 TCP connection established with [AF_INET]10.250.7.77:16672\nSat Jan 11 18:42:43 2020 10.250.7.77:16672 TCP connection established with [AF_INET]100.64.1.1:57612\nSat Jan 11 18:42:43 2020 10.250.7.77:16672 Connection reset, restarting [0]\nSat Jan 11 18:42:43 2020 100.64.1.1:57612 Connection reset, restarting [0]\nSat Jan 11 18:42:49 2020 TCP connection established with [AF_INET]10.250.7.77:32808\nSat Jan 11 18:42:49 2020 10.250.7.77:32808 TCP connection established with [AF_INET]100.64.1.1:62040\nSat Jan 11 18:42:49 2020 10.250.7.77:32808 Connection reset, restarting [0]\nSat Jan 11 18:42:49 2020 100.64.1.1:62040 Connection reset, restarting [0]\nSat Jan 11 18:42:53 2020 TCP connection established with [AF_INET]10.250.7.77:16678\nSat Jan 11 18:42:53 2020 10.250.7.77:16678 TCP connection established with [AF_INET]100.64.1.1:57618\nSat Jan 11 18:42:53 2020 10.250.7.77:16678 Connection reset, restarting [0]\nSat Jan 11 18:42:53 2020 100.64.1.1:57618 Connection reset, restarting [0]\nSat Jan 11 18:42:59 2020 TCP connection established with [AF_INET]10.250.7.77:32822\nSat Jan 11 18:42:59 2020 10.250.7.77:32822 TCP connection established with [AF_INET]100.64.1.1:62054\nSat Jan 11 18:42:59 2020 10.250.7.77:32822 Connection reset, restarting [0]\nSat Jan 11 18:42:59 2020 100.64.1.1:62054 Connection reset, restarting [0]\nSat Jan 11 18:43:03 2020 TCP connection established with [AF_INET]10.250.7.77:16692\nSat Jan 11 18:43:03 2020 10.250.7.77:16692 TCP connection established with [AF_INET]100.64.1.1:57632\nSat Jan 11 18:43:03 2020 10.250.7.77:16692 Connection reset, restarting [0]\nSat Jan 11 18:43:03 2020 100.64.1.1:57632 Connection reset, restarting [0]\nSat Jan 11 18:43:09 2020 TCP connection established with [AF_INET]10.250.7.77:32832\nSat Jan 11 18:43:09 2020 10.250.7.77:32832 TCP connection established with [AF_INET]100.64.1.1:62064\nSat Jan 11 18:43:09 2020 10.250.7.77:32832 Connection reset, restarting [0]\nSat Jan 11 18:43:09 2020 100.64.1.1:62064 Connection reset, restarting [0]\nSat Jan 11 18:43:13 2020 TCP connection established with [AF_INET]10.250.7.77:16702\nSat Jan 11 18:43:13 2020 10.250.7.77:16702 TCP connection established with [AF_INET]100.64.1.1:57642\nSat Jan 11 18:43:13 2020 10.250.7.77:16702 Connection reset, restarting [0]\nSat Jan 11 18:43:13 2020 100.64.1.1:57642 Connection reset, restarting [0]\nSat Jan 11 18:43:19 2020 TCP connection established with [AF_INET]10.250.7.77:32846\nSat Jan 11 18:43:19 2020 10.250.7.77:32846 TCP connection established with [AF_INET]100.64.1.1:62078\nSat Jan 11 18:43:19 2020 10.250.7.77:32846 Connection reset, restarting [0]\nSat Jan 11 18:43:19 2020 100.64.1.1:62078 Connection reset, restarting [0]\nSat Jan 11 18:43:23 2020 TCP connection established with [AF_INET]10.250.7.77:16716\nSat Jan 11 18:43:23 2020 10.250.7.77:16716 TCP connection established with [AF_INET]100.64.1.1:57656\nSat Jan 11 18:43:23 2020 10.250.7.77:16716 Connection reset, restarting [0]\nSat Jan 11 18:43:23 2020 100.64.1.1:57656 Connection reset, restarting [0]\nSat Jan 11 18:43:29 2020 TCP connection established with [AF_INET]10.250.7.77:32850\nSat Jan 11 18:43:29 2020 10.250.7.77:32850 TCP connection established with [AF_INET]100.64.1.1:62082\nSat Jan 11 18:43:29 2020 10.250.7.77:32850 Connection reset, restarting [0]\nSat Jan 11 18:43:29 2020 100.64.1.1:62082 Connection reset, restarting [0]\nSat Jan 11 18:43:33 2020 TCP connection established with [AF_INET]10.250.7.77:16722\nSat Jan 11 18:43:33 2020 
10.250.7.77:16722 Connection reset, restarting [0]\nSat Jan 11 18:43:33 2020 TCP connection established with [AF_INET]100.64.1.1:57662\nSat Jan 11 18:43:33 2020 100.64.1.1:57662 Connection reset, restarting [0]\nSat Jan 11 18:43:39 2020 TCP connection established with [AF_INET]10.250.7.77:32858\nSat Jan 11 18:43:39 2020 10.250.7.77:32858 TCP connection established with [AF_INET]100.64.1.1:62090\nSat Jan 11 18:43:39 2020 10.250.7.77:32858 Connection reset, restarting [0]\nSat Jan 11 18:43:39 2020 100.64.1.1:62090 Connection reset, restarting [0]\nSat Jan 11 18:43:43 2020 TCP connection established with [AF_INET]10.250.7.77:16730\nSat Jan 11 18:43:43 2020 10.250.7.77:16730 TCP connection established with [AF_INET]100.64.1.1:57670\nSat Jan 11 18:43:43 2020 10.250.7.77:16730 Connection reset, restarting [0]\nSat Jan 11 18:43:43 2020 100.64.1.1:57670 Connection reset, restarting [0]\nSat Jan 11 18:43:49 2020 TCP connection established with [AF_INET]10.250.7.77:32868\nSat Jan 11 18:43:49 2020 10.250.7.77:32868 TCP connection established with [AF_INET]100.64.1.1:62100\nSat Jan 11 18:43:49 2020 10.250.7.77:32868 Connection reset, restarting [0]\nSat Jan 11 18:43:49 2020 100.64.1.1:62100 Connection reset, restarting [0]\nSat Jan 11 18:43:53 2020 TCP connection established with [AF_INET]10.250.7.77:16736\nSat Jan 11 18:43:53 2020 10.250.7.77:16736 TCP connection established with [AF_INET]100.64.1.1:57676\nSat Jan 11 18:43:53 2020 10.250.7.77:16736 Connection reset, restarting [0]\nSat Jan 11 18:43:53 2020 100.64.1.1:57676 Connection reset, restarting [0]\nSat Jan 11 18:43:59 2020 TCP connection established with [AF_INET]10.250.7.77:32880\nSat Jan 11 18:43:59 2020 10.250.7.77:32880 TCP connection established with [AF_INET]100.64.1.1:62112\nSat Jan 11 18:43:59 2020 10.250.7.77:32880 Connection reset, restarting [0]\nSat Jan 11 18:43:59 2020 100.64.1.1:62112 Connection reset, restarting [0]\nSat Jan 11 18:44:03 2020 TCP connection established with [AF_INET]10.250.7.77:16756\nSat Jan 11 18:44:03 2020 10.250.7.77:16756 TCP connection established with [AF_INET]100.64.1.1:57696\nSat Jan 11 18:44:03 2020 10.250.7.77:16756 Connection reset, restarting [0]\nSat Jan 11 18:44:03 2020 100.64.1.1:57696 Connection reset, restarting [0]\nSat Jan 11 18:44:09 2020 TCP connection established with [AF_INET]10.250.7.77:32892\nSat Jan 11 18:44:09 2020 10.250.7.77:32892 TCP connection established with [AF_INET]100.64.1.1:62124\nSat Jan 11 18:44:09 2020 10.250.7.77:32892 Connection reset, restarting [0]\nSat Jan 11 18:44:09 2020 100.64.1.1:62124 Connection reset, restarting [0]\nSat Jan 11 18:44:13 2020 TCP connection established with [AF_INET]10.250.7.77:16760\nSat Jan 11 18:44:13 2020 10.250.7.77:16760 TCP connection established with [AF_INET]100.64.1.1:57700\nSat Jan 11 18:44:13 2020 10.250.7.77:16760 Connection reset, restarting [0]\nSat Jan 11 18:44:13 2020 100.64.1.1:57700 Connection reset, restarting [0]\nSat Jan 11 18:44:19 2020 TCP connection established with [AF_INET]10.250.7.77:32904\nSat Jan 11 18:44:19 2020 10.250.7.77:32904 TCP connection established with [AF_INET]100.64.1.1:62136\nSat Jan 11 18:44:19 2020 10.250.7.77:32904 Connection reset, restarting [0]\nSat Jan 11 18:44:19 2020 100.64.1.1:62136 Connection reset, restarting [0]\nSat Jan 11 18:44:23 2020 TCP connection established with [AF_INET]10.250.7.77:16770\nSat Jan 11 18:44:23 2020 10.250.7.77:16770 TCP connection established with [AF_INET]100.64.1.1:57710\nSat Jan 11 18:44:23 2020 10.250.7.77:16770 Connection reset, restarting [0]\nSat Jan 11 
18:44:23 2020 100.64.1.1:57710 Connection reset, restarting [0]\nSat Jan 11 18:44:29 2020 TCP connection established with [AF_INET]10.250.7.77:32908\nSat Jan 11 18:44:29 2020 10.250.7.77:32908 TCP connection established with [AF_INET]100.64.1.1:62140\nSat Jan 11 18:44:29 2020 10.250.7.77:32908 Connection reset, restarting [0]\nSat Jan 11 18:44:29 2020 100.64.1.1:62140 Connection reset, restarting [0]\nSat Jan 11 18:44:33 2020 TCP connection established with [AF_INET]10.250.7.77:16776\nSat Jan 11 18:44:33 2020 10.250.7.77:16776 TCP connection established with [AF_INET]100.64.1.1:57716\nSat Jan 11 18:44:33 2020 10.250.7.77:16776 Connection reset, restarting [0]\nSat Jan 11 18:44:33 2020 100.64.1.1:57716 Connection reset, restarting [0]\nSat Jan 11 18:44:39 2020 TCP connection established with [AF_INET]10.250.7.77:32916\nSat Jan 11 18:44:39 2020 10.250.7.77:32916 TCP connection established with [AF_INET]100.64.1.1:62148\nSat Jan 11 18:44:39 2020 10.250.7.77:32916 Connection reset, restarting [0]\nSat Jan 11 18:44:39 2020 100.64.1.1:62148 Connection reset, restarting [0]\nSat Jan 11 18:44:43 2020 TCP connection established with [AF_INET]10.250.7.77:16788\nSat Jan 11 18:44:43 2020 10.250.7.77:16788 TCP connection established with [AF_INET]100.64.1.1:57728\nSat Jan 11 18:44:43 2020 10.250.7.77:16788 Connection reset, restarting [0]\nSat Jan 11 18:44:43 2020 100.64.1.1:57728 Connection reset, restarting [0]\nSat Jan 11 18:44:49 2020 TCP connection established with [AF_INET]10.250.7.77:32922\nSat Jan 11 18:44:49 2020 10.250.7.77:32922 TCP connection established with [AF_INET]100.64.1.1:62154\nSat Jan 11 18:44:49 2020 10.250.7.77:32922 Connection reset, restarting [0]\nSat Jan 11 18:44:49 2020 100.64.1.1:62154 Connection reset, restarting [0]\nSat Jan 11 18:44:53 2020 TCP connection established with [AF_INET]10.250.7.77:16794\nSat Jan 11 18:44:53 2020 10.250.7.77:16794 TCP connection established with [AF_INET]100.64.1.1:57734\nSat Jan 11 18:44:53 2020 10.250.7.77:16794 Connection reset, restarting [0]\nSat Jan 11 18:44:53 2020 100.64.1.1:57734 Connection reset, restarting [0]\nSat Jan 11 18:44:59 2020 TCP connection established with [AF_INET]10.250.7.77:32938\nSat Jan 11 18:44:59 2020 10.250.7.77:32938 TCP connection established with [AF_INET]100.64.1.1:62170\nSat Jan 11 18:44:59 2020 10.250.7.77:32938 Connection reset, restarting [0]\nSat Jan 11 18:44:59 2020 100.64.1.1:62170 Connection reset, restarting [0]\nSat Jan 11 18:45:03 2020 TCP connection established with [AF_INET]10.250.7.77:16814\nSat Jan 11 18:45:03 2020 10.250.7.77:16814 TCP connection established with [AF_INET]100.64.1.1:57754\nSat Jan 11 18:45:03 2020 10.250.7.77:16814 Connection reset, restarting [0]\nSat Jan 11 18:45:03 2020 100.64.1.1:57754 Connection reset, restarting [0]\nSat Jan 11 18:45:09 2020 TCP connection established with [AF_INET]10.250.7.77:32950\nSat Jan 11 18:45:09 2020 10.250.7.77:32950 TCP connection established with [AF_INET]100.64.1.1:62182\nSat Jan 11 18:45:09 2020 10.250.7.77:32950 Connection reset, restarting [0]\nSat Jan 11 18:45:09 2020 100.64.1.1:62182 Connection reset, restarting [0]\nSat Jan 11 18:45:13 2020 TCP connection established with [AF_INET]10.250.7.77:16818\nSat Jan 11 18:45:13 2020 10.250.7.77:16818 TCP connection established with [AF_INET]100.64.1.1:57758\nSat Jan 11 18:45:13 2020 10.250.7.77:16818 Connection reset, restarting [0]\nSat Jan 11 18:45:13 2020 100.64.1.1:57758 Connection reset, restarting [0]\nSat Jan 11 18:45:19 2020 TCP connection established with [AF_INET]10.250.7.77:32962\nSat 
Jan 11 18:45:19 2020 10.250.7.77:32962 TCP connection established with [AF_INET]100.64.1.1:62194\nSat Jan 11 18:45:19 2020 10.250.7.77:32962 Connection reset, restarting [0]\nSat Jan 11 18:45:19 2020 100.64.1.1:62194 Connection reset, restarting [0]\nSat Jan 11 18:45:23 2020 TCP connection established with [AF_INET]10.250.7.77:16828\nSat Jan 11 18:45:23 2020 10.250.7.77:16828 TCP connection established with [AF_INET]100.64.1.1:57768\nSat Jan 11 18:45:23 2020 10.250.7.77:16828 Connection reset, restarting [0]\nSat Jan 11 18:45:23 2020 100.64.1.1:57768 Connection reset, restarting [0]\nSat Jan 11 18:45:29 2020 TCP connection established with [AF_INET]10.250.7.77:32966\nSat Jan 11 18:45:29 2020 10.250.7.77:32966 TCP connection established with [AF_INET]100.64.1.1:62198\nSat Jan 11 18:45:29 2020 10.250.7.77:32966 Connection reset, restarting [0]\nSat Jan 11 18:45:29 2020 100.64.1.1:62198 Connection reset, restarting [0]\nSat Jan 11 18:45:33 2020 TCP connection established with [AF_INET]10.250.7.77:16834\nSat Jan 11 18:45:33 2020 10.250.7.77:16834 TCP connection established with [AF_INET]100.64.1.1:57774\nSat Jan 11 18:45:33 2020 10.250.7.77:16834 Connection reset, restarting [0]\nSat Jan 11 18:45:33 2020 100.64.1.1:57774 Connection reset, restarting [0]\nSat Jan 11 18:45:39 2020 TCP connection established with [AF_INET]10.250.7.77:32974\nSat Jan 11 18:45:39 2020 10.250.7.77:32974 TCP connection established with [AF_INET]100.64.1.1:62206\nSat Jan 11 18:45:39 2020 10.250.7.77:32974 Connection reset, restarting [0]\nSat Jan 11 18:45:39 2020 100.64.1.1:62206 Connection reset, restarting [0]\nSat Jan 11 18:45:43 2020 TCP connection established with [AF_INET]10.250.7.77:16842\nSat Jan 11 18:45:43 2020 10.250.7.77:16842 TCP connection established with [AF_INET]100.64.1.1:57782\nSat Jan 11 18:45:43 2020 10.250.7.77:16842 Connection reset, restarting [0]\nSat Jan 11 18:45:43 2020 100.64.1.1:57782 Connection reset, restarting [0]\nSat Jan 11 18:45:49 2020 TCP connection established with [AF_INET]10.250.7.77:32980\nSat Jan 11 18:45:49 2020 10.250.7.77:32980 TCP connection established with [AF_INET]100.64.1.1:62212\nSat Jan 11 18:45:49 2020 10.250.7.77:32980 Connection reset, restarting [0]\nSat Jan 11 18:45:49 2020 100.64.1.1:62212 Connection reset, restarting [0]\nSat Jan 11 18:45:53 2020 TCP connection established with [AF_INET]10.250.7.77:16888\nSat Jan 11 18:45:53 2020 10.250.7.77:16888 TCP connection established with [AF_INET]100.64.1.1:57828\nSat Jan 11 18:45:53 2020 10.250.7.77:16888 Connection reset, restarting [0]\nSat Jan 11 18:45:53 2020 100.64.1.1:57828 Connection reset, restarting [0]\nSat Jan 11 18:45:59 2020 TCP connection established with [AF_INET]10.250.7.77:32992\nSat Jan 11 18:45:59 2020 10.250.7.77:32992 TCP connection established with [AF_INET]100.64.1.1:62224\nSat Jan 11 18:45:59 2020 10.250.7.77:32992 Connection reset, restarting [0]\nSat Jan 11 18:45:59 2020 100.64.1.1:62224 Connection reset, restarting [0]\nSat Jan 11 18:46:03 2020 TCP connection established with [AF_INET]10.250.7.77:16906\nSat Jan 11 18:46:03 2020 10.250.7.77:16906 TCP connection established with [AF_INET]100.64.1.1:57846\nSat Jan 11 18:46:03 2020 10.250.7.77:16906 Connection reset, restarting [0]\nSat Jan 11 18:46:03 2020 100.64.1.1:57846 Connection reset, restarting [0]\nSat Jan 11 18:46:09 2020 TCP connection established with [AF_INET]10.250.7.77:33004\nSat Jan 11 18:46:09 2020 10.250.7.77:33004 TCP connection established with [AF_INET]100.64.1.1:62236\nSat Jan 11 18:46:09 2020 10.250.7.77:33004 Connection 
reset, restarting [0]\nSat Jan 11 18:46:09 2020 100.64.1.1:62236 Connection reset, restarting [0]\nSat Jan 11 18:46:13 2020 TCP connection established with [AF_INET]10.250.7.77:16912\nSat Jan 11 18:46:13 2020 10.250.7.77:16912 TCP connection established with [AF_INET]100.64.1.1:57852\nSat Jan 11 18:46:13 2020 10.250.7.77:16912 Connection reset, restarting [0]\nSat Jan 11 18:46:13 2020 100.64.1.1:57852 Connection reset, restarting [0]\nSat Jan 11 18:46:19 2020 TCP connection established with [AF_INET]10.250.7.77:33020\nSat Jan 11 18:46:19 2020 10.250.7.77:33020 TCP connection established with [AF_INET]100.64.1.1:62252\nSat Jan 11 18:46:19 2020 10.250.7.77:33020 Connection reset, restarting [0]\nSat Jan 11 18:46:19 2020 100.64.1.1:62252 Connection reset, restarting [0]\nSat Jan 11 18:46:23 2020 TCP connection established with [AF_INET]10.250.7.77:16922\nSat Jan 11 18:46:23 2020 10.250.7.77:16922 TCP connection established with [AF_INET]100.64.1.1:57862\nSat Jan 11 18:46:23 2020 10.250.7.77:16922 Connection reset, restarting [0]\nSat Jan 11 18:46:23 2020 100.64.1.1:57862 Connection reset, restarting [0]\nSat Jan 11 18:46:29 2020 TCP connection established with [AF_INET]10.250.7.77:33024\nSat Jan 11 18:46:29 2020 10.250.7.77:33024 TCP connection established with [AF_INET]100.64.1.1:62256\nSat Jan 11 18:46:29 2020 10.250.7.77:33024 Connection reset, restarting [0]\nSat Jan 11 18:46:29 2020 100.64.1.1:62256 Connection reset, restarting [0]\nSat Jan 11 18:46:33 2020 TCP connection established with [AF_INET]10.250.7.77:16928\nSat Jan 11 18:46:33 2020 10.250.7.77:16928 TCP connection established with [AF_INET]100.64.1.1:57868\nSat Jan 11 18:46:33 2020 10.250.7.77:16928 Connection reset, restarting [0]\nSat Jan 11 18:46:33 2020 100.64.1.1:57868 Connection reset, restarting [0]\nSat Jan 11 18:46:39 2020 TCP connection established with [AF_INET]10.250.7.77:33032\nSat Jan 11 18:46:39 2020 10.250.7.77:33032 TCP connection established with [AF_INET]100.64.1.1:62264\nSat Jan 11 18:46:39 2020 10.250.7.77:33032 Connection reset, restarting [0]\nSat Jan 11 18:46:39 2020 100.64.1.1:62264 Connection reset, restarting [0]\nSat Jan 11 18:46:43 2020 TCP connection established with [AF_INET]10.250.7.77:16936\nSat Jan 11 18:46:43 2020 10.250.7.77:16936 TCP connection established with [AF_INET]100.64.1.1:57876\nSat Jan 11 18:46:43 2020 10.250.7.77:16936 Connection reset, restarting [0]\nSat Jan 11 18:46:43 2020 100.64.1.1:57876 Connection reset, restarting [0]\nSat Jan 11 18:46:49 2020 TCP connection established with [AF_INET]10.250.7.77:33038\nSat Jan 11 18:46:49 2020 10.250.7.77:33038 TCP connection established with [AF_INET]100.64.1.1:62270\nSat Jan 11 18:46:49 2020 10.250.7.77:33038 Connection reset, restarting [0]\nSat Jan 11 18:46:49 2020 100.64.1.1:62270 Connection reset, restarting [0]\nSat Jan 11 18:46:53 2020 TCP connection established with [AF_INET]10.250.7.77:16944\nSat Jan 11 18:46:53 2020 10.250.7.77:16944 TCP connection established with [AF_INET]100.64.1.1:57884\nSat Jan 11 18:46:53 2020 10.250.7.77:16944 Connection reset, restarting [0]\nSat Jan 11 18:46:53 2020 100.64.1.1:57884 Connection reset, restarting [0]\nSat Jan 11 18:46:59 2020 TCP connection established with [AF_INET]10.250.7.77:33052\nSat Jan 11 18:46:59 2020 10.250.7.77:33052 TCP connection established with [AF_INET]100.64.1.1:62284\nSat Jan 11 18:46:59 2020 10.250.7.77:33052 Connection reset, restarting [0]\nSat Jan 11 18:46:59 2020 100.64.1.1:62284 Connection reset, restarting [0]\nSat Jan 11 18:47:03 2020 TCP connection established 
with [AF_INET]10.250.7.77:16970\nSat Jan 11 18:47:03 2020 10.250.7.77:16970 TCP connection established with [AF_INET]100.64.1.1:57910\nSat Jan 11 18:47:03 2020 10.250.7.77:16970 Connection reset, restarting [0]\nSat Jan 11 18:47:03 2020 100.64.1.1:57910 Connection reset, restarting [0]\nSat Jan 11 18:47:09 2020 TCP connection established with [AF_INET]10.250.7.77:33062\nSat Jan 11 18:47:09 2020 10.250.7.77:33062 TCP connection established with [AF_INET]100.64.1.1:62294\nSat Jan 11 18:47:09 2020 10.250.7.77:33062 Connection reset, restarting [0]\nSat Jan 11 18:47:09 2020 100.64.1.1:62294 Connection reset, restarting [0]\nSat Jan 11 18:47:13 2020 TCP connection established with [AF_INET]10.250.7.77:16978\nSat Jan 11 18:47:13 2020 10.250.7.77:16978 TCP connection established with [AF_INET]100.64.1.1:57918\nSat Jan 11 18:47:13 2020 10.250.7.77:16978 Connection reset, restarting [0]\nSat Jan 11 18:47:13 2020 100.64.1.1:57918 Connection reset, restarting [0]\nSat Jan 11 18:47:19 2020 TCP connection established with [AF_INET]10.250.7.77:33074\nSat Jan 11 18:47:19 2020 10.250.7.77:33074 TCP connection established with [AF_INET]100.64.1.1:62306\nSat Jan 11 18:47:19 2020 10.250.7.77:33074 Connection reset, restarting [0]\nSat Jan 11 18:47:19 2020 100.64.1.1:62306 Connection reset, restarting [0]\nSat Jan 11 18:47:23 2020 TCP connection established with [AF_INET]100.64.1.1:57928\nSat Jan 11 18:47:23 2020 100.64.1.1:57928 TCP connection established with [AF_INET]10.250.7.77:16988\nSat Jan 11 18:47:23 2020 100.64.1.1:57928 Connection reset, restarting [0]\nSat Jan 11 18:47:23 2020 10.250.7.77:16988 Connection reset, restarting [0]\nSat Jan 11 18:47:29 2020 TCP connection established with [AF_INET]10.250.7.77:33082\nSat Jan 11 18:47:29 2020 10.250.7.77:33082 TCP connection established with [AF_INET]100.64.1.1:62314\nSat Jan 11 18:47:29 2020 10.250.7.77:33082 Connection reset, restarting [0]\nSat Jan 11 18:47:29 2020 100.64.1.1:62314 Connection reset, restarting [0]\nSat Jan 11 18:47:33 2020 TCP connection established with [AF_INET]10.250.7.77:16994\nSat Jan 11 18:47:33 2020 10.250.7.77:16994 TCP connection established with [AF_INET]100.64.1.1:57934\nSat Jan 11 18:47:33 2020 10.250.7.77:16994 Connection reset, restarting [0]\nSat Jan 11 18:47:33 2020 100.64.1.1:57934 Connection reset, restarting [0]\nSat Jan 11 18:47:39 2020 TCP connection established with [AF_INET]10.250.7.77:33098\nSat Jan 11 18:47:39 2020 10.250.7.77:33098 TCP connection established with [AF_INET]100.64.1.1:62330\nSat Jan 11 18:47:39 2020 10.250.7.77:33098 Connection reset, restarting [0]\nSat Jan 11 18:47:39 2020 100.64.1.1:62330 Connection reset, restarting [0]\nSat Jan 11 18:47:43 2020 TCP connection established with [AF_INET]10.250.7.77:17002\nSat Jan 11 18:47:43 2020 10.250.7.77:17002 TCP connection established with [AF_INET]100.64.1.1:57942\nSat Jan 11 18:47:43 2020 10.250.7.77:17002 Connection reset, restarting [0]\nSat Jan 11 18:47:43 2020 100.64.1.1:57942 Connection reset, restarting [0]\nSat Jan 11 18:47:49 2020 TCP connection established with [AF_INET]10.250.7.77:33104\nSat Jan 11 18:47:49 2020 10.250.7.77:33104 TCP connection established with [AF_INET]100.64.1.1:62336\nSat Jan 11 18:47:49 2020 10.250.7.77:33104 Connection reset, restarting [0]\nSat Jan 11 18:47:49 2020 100.64.1.1:62336 Connection reset, restarting [0]\nSat Jan 11 18:47:53 2020 TCP connection established with [AF_INET]10.250.7.77:17010\nSat Jan 11 18:47:53 2020 10.250.7.77:17010 TCP connection established with [AF_INET]100.64.1.1:57950\nSat Jan 11 18:47:53 
2020 10.250.7.77:17010 Connection reset, restarting [0]\nSat Jan 11 18:47:53 2020 100.64.1.1:57950 Connection reset, restarting [0]\nSat Jan 11 18:47:59 2020 TCP connection established with [AF_INET]10.250.7.77:33118\nSat Jan 11 18:47:59 2020 10.250.7.77:33118 TCP connection established with [AF_INET]100.64.1.1:62350\nSat Jan 11 18:47:59 2020 10.250.7.77:33118 Connection reset, restarting [0]\nSat Jan 11 18:47:59 2020 100.64.1.1:62350 Connection reset, restarting [0]\nSat Jan 11 18:48:03 2020 TCP connection established with [AF_INET]10.250.7.77:17028\nSat Jan 11 18:48:03 2020 10.250.7.77:17028 TCP connection established with [AF_INET]100.64.1.1:57968\nSat Jan 11 18:48:03 2020 10.250.7.77:17028 Connection reset, restarting [0]\nSat Jan 11 18:48:03 2020 100.64.1.1:57968 Connection reset, restarting [0]\nSat Jan 11 18:48:09 2020 TCP connection established with [AF_INET]10.250.7.77:33128\nSat Jan 11 18:48:09 2020 10.250.7.77:33128 TCP connection established with [AF_INET]100.64.1.1:62360\nSat Jan 11 18:48:09 2020 10.250.7.77:33128 Connection reset, restarting [0]\nSat Jan 11 18:48:09 2020 100.64.1.1:62360 Connection reset, restarting [0]\nSat Jan 11 18:48:13 2020 TCP connection established with [AF_INET]10.250.7.77:17032\nSat Jan 11 18:48:13 2020 10.250.7.77:17032 TCP connection established with [AF_INET]100.64.1.1:57972\nSat Jan 11 18:48:13 2020 10.250.7.77:17032 Connection reset, restarting [0]\nSat Jan 11 18:48:13 2020 100.64.1.1:57972 Connection reset, restarting [0]\nSat Jan 11 18:48:19 2020 TCP connection established with [AF_INET]10.250.7.77:33140\nSat Jan 11 18:48:19 2020 10.250.7.77:33140 TCP connection established with [AF_INET]100.64.1.1:62372\nSat Jan 11 18:48:19 2020 10.250.7.77:33140 Connection reset, restarting [0]\nSat Jan 11 18:48:19 2020 100.64.1.1:62372 Connection reset, restarting [0]\nSat Jan 11 18:48:23 2020 TCP connection established with [AF_INET]10.250.7.77:17046\nSat Jan 11 18:48:23 2020 10.250.7.77:17046 TCP connection established with [AF_INET]100.64.1.1:57986\nSat Jan 11 18:48:23 2020 10.250.7.77:17046 Connection reset, restarting [0]\nSat Jan 11 18:48:23 2020 100.64.1.1:57986 Connection reset, restarting [0]\nSat Jan 11 18:48:29 2020 TCP connection established with [AF_INET]10.250.7.77:33144\nSat Jan 11 18:48:29 2020 10.250.7.77:33144 TCP connection established with [AF_INET]100.64.1.1:62376\nSat Jan 11 18:48:29 2020 10.250.7.77:33144 Connection reset, restarting [0]\nSat Jan 11 18:48:29 2020 100.64.1.1:62376 Connection reset, restarting [0]\nSat Jan 11 18:48:33 2020 TCP connection established with [AF_INET]10.250.7.77:17052\nSat Jan 11 18:48:33 2020 10.250.7.77:17052 TCP connection established with [AF_INET]100.64.1.1:57992\nSat Jan 11 18:48:33 2020 10.250.7.77:17052 Connection reset, restarting [0]\nSat Jan 11 18:48:33 2020 100.64.1.1:57992 Connection reset, restarting [0]\nSat Jan 11 18:48:39 2020 TCP connection established with [AF_INET]10.250.7.77:33152\nSat Jan 11 18:48:39 2020 10.250.7.77:33152 TCP connection established with [AF_INET]100.64.1.1:62384\nSat Jan 11 18:48:39 2020 10.250.7.77:33152 Connection reset, restarting [0]\nSat Jan 11 18:48:39 2020 100.64.1.1:62384 Connection reset, restarting [0]\nSat Jan 11 18:48:43 2020 TCP connection established with [AF_INET]10.250.7.77:17062\nSat Jan 11 18:48:43 2020 10.250.7.77:17062 TCP connection established with [AF_INET]100.64.1.1:58002\nSat Jan 11 18:48:43 2020 10.250.7.77:17062 Connection reset, restarting [0]\nSat Jan 11 18:48:43 2020 100.64.1.1:58002 Connection reset, restarting [0]\nSat Jan 11 18:48:49 
2020 TCP connection established with [AF_INET]10.250.7.77:33198\nSat Jan 11 18:48:49 2020 10.250.7.77:33198 TCP connection established with [AF_INET]100.64.1.1:62430\nSat Jan 11 18:48:49 2020 10.250.7.77:33198 Connection reset, restarting [0]\nSat Jan 11 18:48:49 2020 100.64.1.1:62430 Connection reset, restarting [0]\nSat Jan 11 18:48:53 2020 TCP connection established with [AF_INET]10.250.7.77:17068\nSat Jan 11 18:48:53 2020 10.250.7.77:17068 TCP connection established with [AF_INET]100.64.1.1:58008\nSat Jan 11 18:48:53 2020 10.250.7.77:17068 Connection reset, restarting [0]\nSat Jan 11 18:48:53 2020 100.64.1.1:58008 Connection reset, restarting [0]\nSat Jan 11 18:48:59 2020 TCP connection established with [AF_INET]10.250.7.77:33210\nSat Jan 11 18:48:59 2020 10.250.7.77:33210 TCP connection established with [AF_INET]100.64.1.1:62442\nSat Jan 11 18:48:59 2020 10.250.7.77:33210 Connection reset, restarting [0]\nSat Jan 11 18:48:59 2020 100.64.1.1:62442 Connection reset, restarting [0]\nSat Jan 11 18:49:03 2020 TCP connection established with [AF_INET]10.250.7.77:17086\nSat Jan 11 18:49:03 2020 10.250.7.77:17086 TCP connection established with [AF_INET]100.64.1.1:58026\nSat Jan 11 18:49:03 2020 10.250.7.77:17086 Connection reset, restarting [0]\nSat Jan 11 18:49:03 2020 100.64.1.1:58026 Connection reset, restarting [0]\nSat Jan 11 18:49:09 2020 TCP connection established with [AF_INET]10.250.7.77:33222\nSat Jan 11 18:49:09 2020 10.250.7.77:33222 Connection reset, restarting [0]\nSat Jan 11 18:49:09 2020 TCP connection established with [AF_INET]100.64.1.1:62454\nSat Jan 11 18:49:09 2020 100.64.1.1:62454 Connection reset, restarting [0]\nSat Jan 11 18:49:13 2020 TCP connection established with [AF_INET]10.250.7.77:17090\nSat Jan 11 18:49:13 2020 10.250.7.77:17090 TCP connection established with [AF_INET]100.64.1.1:58030\nSat Jan 11 18:49:13 2020 10.250.7.77:17090 Connection reset, restarting [0]\nSat Jan 11 18:49:13 2020 100.64.1.1:58030 Connection reset, restarting [0]\nSat Jan 11 18:49:19 2020 TCP connection established with [AF_INET]10.250.7.77:33234\nSat Jan 11 18:49:19 2020 10.250.7.77:33234 TCP connection established with [AF_INET]100.64.1.1:62466\nSat Jan 11 18:49:19 2020 10.250.7.77:33234 Connection reset, restarting [0]\nSat Jan 11 18:49:19 2020 100.64.1.1:62466 Connection reset, restarting [0]\nSat Jan 11 18:49:23 2020 TCP connection established with [AF_INET]10.250.7.77:17100\nSat Jan 11 18:49:23 2020 10.250.7.77:17100 TCP connection established with [AF_INET]100.64.1.1:58040\nSat Jan 11 18:49:23 2020 10.250.7.77:17100 Connection reset, restarting [0]\nSat Jan 11 18:49:23 2020 100.64.1.1:58040 Connection reset, restarting [0]\nSat Jan 11 18:49:29 2020 TCP connection established with [AF_INET]10.250.7.77:33238\nSat Jan 11 18:49:29 2020 10.250.7.77:33238 Connection reset, restarting [0]\nSat Jan 11 18:49:29 2020 TCP connection established with [AF_INET]100.64.1.1:62470\nSat Jan 11 18:49:29 2020 100.64.1.1:62470 Connection reset, restarting [0]\nSat Jan 11 18:49:33 2020 TCP connection established with [AF_INET]10.250.7.77:17106\nSat Jan 11 18:49:33 2020 10.250.7.77:17106 TCP connection established with [AF_INET]100.64.1.1:58046\nSat Jan 11 18:49:33 2020 10.250.7.77:17106 Connection reset, restarting [0]\nSat Jan 11 18:49:33 2020 100.64.1.1:58046 Connection reset, restarting [0]\nSat Jan 11 18:49:39 2020 TCP connection established with [AF_INET]10.250.7.77:33246\nSat Jan 11 18:49:39 2020 10.250.7.77:33246 Connection reset, restarting [0]\nSat Jan 11 18:49:39 2020 TCP connection 
established with [AF_INET]100.64.1.1:62478\nSat Jan 11 18:49:39 2020 100.64.1.1:62478 Connection reset, restarting [0]\nSat Jan 11 18:49:43 2020 TCP connection established with [AF_INET]10.250.7.77:17120\nSat Jan 11 18:49:43 2020 10.250.7.77:17120 TCP connection established with [AF_INET]100.64.1.1:58060\nSat Jan 11 18:49:43 2020 10.250.7.77:17120 Connection reset, restarting [0]\nSat Jan 11 18:49:43 2020 100.64.1.1:58060 Connection reset, restarting [0]\nSat Jan 11 18:49:49 2020 TCP connection established with [AF_INET]10.250.7.77:33254\nSat Jan 11 18:49:49 2020 10.250.7.77:33254 TCP connection established with [AF_INET]100.64.1.1:62486\nSat Jan 11 18:49:49 2020 10.250.7.77:33254 Connection reset, restarting [0]\nSat Jan 11 18:49:49 2020 100.64.1.1:62486 Connection reset, restarting [0]\nSat Jan 11 18:49:53 2020 TCP connection established with [AF_INET]10.250.7.77:17126\nSat Jan 11 18:49:53 2020 10.250.7.77:17126 TCP connection established with [AF_INET]100.64.1.1:58066\nSat Jan 11 18:49:53 2020 10.250.7.77:17126 Connection reset, restarting [0]\nSat Jan 11 18:49:53 2020 100.64.1.1:58066 Connection reset, restarting [0]\nSat Jan 11 18:49:59 2020 TCP connection established with [AF_INET]10.250.7.77:33270\nSat Jan 11 18:49:59 2020 10.250.7.77:33270 Connection reset, restarting [0]\nSat Jan 11 18:49:59 2020 TCP connection established with [AF_INET]100.64.1.1:62502\nSat Jan 11 18:49:59 2020 100.64.1.1:62502 Connection reset, restarting [0]\nSat Jan 11 18:50:03 2020 TCP connection established with [AF_INET]10.250.7.77:17144\nSat Jan 11 18:50:03 2020 10.250.7.77:17144 TCP connection established with [AF_INET]100.64.1.1:58084\nSat Jan 11 18:50:03 2020 10.250.7.77:17144 Connection reset, restarting [0]\nSat Jan 11 18:50:03 2020 100.64.1.1:58084 Connection reset, restarting [0]\nSat Jan 11 18:50:09 2020 TCP connection established with [AF_INET]10.250.7.77:33280\nSat Jan 11 18:50:09 2020 10.250.7.77:33280 TCP connection established with [AF_INET]100.64.1.1:62512\nSat Jan 11 18:50:09 2020 10.250.7.77:33280 Connection reset, restarting [0]\nSat Jan 11 18:50:09 2020 100.64.1.1:62512 Connection reset, restarting [0]\nSat Jan 11 18:50:13 2020 TCP connection established with [AF_INET]10.250.7.77:17148\nSat Jan 11 18:50:13 2020 10.250.7.77:17148 TCP connection established with [AF_INET]100.64.1.1:58088\nSat Jan 11 18:50:13 2020 10.250.7.77:17148 Connection reset, restarting [0]\nSat Jan 11 18:50:13 2020 100.64.1.1:58088 Connection reset, restarting [0]\nSat Jan 11 18:50:19 2020 TCP connection established with [AF_INET]10.250.7.77:33292\nSat Jan 11 18:50:19 2020 10.250.7.77:33292 Connection reset, restarting [0]\nSat Jan 11 18:50:19 2020 TCP connection established with [AF_INET]100.64.1.1:62524\nSat Jan 11 18:50:19 2020 100.64.1.1:62524 Connection reset, restarting [0]\nSat Jan 11 18:50:23 2020 TCP connection established with [AF_INET]10.250.7.77:17158\nSat Jan 11 18:50:23 2020 10.250.7.77:17158 TCP connection established with [AF_INET]100.64.1.1:58098\nSat Jan 11 18:50:23 2020 10.250.7.77:17158 Connection reset, restarting [0]\nSat Jan 11 18:50:23 2020 100.64.1.1:58098 Connection reset, restarting [0]\nSat Jan 11 18:50:29 2020 TCP connection established with [AF_INET]10.250.7.77:33296\nSat Jan 11 18:50:29 2020 10.250.7.77:33296 TCP connection established with [AF_INET]100.64.1.1:62528\nSat Jan 11 18:50:29 2020 10.250.7.77:33296 Connection reset, restarting [0]\nSat Jan 11 18:50:29 2020 100.64.1.1:62528 Connection reset, restarting [0]\nSat Jan 11 18:50:33 2020 TCP connection established with 
[AF_INET]10.250.7.77:17166\nSat Jan 11 18:50:33 2020 10.250.7.77:17166 TCP connection established with [AF_INET]100.64.1.1:58106\nSat Jan 11 18:50:33 2020 10.250.7.77:17166 Connection reset, restarting [0]\nSat Jan 11 18:50:33 2020 100.64.1.1:58106 Connection reset, restarting [0]\nSat Jan 11 18:50:39 2020 TCP connection established with [AF_INET]10.250.7.77:33304\nSat Jan 11 18:50:39 2020 10.250.7.77:33304 TCP connection established with [AF_INET]100.64.1.1:62536\nSat Jan 11 18:50:39 2020 10.250.7.77:33304 Connection reset, restarting [0]\nSat Jan 11 18:50:39 2020 100.64.1.1:62536 Connection reset, restarting [0]\nSat Jan 11 18:50:43 2020 TCP connection established with [AF_INET]10.250.7.77:17174\nSat Jan 11 18:50:43 2020 10.250.7.77:17174 TCP connection established with [AF_INET]100.64.1.1:58114\nSat Jan 11 18:50:43 2020 10.250.7.77:17174 Connection reset, restarting [0]\nSat Jan 11 18:50:43 2020 100.64.1.1:58114 Connection reset, restarting [0]\nSat Jan 11 18:50:49 2020 TCP connection established with [AF_INET]10.250.7.77:33312\nSat Jan 11 18:50:49 2020 10.250.7.77:33312 TCP connection established with [AF_INET]100.64.1.1:62544\nSat Jan 11 18:50:49 2020 10.250.7.77:33312 Connection reset, restarting [0]\nSat Jan 11 18:50:49 2020 100.64.1.1:62544 Connection reset, restarting [0]\nSat Jan 11 18:50:53 2020 TCP connection established with [AF_INET]10.250.7.77:17184\nSat Jan 11 18:50:53 2020 10.250.7.77:17184 TCP connection established with [AF_INET]100.64.1.1:58124\nSat Jan 11 18:50:53 2020 10.250.7.77:17184 Connection reset, restarting [0]\nSat Jan 11 18:50:53 2020 100.64.1.1:58124 Connection reset, restarting [0]\nSat Jan 11 18:50:59 2020 TCP connection established with [AF_INET]10.250.7.77:33324\nSat Jan 11 18:50:59 2020 10.250.7.77:33324 TCP connection established with [AF_INET]100.64.1.1:62556\nSat Jan 11 18:50:59 2020 10.250.7.77:33324 Connection reset, restarting [0]\nSat Jan 11 18:50:59 2020 100.64.1.1:62556 Connection reset, restarting [0]\nSat Jan 11 18:51:03 2020 TCP connection established with [AF_INET]10.250.7.77:17202\nSat Jan 11 18:51:03 2020 10.250.7.77:17202 TCP connection established with [AF_INET]100.64.1.1:58142\nSat Jan 11 18:51:03 2020 10.250.7.77:17202 Connection reset, restarting [0]\nSat Jan 11 18:51:03 2020 100.64.1.1:58142 Connection reset, restarting [0]\nSat Jan 11 18:51:09 2020 TCP connection established with [AF_INET]10.250.7.77:33334\nSat Jan 11 18:51:09 2020 10.250.7.77:33334 TCP connection established with [AF_INET]100.64.1.1:62566\nSat Jan 11 18:51:09 2020 10.250.7.77:33334 Connection reset, restarting [0]\nSat Jan 11 18:51:09 2020 100.64.1.1:62566 Connection reset, restarting [0]\nSat Jan 11 18:51:13 2020 TCP connection established with [AF_INET]10.250.7.77:17206\nSat Jan 11 18:51:13 2020 10.250.7.77:17206 TCP connection established with [AF_INET]100.64.1.1:58146\nSat Jan 11 18:51:13 2020 10.250.7.77:17206 Connection reset, restarting [0]\nSat Jan 11 18:51:13 2020 100.64.1.1:58146 Connection reset, restarting [0]\nSat Jan 11 18:51:19 2020 TCP connection established with [AF_INET]10.250.7.77:33350\nSat Jan 11 18:51:19 2020 10.250.7.77:33350 TCP connection established with [AF_INET]100.64.1.1:62582\nSat Jan 11 18:51:19 2020 10.250.7.77:33350 Connection reset, restarting [0]\nSat Jan 11 18:51:19 2020 100.64.1.1:62582 Connection reset, restarting [0]\nSat Jan 11 18:51:23 2020 TCP connection established with [AF_INET]10.250.7.77:17216\nSat Jan 11 18:51:23 2020 10.250.7.77:17216 TCP connection established with [AF_INET]100.64.1.1:58156\nSat Jan 11 18:51:23 2020 
10.250.7.77:17216 Connection reset, restarting [0]\nSat Jan 11 18:51:23 2020 100.64.1.1:58156 Connection reset, restarting [0]\nSat Jan 11 18:51:29 2020 TCP connection established with [AF_INET]10.250.7.77:33354\nSat Jan 11 18:51:29 2020 10.250.7.77:33354 TCP connection established with [AF_INET]100.64.1.1:62586\nSat Jan 11 18:51:29 2020 10.250.7.77:33354 Connection reset, restarting [0]\nSat Jan 11 18:51:29 2020 100.64.1.1:62586 Connection reset, restarting [0]\nSat Jan 11 18:51:33 2020 TCP connection established with [AF_INET]10.250.7.77:17224\nSat Jan 11 18:51:33 2020 10.250.7.77:17224 TCP connection established with [AF_INET]100.64.1.1:58164\nSat Jan 11 18:51:33 2020 10.250.7.77:17224 Connection reset, restarting [0]\nSat Jan 11 18:51:33 2020 100.64.1.1:58164 Connection reset, restarting [0]\nSat Jan 11 18:51:39 2020 TCP connection established with [AF_INET]10.250.7.77:33364\nSat Jan 11 18:51:39 2020 10.250.7.77:33364 TCP connection established with [AF_INET]100.64.1.1:62596\nSat Jan 11 18:51:39 2020 10.250.7.77:33364 Connection reset, restarting [0]\nSat Jan 11 18:51:39 2020 100.64.1.1:62596 Connection reset, restarting [0]\nSat Jan 11 18:51:43 2020 TCP connection established with [AF_INET]10.250.7.77:17232\nSat Jan 11 18:51:43 2020 10.250.7.77:17232 TCP connection established with [AF_INET]100.64.1.1:58172\nSat Jan 11 18:51:43 2020 10.250.7.77:17232 Connection reset, restarting [0]\nSat Jan 11 18:51:43 2020 100.64.1.1:58172 Connection reset, restarting [0]\nSat Jan 11 18:51:49 2020 TCP connection established with [AF_INET]10.250.7.77:33370\nSat Jan 11 18:51:49 2020 10.250.7.77:33370 TCP connection established with [AF_INET]100.64.1.1:62602\nSat Jan 11 18:51:49 2020 10.250.7.77:33370 Connection reset, restarting [0]\nSat Jan 11 18:51:49 2020 100.64.1.1:62602 Connection reset, restarting [0]\nSat Jan 11 18:51:53 2020 TCP connection established with [AF_INET]10.250.7.77:17238\nSat Jan 11 18:51:53 2020 10.250.7.77:17238 TCP connection established with [AF_INET]100.64.1.1:58178\nSat Jan 11 18:51:53 2020 10.250.7.77:17238 Connection reset, restarting [0]\nSat Jan 11 18:51:53 2020 100.64.1.1:58178 Connection reset, restarting [0]\nSat Jan 11 18:51:59 2020 TCP connection established with [AF_INET]10.250.7.77:33382\nSat Jan 11 18:51:59 2020 10.250.7.77:33382 TCP connection established with [AF_INET]100.64.1.1:62614\nSat Jan 11 18:51:59 2020 10.250.7.77:33382 Connection reset, restarting [0]\nSat Jan 11 18:51:59 2020 100.64.1.1:62614 Connection reset, restarting [0]\nSat Jan 11 18:52:03 2020 TCP connection established with [AF_INET]10.250.7.77:17256\nSat Jan 11 18:52:03 2020 10.250.7.77:17256 TCP connection established with [AF_INET]100.64.1.1:58196\nSat Jan 11 18:52:03 2020 10.250.7.77:17256 Connection reset, restarting [0]\nSat Jan 11 18:52:03 2020 100.64.1.1:58196 Connection reset, restarting [0]\nSat Jan 11 18:52:09 2020 TCP connection established with [AF_INET]10.250.7.77:33392\nSat Jan 11 18:52:09 2020 10.250.7.77:33392 TCP connection established with [AF_INET]100.64.1.1:62624\nSat Jan 11 18:52:09 2020 10.250.7.77:33392 Connection reset, restarting [0]\nSat Jan 11 18:52:09 2020 100.64.1.1:62624 Connection reset, restarting [0]\nSat Jan 11 18:52:13 2020 TCP connection established with [AF_INET]10.250.7.77:17264\nSat Jan 11 18:52:13 2020 10.250.7.77:17264 TCP connection established with [AF_INET]100.64.1.1:58204\nSat Jan 11 18:52:13 2020 10.250.7.77:17264 Connection reset, restarting [0]\nSat Jan 11 18:52:13 2020 100.64.1.1:58204 Connection reset, restarting [0]\nSat Jan 11 18:52:19 2020 
TCP connection established with [AF_INET]10.250.7.77:33404\nSat Jan 11 18:52:19 2020 10.250.7.77:33404 TCP connection established with [AF_INET]100.64.1.1:62636\nSat Jan 11 18:52:19 2020 10.250.7.77:33404 Connection reset, restarting [0]\nSat Jan 11 18:52:19 2020 100.64.1.1:62636 Connection reset, restarting [0]\nSat Jan 11 18:52:23 2020 TCP connection established with [AF_INET]10.250.7.77:17274\nSat Jan 11 18:52:23 2020 10.250.7.77:17274 TCP connection established with [AF_INET]100.64.1.1:58214\nSat Jan 11 18:52:23 2020 10.250.7.77:17274 Connection reset, restarting [0]\nSat Jan 11 18:52:23 2020 100.64.1.1:58214 Connection reset, restarting [0]\nSat Jan 11 18:52:29 2020 TCP connection established with [AF_INET]10.250.7.77:33412\nSat Jan 11 18:52:29 2020 10.250.7.77:33412 TCP connection established with [AF_INET]100.64.1.1:62644\nSat Jan 11 18:52:29 2020 10.250.7.77:33412 Connection reset, restarting [0]\nSat Jan 11 18:52:29 2020 100.64.1.1:62644 Connection reset, restarting [0]\nSat Jan 11 18:52:33 2020 TCP connection established with [AF_INET]10.250.7.77:17282\nSat Jan 11 18:52:33 2020 10.250.7.77:17282 TCP connection established with [AF_INET]100.64.1.1:58222\nSat Jan 11 18:52:33 2020 10.250.7.77:17282 Connection reset, restarting [0]\nSat Jan 11 18:52:33 2020 100.64.1.1:58222 Connection reset, restarting [0]\nSat Jan 11 18:52:39 2020 TCP connection established with [AF_INET]10.250.7.77:33422\nSat Jan 11 18:52:39 2020 10.250.7.77:33422 TCP connection established with [AF_INET]100.64.1.1:62654\nSat Jan 11 18:52:39 2020 10.250.7.77:33422 Connection reset, restarting [0]\nSat Jan 11 18:52:39 2020 100.64.1.1:62654 Connection reset, restarting [0]\nSat Jan 11 18:52:43 2020 TCP connection established with [AF_INET]10.250.7.77:17290\nSat Jan 11 18:52:43 2020 10.250.7.77:17290 TCP connection established with [AF_INET]100.64.1.1:58230\nSat Jan 11 18:52:43 2020 10.250.7.77:17290 Connection reset, restarting [0]\nSat Jan 11 18:52:43 2020 100.64.1.1:58230 Connection reset, restarting [0]\nSat Jan 11 18:52:49 2020 TCP connection established with [AF_INET]10.250.7.77:33428\nSat Jan 11 18:52:49 2020 10.250.7.77:33428 TCP connection established with [AF_INET]100.64.1.1:62660\nSat Jan 11 18:52:49 2020 10.250.7.77:33428 Connection reset, restarting [0]\nSat Jan 11 18:52:49 2020 100.64.1.1:62660 Connection reset, restarting [0]\nSat Jan 11 18:52:53 2020 TCP connection established with [AF_INET]10.250.7.77:17296\nSat Jan 11 18:52:53 2020 10.250.7.77:17296 TCP connection established with [AF_INET]100.64.1.1:58236\nSat Jan 11 18:52:53 2020 10.250.7.77:17296 Connection reset, restarting [0]\nSat Jan 11 18:52:53 2020 100.64.1.1:58236 Connection reset, restarting [0]\nSat Jan 11 18:52:59 2020 TCP connection established with [AF_INET]10.250.7.77:33440\nSat Jan 11 18:52:59 2020 10.250.7.77:33440 TCP connection established with [AF_INET]100.64.1.1:62672\nSat Jan 11 18:52:59 2020 10.250.7.77:33440 Connection reset, restarting [0]\nSat Jan 11 18:52:59 2020 100.64.1.1:62672 Connection reset, restarting [0]\nSat Jan 11 18:53:03 2020 TCP connection established with [AF_INET]10.250.7.77:17316\nSat Jan 11 18:53:03 2020 10.250.7.77:17316 TCP connection established with [AF_INET]100.64.1.1:58256\nSat Jan 11 18:53:03 2020 10.250.7.77:17316 Connection reset, restarting [0]\nSat Jan 11 18:53:03 2020 100.64.1.1:58256 Connection reset, restarting [0]\nSat Jan 11 18:53:09 2020 TCP connection established with [AF_INET]10.250.7.77:33450\nSat Jan 11 18:53:09 2020 10.250.7.77:33450 TCP connection established with 
[AF_INET]100.64.1.1:62682\nSat Jan 11 18:53:09 2020 10.250.7.77:33450 Connection reset, restarting [0]\nSat Jan 11 18:53:09 2020 100.64.1.1:62682 Connection reset, restarting [0]\nSat Jan 11 18:53:13 2020 TCP connection established with [AF_INET]10.250.7.77:17320\nSat Jan 11 18:53:13 2020 10.250.7.77:17320 TCP connection established with [AF_INET]100.64.1.1:58260\nSat Jan 11 18:53:13 2020 10.250.7.77:17320 Connection reset, restarting [0]\nSat Jan 11 18:53:13 2020 100.64.1.1:58260 Connection reset, restarting [0]\nSat Jan 11 18:53:19 2020 TCP connection established with [AF_INET]10.250.7.77:33462\nSat Jan 11 18:53:19 2020 10.250.7.77:33462 TCP connection established with [AF_INET]100.64.1.1:62694\nSat Jan 11 18:53:19 2020 10.250.7.77:33462 Connection reset, restarting [0]\nSat Jan 11 18:53:19 2020 100.64.1.1:62694 Connection reset, restarting [0]\nSat Jan 11 18:53:23 2020 TCP connection established with [AF_INET]10.250.7.77:17336\nSat Jan 11 18:53:23 2020 10.250.7.77:17336 TCP connection established with [AF_INET]100.64.1.1:58276\nSat Jan 11 18:53:23 2020 10.250.7.77:17336 Connection reset, restarting [0]\nSat Jan 11 18:53:23 2020 100.64.1.1:58276 Connection reset, restarting [0]\nSat Jan 11 18:53:29 2020 TCP connection established with [AF_INET]10.250.7.77:33468\nSat Jan 11 18:53:29 2020 10.250.7.77:33468 TCP connection established with [AF_INET]100.64.1.1:62700\nSat Jan 11 18:53:29 2020 10.250.7.77:33468 Connection reset, restarting [0]\nSat Jan 11 18:53:29 2020 100.64.1.1:62700 Connection reset, restarting [0]\nSat Jan 11 18:53:33 2020 TCP connection established with [AF_INET]10.250.7.77:17344\nSat Jan 11 18:53:33 2020 10.250.7.77:17344 TCP connection established with [AF_INET]100.64.1.1:58284\nSat Jan 11 18:53:33 2020 10.250.7.77:17344 Connection reset, restarting [0]\nSat Jan 11 18:53:33 2020 100.64.1.1:58284 Connection reset, restarting [0]\nSat Jan 11 18:53:39 2020 TCP connection established with [AF_INET]10.250.7.77:33476\nSat Jan 11 18:53:39 2020 10.250.7.77:33476 TCP connection established with [AF_INET]100.64.1.1:62708\nSat Jan 11 18:53:39 2020 10.250.7.77:33476 Connection reset, restarting [0]\nSat Jan 11 18:53:39 2020 100.64.1.1:62708 Connection reset, restarting [0]\nSat Jan 11 18:53:43 2020 TCP connection established with [AF_INET]10.250.7.77:17352\nSat Jan 11 18:53:43 2020 10.250.7.77:17352 TCP connection established with [AF_INET]100.64.1.1:58292\nSat Jan 11 18:53:43 2020 10.250.7.77:17352 Connection reset, restarting [0]\nSat Jan 11 18:53:43 2020 100.64.1.1:58292 Connection reset, restarting [0]\nSat Jan 11 18:53:49 2020 TCP connection established with [AF_INET]10.250.7.77:33486\nSat Jan 11 18:53:49 2020 10.250.7.77:33486 TCP connection established with [AF_INET]100.64.1.1:62718\nSat Jan 11 18:53:49 2020 10.250.7.77:33486 Connection reset, restarting [0]\nSat Jan 11 18:53:49 2020 100.64.1.1:62718 Connection reset, restarting [0]\nSat Jan 11 18:53:53 2020 TCP connection established with [AF_INET]10.250.7.77:17358\nSat Jan 11 18:53:53 2020 10.250.7.77:17358 TCP connection established with [AF_INET]100.64.1.1:58298\nSat Jan 11 18:53:53 2020 10.250.7.77:17358 Connection reset, restarting [0]\nSat Jan 11 18:53:53 2020 100.64.1.1:58298 Connection reset, restarting [0]\nSat Jan 11 18:53:59 2020 TCP connection established with [AF_INET]10.250.7.77:33498\nSat Jan 11 18:53:59 2020 10.250.7.77:33498 TCP connection established with [AF_INET]100.64.1.1:62730\nSat Jan 11 18:53:59 2020 10.250.7.77:33498 Connection reset, restarting [0]\nSat Jan 11 18:53:59 2020 100.64.1.1:62730 
Connection reset, restarting [0]\nSat Jan 11 18:54:03 2020 TCP connection established with [AF_INET]100.64.1.1:58326\nSat Jan 11 18:54:03 2020 100.64.1.1:58326 TCP connection established with [AF_INET]10.250.7.77:17386\nSat Jan 11 18:54:03 2020 100.64.1.1:58326 Connection reset, restarting [0]\nSat Jan 11 18:54:03 2020 10.250.7.77:17386 Connection reset, restarting [0]\nSat Jan 11 18:54:09 2020 TCP connection established with [AF_INET]10.250.7.77:33508\nSat Jan 11 18:54:09 2020 10.250.7.77:33508 TCP connection established with [AF_INET]100.64.1.1:62740\nSat Jan 11 18:54:09 2020 10.250.7.77:33508 Connection reset, restarting [0]\nSat Jan 11 18:54:09 2020 100.64.1.1:62740 Connection reset, restarting [0]\nSat Jan 11 18:54:13 2020 TCP connection established with [AF_INET]10.250.7.77:17390\nSat Jan 11 18:54:13 2020 10.250.7.77:17390 TCP connection established with [AF_INET]100.64.1.1:58330\nSat Jan 11 18:54:13 2020 10.250.7.77:17390 Connection reset, restarting [0]\nSat Jan 11 18:54:13 2020 100.64.1.1:58330 Connection reset, restarting [0]\nSat Jan 11 18:54:19 2020 TCP connection established with [AF_INET]10.250.7.77:33520\nSat Jan 11 18:54:19 2020 10.250.7.77:33520 TCP connection established with [AF_INET]100.64.1.1:62752\nSat Jan 11 18:54:19 2020 10.250.7.77:33520 Connection reset, restarting [0]\nSat Jan 11 18:54:19 2020 100.64.1.1:62752 Connection reset, restarting [0]\nSat Jan 11 18:54:23 2020 TCP connection established with [AF_INET]10.250.7.77:17402\nSat Jan 11 18:54:23 2020 10.250.7.77:17402 TCP connection established with [AF_INET]100.64.1.1:58342\nSat Jan 11 18:54:23 2020 10.250.7.77:17402 Connection reset, restarting [0]\nSat Jan 11 18:54:23 2020 100.64.1.1:58342 Connection reset, restarting [0]\nSat Jan 11 18:54:29 2020 TCP connection established with [AF_INET]10.250.7.77:33526\nSat Jan 11 18:54:29 2020 10.250.7.77:33526 TCP connection established with [AF_INET]100.64.1.1:62758\nSat Jan 11 18:54:29 2020 10.250.7.77:33526 Connection reset, restarting [0]\nSat Jan 11 18:54:29 2020 100.64.1.1:62758 Connection reset, restarting [0]\nSat Jan 11 18:54:33 2020 TCP connection established with [AF_INET]10.250.7.77:17408\nSat Jan 11 18:54:33 2020 10.250.7.77:17408 TCP connection established with [AF_INET]100.64.1.1:58348\nSat Jan 11 18:54:33 2020 10.250.7.77:17408 Connection reset, restarting [0]\nSat Jan 11 18:54:33 2020 100.64.1.1:58348 Connection reset, restarting [0]\nSat Jan 11 18:54:39 2020 TCP connection established with [AF_INET]10.250.7.77:33534\nSat Jan 11 18:54:39 2020 10.250.7.77:33534 TCP connection established with [AF_INET]100.64.1.1:62766\nSat Jan 11 18:54:39 2020 10.250.7.77:33534 Connection reset, restarting [0]\nSat Jan 11 18:54:39 2020 100.64.1.1:62766 Connection reset, restarting [0]\nSat Jan 11 18:54:43 2020 TCP connection established with [AF_INET]10.250.7.77:17422\nSat Jan 11 18:54:43 2020 10.250.7.77:17422 TCP connection established with [AF_INET]100.64.1.1:58362\nSat Jan 11 18:54:43 2020 10.250.7.77:17422 Connection reset, restarting [0]\nSat Jan 11 18:54:43 2020 100.64.1.1:58362 Connection reset, restarting [0]\nSat Jan 11 18:54:49 2020 TCP connection established with [AF_INET]10.250.7.77:33540\nSat Jan 11 18:54:49 2020 10.250.7.77:33540 TCP connection established with [AF_INET]100.64.1.1:62772\nSat Jan 11 18:54:49 2020 10.250.7.77:33540 Connection reset, restarting [0]\nSat Jan 11 18:54:49 2020 100.64.1.1:62772 Connection reset, restarting [0]\nSat Jan 11 18:54:53 2020 TCP connection established with [AF_INET]10.250.7.77:17428\nSat Jan 11 18:54:53 2020 
10.250.7.77:17428 Connection reset, restarting [0]\nSat Jan 11 18:54:53 2020 TCP connection established with [AF_INET]100.64.1.1:58368\nSat Jan 11 18:54:53 2020 100.64.1.1:58368 Connection reset, restarting [0]\nSat Jan 11 18:54:59 2020 TCP connection established with [AF_INET]10.250.7.77:33556\nSat Jan 11 18:54:59 2020 10.250.7.77:33556 TCP connection established with [AF_INET]100.64.1.1:62788\nSat Jan 11 18:54:59 2020 10.250.7.77:33556 Connection reset, restarting [0]\nSat Jan 11 18:54:59 2020 100.64.1.1:62788 Connection reset, restarting [0]\nSat Jan 11 18:55:03 2020 TCP connection established with [AF_INET]10.250.7.77:17446\nSat Jan 11 18:55:03 2020 10.250.7.77:17446 TCP connection established with [AF_INET]100.64.1.1:58386\nSat Jan 11 18:55:03 2020 10.250.7.77:17446 Connection reset, restarting [0]\nSat Jan 11 18:55:03 2020 100.64.1.1:58386 Connection reset, restarting [0]\nSat Jan 11 18:55:09 2020 TCP connection established with [AF_INET]10.250.7.77:33566\nSat Jan 11 18:55:09 2020 10.250.7.77:33566 TCP connection established with [AF_INET]100.64.1.1:62798\nSat Jan 11 18:55:09 2020 10.250.7.77:33566 Connection reset, restarting [0]\nSat Jan 11 18:55:09 2020 100.64.1.1:62798 Connection reset, restarting [0]\nSat Jan 11 18:55:13 2020 TCP connection established with [AF_INET]10.250.7.77:17450\nSat Jan 11 18:55:13 2020 10.250.7.77:17450 TCP connection established with [AF_INET]100.64.1.1:58390\nSat Jan 11 18:55:13 2020 10.250.7.77:17450 Connection reset, restarting [0]\nSat Jan 11 18:55:13 2020 100.64.1.1:58390 Connection reset, restarting [0]\nSat Jan 11 18:55:19 2020 TCP connection established with [AF_INET]10.250.7.77:33578\nSat Jan 11 18:55:19 2020 10.250.7.77:33578 TCP connection established with [AF_INET]100.64.1.1:62810\nSat Jan 11 18:55:19 2020 10.250.7.77:33578 Connection reset, restarting [0]\nSat Jan 11 18:55:19 2020 100.64.1.1:62810 Connection reset, restarting [0]\nSat Jan 11 18:55:23 2020 TCP connection established with [AF_INET]10.250.7.77:17462\nSat Jan 11 18:55:23 2020 10.250.7.77:17462 TCP connection established with [AF_INET]100.64.1.1:58402\nSat Jan 11 18:55:23 2020 10.250.7.77:17462 Connection reset, restarting [0]\nSat Jan 11 18:55:23 2020 100.64.1.1:58402 Connection reset, restarting [0]\nSat Jan 11 18:55:29 2020 TCP connection established with [AF_INET]10.250.7.77:33584\nSat Jan 11 18:55:29 2020 10.250.7.77:33584 TCP connection established with [AF_INET]100.64.1.1:62816\nSat Jan 11 18:55:29 2020 10.250.7.77:33584 Connection reset, restarting [0]\nSat Jan 11 18:55:29 2020 100.64.1.1:62816 Connection reset, restarting [0]\nSat Jan 11 18:55:33 2020 TCP connection established with [AF_INET]10.250.7.77:17468\nSat Jan 11 18:55:33 2020 10.250.7.77:17468 TCP connection established with [AF_INET]100.64.1.1:58408\nSat Jan 11 18:55:33 2020 10.250.7.77:17468 Connection reset, restarting [0]\nSat Jan 11 18:55:33 2020 100.64.1.1:58408 Connection reset, restarting [0]\nSat Jan 11 18:55:39 2020 TCP connection established with [AF_INET]10.250.7.77:33592\nSat Jan 11 18:55:39 2020 10.250.7.77:33592 TCP connection established with [AF_INET]100.64.1.1:62824\nSat Jan 11 18:55:39 2020 10.250.7.77:33592 Connection reset, restarting [0]\nSat Jan 11 18:55:39 2020 100.64.1.1:62824 Connection reset, restarting [0]\nSat Jan 11 18:55:43 2020 TCP connection established with [AF_INET]10.250.7.77:17490\nSat Jan 11 18:55:43 2020 10.250.7.77:17490 TCP connection established with [AF_INET]100.64.1.1:58430\nSat Jan 11 18:55:43 2020 10.250.7.77:17490 Connection reset, restarting [0]\nSat Jan 11 
18:55:43 2020 100.64.1.1:58430 Connection reset, restarting [0]\nSat Jan 11 18:55:49 2020 TCP connection established with [AF_INET]10.250.7.77:33598\nSat Jan 11 18:55:49 2020 10.250.7.77:33598 TCP connection established with [AF_INET]100.64.1.1:62830\nSat Jan 11 18:55:49 2020 10.250.7.77:33598 Connection reset, restarting [0]\nSat Jan 11 18:55:49 2020 100.64.1.1:62830 Connection reset, restarting [0]\nSat Jan 11 18:55:53 2020 TCP connection established with [AF_INET]100.64.1.1:58460\nSat Jan 11 18:55:53 2020 100.64.1.1:58460 TCP connection established with [AF_INET]10.250.7.77:17520\nSat Jan 11 18:55:53 2020 100.64.1.1:58460 Connection reset, restarting [0]\nSat Jan 11 18:55:53 2020 10.250.7.77:17520 Connection reset, restarting [0]\nSat Jan 11 18:55:59 2020 TCP connection established with [AF_INET]10.250.7.77:33612\nSat Jan 11 18:55:59 2020 10.250.7.77:33612 TCP connection established with [AF_INET]100.64.1.1:62844\nSat Jan 11 18:55:59 2020 10.250.7.77:33612 Connection reset, restarting [0]\nSat Jan 11 18:55:59 2020 100.64.1.1:62844 Connection reset, restarting [0]\nSat Jan 11 18:56:03 2020 TCP connection established with [AF_INET]10.250.7.77:17544\nSat Jan 11 18:56:03 2020 10.250.7.77:17544 TCP connection established with [AF_INET]100.64.1.1:58484\nSat Jan 11 18:56:03 2020 10.250.7.77:17544 Connection reset, restarting [0]\nSat Jan 11 18:56:03 2020 100.64.1.1:58484 Connection reset, restarting [0]\nSat Jan 11 18:56:09 2020 TCP connection established with [AF_INET]10.250.7.77:33628\nSat Jan 11 18:56:09 2020 10.250.7.77:33628 TCP connection established with [AF_INET]100.64.1.1:62860\nSat Jan 11 18:56:09 2020 10.250.7.77:33628 Connection reset, restarting [0]\nSat Jan 11 18:56:09 2020 100.64.1.1:62860 Connection reset, restarting [0]\nSat Jan 11 18:56:13 2020 TCP connection established with [AF_INET]10.250.7.77:17552\nSat Jan 11 18:56:13 2020 10.250.7.77:17552 TCP connection established with [AF_INET]100.64.1.1:58492\nSat Jan 11 18:56:13 2020 10.250.7.77:17552 Connection reset, restarting [0]\nSat Jan 11 18:56:13 2020 100.64.1.1:58492 Connection reset, restarting [0]\nSat Jan 11 18:56:19 2020 TCP connection established with [AF_INET]10.250.7.77:33646\nSat Jan 11 18:56:19 2020 10.250.7.77:33646 TCP connection established with [AF_INET]100.64.1.1:62878\nSat Jan 11 18:56:19 2020 10.250.7.77:33646 Connection reset, restarting [0]\nSat Jan 11 18:56:19 2020 100.64.1.1:62878 Connection reset, restarting [0]\nSat Jan 11 18:56:23 2020 TCP connection established with [AF_INET]10.250.7.77:17562\nSat Jan 11 18:56:23 2020 10.250.7.77:17562 TCP connection established with [AF_INET]100.64.1.1:58502\nSat Jan 11 18:56:23 2020 10.250.7.77:17562 Connection reset, restarting [0]\nSat Jan 11 18:56:23 2020 100.64.1.1:58502 Connection reset, restarting [0]\nSat Jan 11 18:56:29 2020 TCP connection established with [AF_INET]10.250.7.77:33652\nSat Jan 11 18:56:29 2020 10.250.7.77:33652 TCP connection established with [AF_INET]100.64.1.1:62884\nSat Jan 11 18:56:29 2020 10.250.7.77:33652 Connection reset, restarting [0]\nSat Jan 11 18:56:29 2020 100.64.1.1:62884 Connection reset, restarting [0]\nSat Jan 11 18:56:33 2020 TCP connection established with [AF_INET]10.250.7.77:17568\nSat Jan 11 18:56:33 2020 10.250.7.77:17568 TCP connection established with [AF_INET]100.64.1.1:58508\nSat Jan 11 18:56:33 2020 10.250.7.77:17568 Connection reset, restarting [0]\nSat Jan 11 18:56:33 2020 100.64.1.1:58508 Connection reset, restarting [0]\nSat Jan 11 18:56:39 2020 TCP connection established with [AF_INET]10.250.7.77:33660\nSat 
Jan 11 18:56:39 2020 10.250.7.77:33660 TCP connection established with [AF_INET]100.64.1.1:62892\nSat Jan 11 18:56:39 2020 10.250.7.77:33660 Connection reset, restarting [0]\nSat Jan 11 18:56:39 2020 100.64.1.1:62892 Connection reset, restarting [0]\nSat Jan 11 18:56:43 2020 TCP connection established with [AF_INET]10.250.7.77:17580\nSat Jan 11 18:56:43 2020 10.250.7.77:17580 TCP connection established with [AF_INET]100.64.1.1:58520\nSat Jan 11 18:56:43 2020 10.250.7.77:17580 Connection reset, restarting [0]\nSat Jan 11 18:56:43 2020 100.64.1.1:58520 Connection reset, restarting [0]\nSat Jan 11 18:56:49 2020 TCP connection established with [AF_INET]10.250.7.77:33666\nSat Jan 11 18:56:49 2020 10.250.7.77:33666 TCP connection established with [AF_INET]100.64.1.1:62898\nSat Jan 11 18:56:49 2020 10.250.7.77:33666 Connection reset, restarting [0]\nSat Jan 11 18:56:49 2020 100.64.1.1:62898 Connection reset, restarting [0]\nSat Jan 11 18:56:53 2020 TCP connection established with [AF_INET]10.250.7.77:17586\nSat Jan 11 18:56:53 2020 10.250.7.77:17586 TCP connection established with [AF_INET]100.64.1.1:58526\nSat Jan 11 18:56:53 2020 10.250.7.77:17586 Connection reset, restarting [0]\nSat Jan 11 18:56:53 2020 100.64.1.1:58526 Connection reset, restarting [0]\nSat Jan 11 18:56:59 2020 TCP connection established with [AF_INET]10.250.7.77:33680\nSat Jan 11 18:56:59 2020 10.250.7.77:33680 TCP connection established with [AF_INET]100.64.1.1:62912\nSat Jan 11 18:56:59 2020 10.250.7.77:33680 Connection reset, restarting [0]\nSat Jan 11 18:56:59 2020 100.64.1.1:62912 Connection reset, restarting [0]\nSat Jan 11 18:57:03 2020 TCP connection established with [AF_INET]10.250.7.77:17604\nSat Jan 11 18:57:03 2020 10.250.7.77:17604 TCP connection established with [AF_INET]100.64.1.1:58544\nSat Jan 11 18:57:03 2020 10.250.7.77:17604 Connection reset, restarting [0]\nSat Jan 11 18:57:03 2020 100.64.1.1:58544 Connection reset, restarting [0]\nSat Jan 11 18:57:09 2020 TCP connection established with [AF_INET]10.250.7.77:33690\nSat Jan 11 18:57:09 2020 10.250.7.77:33690 TCP connection established with [AF_INET]100.64.1.1:62922\nSat Jan 11 18:57:09 2020 10.250.7.77:33690 Connection reset, restarting [0]\nSat Jan 11 18:57:09 2020 100.64.1.1:62922 Connection reset, restarting [0]\nSat Jan 11 18:57:13 2020 TCP connection established with [AF_INET]10.250.7.77:17614\nSat Jan 11 18:57:13 2020 10.250.7.77:17614 TCP connection established with [AF_INET]100.64.1.1:58554\nSat Jan 11 18:57:13 2020 10.250.7.77:17614 Connection reset, restarting [0]\nSat Jan 11 18:57:13 2020 100.64.1.1:58554 Connection reset, restarting [0]\nSat Jan 11 18:57:19 2020 TCP connection established with [AF_INET]10.250.7.77:33704\nSat Jan 11 18:57:19 2020 10.250.7.77:33704 TCP connection established with [AF_INET]100.64.1.1:62936\nSat Jan 11 18:57:19 2020 10.250.7.77:33704 Connection reset, restarting [0]\nSat Jan 11 18:57:19 2020 100.64.1.1:62936 Connection reset, restarting [0]\nSat Jan 11 18:57:23 2020 TCP connection established with [AF_INET]10.250.7.77:17624\nSat Jan 11 18:57:23 2020 10.250.7.77:17624 TCP connection established with [AF_INET]100.64.1.1:58564\nSat Jan 11 18:57:23 2020 10.250.7.77:17624 Connection reset, restarting [0]\nSat Jan 11 18:57:23 2020 100.64.1.1:58564 Connection reset, restarting [0]\nSat Jan 11 18:57:29 2020 TCP connection established with [AF_INET]10.250.7.77:33712\nSat Jan 11 18:57:29 2020 10.250.7.77:33712 TCP connection established with [AF_INET]100.64.1.1:62944\nSat Jan 11 18:57:29 2020 10.250.7.77:33712 Connection 
reset, restarting [0]\nSat Jan 11 18:57:29 2020 100.64.1.1:62944 Connection reset, restarting [0]\nSat Jan 11 18:57:33 2020 TCP connection established with [AF_INET]10.250.7.77:17630\nSat Jan 11 18:57:33 2020 10.250.7.77:17630 TCP connection established with [AF_INET]100.64.1.1:58570\nSat Jan 11 18:57:33 2020 10.250.7.77:17630 Connection reset, restarting [0]\nSat Jan 11 18:57:33 2020 100.64.1.1:58570 Connection reset, restarting [0]\nSat Jan 11 18:57:39 2020 TCP connection established with [AF_INET]10.250.7.77:33722\nSat Jan 11 18:57:39 2020 10.250.7.77:33722 TCP connection established with [AF_INET]100.64.1.1:62954\nSat Jan 11 18:57:39 2020 10.250.7.77:33722 Connection reset, restarting [0]\nSat Jan 11 18:57:39 2020 100.64.1.1:62954 Connection reset, restarting [0]\nSat Jan 11 18:57:43 2020 TCP connection established with [AF_INET]10.250.7.77:17638\nSat Jan 11 18:57:43 2020 10.250.7.77:17638 TCP connection established with [AF_INET]100.64.1.1:58578\nSat Jan 11 18:57:43 2020 10.250.7.77:17638 Connection reset, restarting [0]\nSat Jan 11 18:57:43 2020 100.64.1.1:58578 Connection reset, restarting [0]\nSat Jan 11 18:57:49 2020 TCP connection established with [AF_INET]10.250.7.77:33728\nSat Jan 11 18:57:49 2020 10.250.7.77:33728 TCP connection established with [AF_INET]100.64.1.1:62960\nSat Jan 11 18:57:49 2020 10.250.7.77:33728 Connection reset, restarting [0]\nSat Jan 11 18:57:49 2020 100.64.1.1:62960 Connection reset, restarting [0]\nSat Jan 11 18:57:53 2020 TCP connection established with [AF_INET]10.250.7.77:17644\nSat Jan 11 18:57:53 2020 10.250.7.77:17644 TCP connection established with [AF_INET]100.64.1.1:58584\nSat Jan 11 18:57:53 2020 10.250.7.77:17644 Connection reset, restarting [0]\nSat Jan 11 18:57:53 2020 100.64.1.1:58584 Connection reset, restarting [0]\nSat Jan 11 18:57:59 2020 TCP connection established with [AF_INET]10.250.7.77:33740\nSat Jan 11 18:57:59 2020 10.250.7.77:33740 TCP connection established with [AF_INET]100.64.1.1:62972\nSat Jan 11 18:57:59 2020 10.250.7.77:33740 Connection reset, restarting [0]\nSat Jan 11 18:57:59 2020 100.64.1.1:62972 Connection reset, restarting [0]\nSat Jan 11 18:58:03 2020 TCP connection established with [AF_INET]10.250.7.77:17662\nSat Jan 11 18:58:03 2020 10.250.7.77:17662 TCP connection established with [AF_INET]100.64.1.1:58602\nSat Jan 11 18:58:03 2020 10.250.7.77:17662 Connection reset, restarting [0]\nSat Jan 11 18:58:03 2020 100.64.1.1:58602 Connection reset, restarting [0]\nSat Jan 11 18:58:09 2020 TCP connection established with [AF_INET]10.250.7.77:33752\nSat Jan 11 18:58:09 2020 10.250.7.77:33752 TCP connection established with [AF_INET]100.64.1.1:62984\nSat Jan 11 18:58:09 2020 10.250.7.77:33752 Connection reset, restarting [0]\nSat Jan 11 18:58:09 2020 100.64.1.1:62984 Connection reset, restarting [0]\nSat Jan 11 18:58:13 2020 TCP connection established with [AF_INET]10.250.7.77:17668\nSat Jan 11 18:58:13 2020 10.250.7.77:17668 TCP connection established with [AF_INET]100.64.1.1:58608\nSat Jan 11 18:58:13 2020 10.250.7.77:17668 Connection reset, restarting [0]\nSat Jan 11 18:58:13 2020 100.64.1.1:58608 Connection reset, restarting [0]\nSat Jan 11 18:58:19 2020 TCP connection established with [AF_INET]10.250.7.77:33764\nSat Jan 11 18:58:19 2020 10.250.7.77:33764 TCP connection established with [AF_INET]100.64.1.1:62996\nSat Jan 11 18:58:19 2020 10.250.7.77:33764 Connection reset, restarting [0]\nSat Jan 11 18:58:19 2020 100.64.1.1:62996 Connection reset, restarting [0]\nSat Jan 11 18:58:23 2020 TCP connection established 
with [AF_INET]10.250.7.77:17682\nSat Jan 11 18:58:23 2020 10.250.7.77:17682 TCP connection established with [AF_INET]100.64.1.1:58622\nSat Jan 11 18:58:23 2020 10.250.7.77:17682 Connection reset, restarting [0]\nSat Jan 11 18:58:23 2020 100.64.1.1:58622 Connection reset, restarting [0]\nSat Jan 11 18:58:29 2020 TCP connection established with [AF_INET]10.250.7.77:33768\nSat Jan 11 18:58:29 2020 10.250.7.77:33768 TCP connection established with [AF_INET]100.64.1.1:63000\nSat Jan 11 18:58:29 2020 10.250.7.77:33768 Connection reset, restarting [0]\nSat Jan 11 18:58:29 2020 100.64.1.1:63000 Connection reset, restarting [0]\nSat Jan 11 18:58:33 2020 TCP connection established with [AF_INET]10.250.7.77:17688\nSat Jan 11 18:58:33 2020 10.250.7.77:17688 TCP connection established with [AF_INET]100.64.1.1:58628\nSat Jan 11 18:58:33 2020 10.250.7.77:17688 Connection reset, restarting [0]\nSat Jan 11 18:58:33 2020 100.64.1.1:58628 Connection reset, restarting [0]\nSat Jan 11 18:58:39 2020 TCP connection established with [AF_INET]10.250.7.77:33810\nSat Jan 11 18:58:39 2020 10.250.7.77:33810 TCP connection established with [AF_INET]100.64.1.1:63042\nSat Jan 11 18:58:39 2020 10.250.7.77:33810 Connection reset, restarting [0]\nSat Jan 11 18:58:39 2020 100.64.1.1:63042 Connection reset, restarting [0]\nSat Jan 11 18:58:43 2020 TCP connection established with [AF_INET]10.250.7.77:17696\nSat Jan 11 18:58:43 2020 10.250.7.77:17696 TCP connection established with [AF_INET]100.64.1.1:58636\nSat Jan 11 18:58:43 2020 10.250.7.77:17696 Connection reset, restarting [0]\nSat Jan 11 18:58:43 2020 100.64.1.1:58636 Connection reset, restarting [0]\nSat Jan 11 18:58:49 2020 TCP connection established with [AF_INET]10.250.7.77:33820\nSat Jan 11 18:58:49 2020 10.250.7.77:33820 TCP connection established with [AF_INET]100.64.1.1:63052\nSat Jan 11 18:58:49 2020 10.250.7.77:33820 Connection reset, restarting [0]\nSat Jan 11 18:58:49 2020 100.64.1.1:63052 Connection reset, restarting [0]\nSat Jan 11 18:58:53 2020 TCP connection established with [AF_INET]10.250.7.77:17702\nSat Jan 11 18:58:53 2020 10.250.7.77:17702 TCP connection established with [AF_INET]100.64.1.1:58642\nSat Jan 11 18:58:53 2020 10.250.7.77:17702 Connection reset, restarting [0]\nSat Jan 11 18:58:53 2020 100.64.1.1:58642 Connection reset, restarting [0]\nSat Jan 11 18:58:59 2020 TCP connection established with [AF_INET]10.250.7.77:33832\nSat Jan 11 18:58:59 2020 10.250.7.77:33832 TCP connection established with [AF_INET]100.64.1.1:63064\nSat Jan 11 18:58:59 2020 10.250.7.77:33832 Connection reset, restarting [0]\nSat Jan 11 18:58:59 2020 100.64.1.1:63064 Connection reset, restarting [0]\nSat Jan 11 18:59:03 2020 TCP connection established with [AF_INET]10.250.7.77:17722\nSat Jan 11 18:59:03 2020 10.250.7.77:17722 TCP connection established with [AF_INET]100.64.1.1:58662\nSat Jan 11 18:59:03 2020 10.250.7.77:17722 Connection reset, restarting [0]\nSat Jan 11 18:59:03 2020 100.64.1.1:58662 Connection reset, restarting [0]\nSat Jan 11 18:59:09 2020 TCP connection established with [AF_INET]10.250.7.77:33846\nSat Jan 11 18:59:09 2020 10.250.7.77:33846 TCP connection established with [AF_INET]100.64.1.1:63078\nSat Jan 11 18:59:09 2020 10.250.7.77:33846 Connection reset, restarting [0]\nSat Jan 11 18:59:09 2020 100.64.1.1:63078 Connection reset, restarting [0]\nSat Jan 11 18:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_VER=2.4.6\nSat Jan 11 18:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_PLAT=linux\nSat Jan 11 18:59:10 2020 vpn-seed/100.64.1.1:51060 
peer info: IV_PROTO=2\nSat Jan 11 18:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_LZ4=1\nSat Jan 11 18:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_LZ4v2=1\nSat Jan 11 18:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_LZO=1\nSat Jan 11 18:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_COMP_STUB=1\nSat Jan 11 18:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_COMP_STUBv2=1\nSat Jan 11 18:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_TCPNL=1\nSat Jan 11 18:59:13 2020 TCP connection established with [AF_INET]10.250.7.77:17726\nSat Jan 11 18:59:13 2020 10.250.7.77:17726 TCP connection established with [AF_INET]100.64.1.1:58666\nSat Jan 11 18:59:13 2020 10.250.7.77:17726 Connection reset, restarting [0]\nSat Jan 11 18:59:13 2020 100.64.1.1:58666 Connection reset, restarting [0]\nSat Jan 11 18:59:19 2020 TCP connection established with [AF_INET]10.250.7.77:33858\nSat Jan 11 18:59:19 2020 10.250.7.77:33858 TCP connection established with [AF_INET]100.64.1.1:63090\nSat Jan 11 18:59:19 2020 10.250.7.77:33858 Connection reset, restarting [0]\nSat Jan 11 18:59:19 2020 100.64.1.1:63090 Connection reset, restarting [0]\nSat Jan 11 18:59:23 2020 TCP connection established with [AF_INET]10.250.7.77:17736\nSat Jan 11 18:59:23 2020 10.250.7.77:17736 TCP connection established with [AF_INET]100.64.1.1:58676\nSat Jan 11 18:59:23 2020 10.250.7.77:17736 Connection reset, restarting [0]\nSat Jan 11 18:59:23 2020 100.64.1.1:58676 Connection reset, restarting [0]\nSat Jan 11 18:59:29 2020 TCP connection established with [AF_INET]10.250.7.77:33862\nSat Jan 11 18:59:29 2020 10.250.7.77:33862 TCP connection established with [AF_INET]100.64.1.1:63094\nSat Jan 11 18:59:29 2020 10.250.7.77:33862 Connection reset, restarting [0]\nSat Jan 11 18:59:29 2020 100.64.1.1:63094 Connection reset, restarting [0]\nSat Jan 11 18:59:33 2020 TCP connection established with [AF_INET]10.250.7.77:17742\nSat Jan 11 18:59:33 2020 10.250.7.77:17742 TCP connection established with [AF_INET]100.64.1.1:58682\nSat Jan 11 18:59:33 2020 10.250.7.77:17742 Connection reset, restarting [0]\nSat Jan 11 18:59:33 2020 100.64.1.1:58682 Connection reset, restarting [0]\nSat Jan 11 18:59:39 2020 TCP connection established with [AF_INET]10.250.7.77:33874\nSat Jan 11 18:59:39 2020 10.250.7.77:33874 TCP connection established with [AF_INET]100.64.1.1:63106\nSat Jan 11 18:59:39 2020 10.250.7.77:33874 Connection reset, restarting [0]\nSat Jan 11 18:59:39 2020 100.64.1.1:63106 Connection reset, restarting [0]\nSat Jan 11 18:59:43 2020 TCP connection established with [AF_INET]10.250.7.77:17754\nSat Jan 11 18:59:43 2020 10.250.7.77:17754 TCP connection established with [AF_INET]100.64.1.1:58694\nSat Jan 11 18:59:43 2020 10.250.7.77:17754 Connection reset, restarting [0]\nSat Jan 11 18:59:43 2020 100.64.1.1:58694 Connection reset, restarting [0]\nSat Jan 11 18:59:49 2020 TCP connection established with [AF_INET]10.250.7.77:33880\nSat Jan 11 18:59:49 2020 10.250.7.77:33880 Connection reset, restarting [0]\nSat Jan 11 18:59:49 2020 TCP connection established with [AF_INET]100.64.1.1:63112\nSat Jan 11 18:59:49 2020 100.64.1.1:63112 Connection reset, restarting [0]\nSat Jan 11 18:59:53 2020 TCP connection established with [AF_INET]10.250.7.77:17760\nSat Jan 11 18:59:53 2020 10.250.7.77:17760 TCP connection established with [AF_INET]100.64.1.1:58700\nSat Jan 11 18:59:53 2020 10.250.7.77:17760 Connection reset, restarting [0]\nSat Jan 11 18:59:53 2020 100.64.1.1:58700 Connection reset, restarting [0]\nSat Jan 11 18:59:59 2020 TCP connection 
established with [AF_INET]10.250.7.77:33896\nSat Jan 11 18:59:59 2020 10.250.7.77:33896 TCP connection established with [AF_INET]100.64.1.1:63128\nSat Jan 11 18:59:59 2020 10.250.7.77:33896 Connection reset, restarting [0]\nSat Jan 11 18:59:59 2020 100.64.1.1:63128 Connection reset, restarting [0]\nSat Jan 11 19:00:03 2020 TCP connection established with [AF_INET]10.250.7.77:17784\nSat Jan 11 19:00:03 2020 10.250.7.77:17784 TCP connection established with [AF_INET]100.64.1.1:58724\nSat Jan 11 19:00:03 2020 10.250.7.77:17784 Connection reset, restarting [0]\nSat Jan 11 19:00:03 2020 100.64.1.1:58724 Connection reset, restarting [0]\nSat Jan 11 19:00:09 2020 TCP connection established with [AF_INET]10.250.7.77:33910\nSat Jan 11 19:00:09 2020 10.250.7.77:33910 TCP connection established with [AF_INET]100.64.1.1:63142\nSat Jan 11 19:00:09 2020 10.250.7.77:33910 Connection reset, restarting [0]\nSat Jan 11 19:00:09 2020 100.64.1.1:63142 Connection reset, restarting [0]\nSat Jan 11 19:00:13 2020 TCP connection established with [AF_INET]10.250.7.77:17788\nSat Jan 11 19:00:13 2020 10.250.7.77:17788 TCP connection established with [AF_INET]100.64.1.1:58728\nSat Jan 11 19:00:13 2020 10.250.7.77:17788 Connection reset, restarting [0]\nSat Jan 11 19:00:13 2020 100.64.1.1:58728 Connection reset, restarting [0]\nSat Jan 11 19:00:19 2020 TCP connection established with [AF_INET]10.250.7.77:33922\nSat Jan 11 19:00:19 2020 10.250.7.77:33922 TCP connection established with [AF_INET]100.64.1.1:63154\nSat Jan 11 19:00:19 2020 10.250.7.77:33922 Connection reset, restarting [0]\nSat Jan 11 19:00:19 2020 100.64.1.1:63154 Connection reset, restarting [0]\nSat Jan 11 19:00:23 2020 TCP connection established with [AF_INET]10.250.7.77:17798\nSat Jan 11 19:00:23 2020 10.250.7.77:17798 TCP connection established with [AF_INET]100.64.1.1:58738\nSat Jan 11 19:00:23 2020 10.250.7.77:17798 Connection reset, restarting [0]\nSat Jan 11 19:00:23 2020 100.64.1.1:58738 Connection reset, restarting [0]\nSat Jan 11 19:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_VER=2.4.6\nSat Jan 11 19:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_PLAT=linux\nSat Jan 11 19:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_PROTO=2\nSat Jan 11 19:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_LZ4=1\nSat Jan 11 19:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_LZ4v2=1\nSat Jan 11 19:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_LZO=1\nSat Jan 11 19:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_COMP_STUB=1\nSat Jan 11 19:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_COMP_STUBv2=1\nSat Jan 11 19:00:29 2020 vpn-seed/100.64.1.1:47320 peer info: IV_TCPNL=1\nSat Jan 11 19:00:29 2020 TCP connection established with [AF_INET]10.250.7.77:33926\nSat Jan 11 19:00:29 2020 10.250.7.77:33926 TCP connection established with [AF_INET]100.64.1.1:63158\nSat Jan 11 19:00:29 2020 10.250.7.77:33926 Connection reset, restarting [0]\nSat Jan 11 19:00:29 2020 100.64.1.1:63158 Connection reset, restarting [0]\nSat Jan 11 19:00:33 2020 TCP connection established with [AF_INET]10.250.7.77:17804\nSat Jan 11 19:00:33 2020 10.250.7.77:17804 TCP connection established with [AF_INET]100.64.1.1:58744\nSat Jan 11 19:00:33 2020 10.250.7.77:17804 Connection reset, restarting [0]\nSat Jan 11 19:00:33 2020 100.64.1.1:58744 Connection reset, restarting [0]\nSat Jan 11 19:00:39 2020 TCP connection established with [AF_INET]10.250.7.77:33934\nSat Jan 11 19:00:39 2020 10.250.7.77:33934 TCP connection established with [AF_INET]100.64.1.1:63166\nSat Jan 11 
19:00:39 2020 10.250.7.77:33934 Connection reset, restarting [0]\nSat Jan 11 19:00:39 2020 100.64.1.1:63166 Connection reset, restarting [0]\nSat Jan 11 19:00:43 2020 TCP connection established with [AF_INET]10.250.7.77:17812\nSat Jan 11 19:00:43 2020 10.250.7.77:17812 TCP connection established with [AF_INET]100.64.1.1:58752\nSat Jan 11 19:00:43 2020 10.250.7.77:17812 Connection reset, restarting [0]\nSat Jan 11 19:00:43 2020 100.64.1.1:58752 Connection reset, restarting [0]\nSat Jan 11 19:00:49 2020 TCP connection established with [AF_INET]10.250.7.77:33940\nSat Jan 11 19:00:49 2020 10.250.7.77:33940 TCP connection established with [AF_INET]100.64.1.1:63172\nSat Jan 11 19:00:49 2020 10.250.7.77:33940 Connection reset, restarting [0]\nSat Jan 11 19:00:49 2020 100.64.1.1:63172 Connection reset, restarting [0]\nSat Jan 11 19:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_VER=2.4.6\nSat Jan 11 19:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_PLAT=linux\nSat Jan 11 19:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_PROTO=2\nSat Jan 11 19:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZ4=1\nSat Jan 11 19:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZ4v2=1\nSat Jan 11 19:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZO=1\nSat Jan 11 19:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_COMP_STUB=1\nSat Jan 11 19:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_COMP_STUBv2=1\nSat Jan 11 19:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_TCPNL=1\nSat Jan 11 19:00:53 2020 TCP connection established with [AF_INET]10.250.7.77:17824\nSat Jan 11 19:00:53 2020 10.250.7.77:17824 TCP connection established with [AF_INET]100.64.1.1:58764\nSat Jan 11 19:00:53 2020 10.250.7.77:17824 Connection reset, restarting [0]\nSat Jan 11 19:00:53 2020 100.64.1.1:58764 Connection reset, restarting [0]\nSat Jan 11 19:00:59 2020 TCP connection established with [AF_INET]10.250.7.77:33954\nSat Jan 11 19:00:59 2020 10.250.7.77:33954 TCP connection established with [AF_INET]100.64.1.1:63186\nSat Jan 11 19:00:59 2020 10.250.7.77:33954 Connection reset, restarting [0]\nSat Jan 11 19:00:59 2020 100.64.1.1:63186 Connection reset, restarting [0]\nSat Jan 11 19:01:03 2020 TCP connection established with [AF_INET]10.250.7.77:17842\nSat Jan 11 19:01:03 2020 10.250.7.77:17842 TCP connection established with [AF_INET]100.64.1.1:58782\nSat Jan 11 19:01:03 2020 10.250.7.77:17842 Connection reset, restarting [0]\nSat Jan 11 19:01:03 2020 100.64.1.1:58782 Connection reset, restarting [0]\nSat Jan 11 19:01:09 2020 TCP connection established with [AF_INET]10.250.7.77:33964\nSat Jan 11 19:01:09 2020 10.250.7.77:33964 TCP connection established with [AF_INET]100.64.1.1:63196\nSat Jan 11 19:01:09 2020 10.250.7.77:33964 Connection reset, restarting [0]\nSat Jan 11 19:01:09 2020 100.64.1.1:63196 Connection reset, restarting [0]\nSat Jan 11 19:01:13 2020 TCP connection established with [AF_INET]10.250.7.77:17846\nSat Jan 11 19:01:13 2020 10.250.7.77:17846 TCP connection established with [AF_INET]100.64.1.1:58786\nSat Jan 11 19:01:13 2020 10.250.7.77:17846 Connection reset, restarting [0]\nSat Jan 11 19:01:13 2020 100.64.1.1:58786 Connection reset, restarting [0]\nSat Jan 11 19:01:19 2020 TCP connection established with [AF_INET]10.250.7.77:33980\nSat Jan 11 19:01:19 2020 10.250.7.77:33980 TCP connection established with [AF_INET]100.64.1.1:63212\nSat Jan 11 19:01:19 2020 10.250.7.77:33980 Connection reset, restarting [0]\nSat Jan 11 19:01:19 2020 100.64.1.1:63212 Connection reset, restarting [0]\nSat Jan 11 
19:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_VER=2.4.6\nSat Jan 11 19:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_PLAT=linux\nSat Jan 11 19:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_PROTO=2\nSat Jan 11 19:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZ4=1\nSat Jan 11 19:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZ4v2=1\nSat Jan 11 19:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZO=1\nSat Jan 11 19:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_COMP_STUB=1\nSat Jan 11 19:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_COMP_STUBv2=1\nSat Jan 11 19:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_TCPNL=1\nSat Jan 11 19:01:23 2020 TCP connection established with [AF_INET]10.250.7.77:17856\nSat Jan 11 19:01:23 2020 10.250.7.77:17856 TCP connection established with [AF_INET]100.64.1.1:58796\nSat Jan 11 19:01:23 2020 10.250.7.77:17856 Connection reset, restarting [0]\nSat Jan 11 19:01:23 2020 100.64.1.1:58796 Connection reset, restarting [0]\nSat Jan 11 19:01:29 2020 TCP connection established with [AF_INET]10.250.7.77:33992\nSat Jan 11 19:01:29 2020 10.250.7.77:33992 TCP connection established with [AF_INET]100.64.1.1:63224\nSat Jan 11 19:01:29 2020 10.250.7.77:33992 Connection reset, restarting [0]\nSat Jan 11 19:01:29 2020 100.64.1.1:63224 Connection reset, restarting [0]\nSat Jan 11 19:01:33 2020 TCP connection established with [AF_INET]10.250.7.77:17862\nSat Jan 11 19:01:33 2020 10.250.7.77:17862 TCP connection established with [AF_INET]100.64.1.1:58802\nSat Jan 11 19:01:33 2020 10.250.7.77:17862 Connection reset, restarting [0]\nSat Jan 11 19:01:33 2020 100.64.1.1:58802 Connection reset, restarting [0]\nSat Jan 11 19:01:35 2020 vpn-seed/100.64.1.1:47320 Connection reset, restarting [0]\nSat Jan 11 19:01:39 2020 TCP connection established with [AF_INET]10.250.7.77:34000\nSat Jan 11 19:01:39 2020 10.250.7.77:34000 TCP connection established with [AF_INET]100.64.1.1:63232\nSat Jan 11 19:01:39 2020 10.250.7.77:34000 Connection reset, restarting [0]\nSat Jan 11 19:01:39 2020 100.64.1.1:63232 Connection reset, restarting [0]\nSat Jan 11 19:01:43 2020 TCP connection established with [AF_INET]10.250.7.77:17870\nSat Jan 11 19:01:43 2020 10.250.7.77:17870 TCP connection established with [AF_INET]100.64.1.1:58810\nSat Jan 11 19:01:43 2020 10.250.7.77:17870 Connection reset, restarting [0]\nSat Jan 11 19:01:43 2020 100.64.1.1:58810 Connection reset, restarting [0]\nSat Jan 11 19:01:49 2020 TCP connection established with [AF_INET]10.250.7.77:17876\nSat Jan 11 19:01:49 2020 10.250.7.77:17876 Connection reset, restarting [0]\nSat Jan 11 19:01:49 2020 TCP connection established with [AF_INET]100.64.1.1:58820\nSat Jan 11 19:01:49 2020 100.64.1.1:58820 TCP connection established with [AF_INET]10.250.7.77:34006\nSat Jan 11 19:01:49 2020 10.250.7.77:34006 TCP connection established with [AF_INET]100.64.1.1:63238\nSat Jan 11 19:01:49 2020 10.250.7.77:34006 Connection reset, restarting [0]\nSat Jan 11 19:01:49 2020 100.64.1.1:63238 Connection reset, restarting [0]\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 peer info: IV_VER=2.4.6\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 peer info: IV_PLAT=linux\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 peer info: IV_PROTO=2\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 peer info: IV_NCP=2\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 peer info: IV_LZ4=1\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 peer info: IV_LZ4v2=1\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 peer info: IV_LZO=1\nSat Jan 11 19:01:50 2020 
100.64.1.1:58820 peer info: IV_COMP_STUB=1\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 peer info: IV_COMP_STUBv2=1\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 peer info: IV_TCPNL=1\nSat Jan 11 19:01:50 2020 100.64.1.1:58820 [vpn-seed] Peer Connection Initiated with [AF_INET]100.64.1.1:58820\nSat Jan 11 19:01:50 2020 vpn-seed/100.64.1.1:58820 MULTI_sva: pool returned IPv4=192.168.123.10, IPv6=(Not enabled)\nSat Jan 11 19:01:53 2020 TCP connection established with [AF_INET]10.250.7.77:17882\nSat Jan 11 19:01:53 2020 10.250.7.77:17882 TCP connection established with [AF_INET]100.64.1.1:58822\nSat Jan 11 19:01:53 2020 10.250.7.77:17882 Connection reset, restarting [0]\nSat Jan 11 19:01:53 2020 100.64.1.1:58822 Connection reset, restarting [0]\nSat Jan 11 19:01:59 2020 TCP connection established with [AF_INET]10.250.7.77:34020\nSat Jan 11 19:01:59 2020 10.250.7.77:34020 TCP connection established with [AF_INET]100.64.1.1:63252\nSat Jan 11 19:01:59 2020 10.250.7.77:34020 Connection reset, restarting [0]\nSat Jan 11 19:01:59 2020 100.64.1.1:63252 Connection reset, restarting [0]\nSat Jan 11 19:02:03 2020 TCP connection established with [AF_INET]10.250.7.77:17900\nSat Jan 11 19:02:03 2020 10.250.7.77:17900 TCP connection established with [AF_INET]100.64.1.1:58840\nSat Jan 11 19:02:03 2020 10.250.7.77:17900 Connection reset, restarting [0]\nSat Jan 11 19:02:03 2020 100.64.1.1:58840 Connection reset, restarting [0]\nSat Jan 11 19:02:09 2020 TCP connection established with [AF_INET]10.250.7.77:34030\nSat Jan 11 19:02:09 2020 10.250.7.77:34030 TCP connection established with [AF_INET]100.64.1.1:63262\nSat Jan 11 19:02:09 2020 10.250.7.77:34030 Connection reset, restarting [0]\nSat Jan 11 19:02:09 2020 100.64.1.1:63262 Connection reset, restarting [0]\nSat Jan 11 19:02:13 2020 TCP connection established with [AF_INET]10.250.7.77:17908\nSat Jan 11 19:02:13 2020 10.250.7.77:17908 TCP connection established with [AF_INET]100.64.1.1:58848\nSat Jan 11 19:02:13 2020 10.250.7.77:17908 Connection reset, restarting [0]\nSat Jan 11 19:02:13 2020 100.64.1.1:58848 Connection reset, restarting [0]\nSat Jan 11 19:02:19 2020 TCP connection established with [AF_INET]10.250.7.77:34042\nSat Jan 11 19:02:19 2020 10.250.7.77:34042 TCP connection established with [AF_INET]100.64.1.1:63274\nSat Jan 11 19:02:19 2020 10.250.7.77:34042 Connection reset, restarting [0]\nSat Jan 11 19:02:19 2020 100.64.1.1:63274 Connection reset, restarting [0]\nSat Jan 11 19:02:23 2020 TCP connection established with [AF_INET]10.250.7.77:17918\nSat Jan 11 19:02:23 2020 10.250.7.77:17918 TCP connection established with [AF_INET]100.64.1.1:58858\nSat Jan 11 19:02:23 2020 10.250.7.77:17918 Connection reset, restarting [0]\nSat Jan 11 19:02:23 2020 100.64.1.1:58858 Connection reset, restarting [0]\nSat Jan 11 19:02:29 2020 TCP connection established with [AF_INET]10.250.7.77:34050\nSat Jan 11 19:02:29 2020 10.250.7.77:34050 TCP connection established with [AF_INET]100.64.1.1:63282\nSat Jan 11 19:02:29 2020 10.250.7.77:34050 Connection reset, restarting [0]\nSat Jan 11 19:02:29 2020 100.64.1.1:63282 Connection reset, restarting [0]\nSat Jan 11 19:02:33 2020 TCP connection established with [AF_INET]10.250.7.77:17924\nSat Jan 11 19:02:33 2020 10.250.7.77:17924 TCP connection established with [AF_INET]100.64.1.1:58864\nSat Jan 11 19:02:33 2020 10.250.7.77:17924 Connection reset, restarting [0]\nSat Jan 11 19:02:33 2020 100.64.1.1:58864 Connection reset, restarting [0]\nSat Jan 11 19:02:39 2020 TCP connection established with 
[AF_INET]10.250.7.77:34058\nSat Jan 11 19:02:39 2020 10.250.7.77:34058 TCP connection established with [AF_INET]100.64.1.1:63290\nSat Jan 11 19:02:39 2020 10.250.7.77:34058 Connection reset, restarting [0]\nSat Jan 11 19:02:39 2020 100.64.1.1:63290 Connection reset, restarting [0]\nSat Jan 11 19:02:43 2020 TCP connection established with [AF_INET]10.250.7.77:17932\nSat Jan 11 19:02:43 2020 10.250.7.77:17932 TCP connection established with [AF_INET]100.64.1.1:58872\nSat Jan 11 19:02:43 2020 10.250.7.77:17932 Connection reset, restarting [0]\nSat Jan 11 19:02:43 2020 100.64.1.1:58872 Connection reset, restarting [0]\nSat Jan 11 19:02:49 2020 TCP connection established with [AF_INET]10.250.7.77:34066\nSat Jan 11 19:02:49 2020 10.250.7.77:34066 TCP connection established with [AF_INET]100.64.1.1:63298\nSat Jan 11 19:02:49 2020 10.250.7.77:34066 Connection reset, restarting [0]\nSat Jan 11 19:02:49 2020 100.64.1.1:63298 Connection reset, restarting [0]\nSat Jan 11 19:02:53 2020 TCP connection established with [AF_INET]10.250.7.77:17940\nSat Jan 11 19:02:53 2020 10.250.7.77:17940 TCP connection established with [AF_INET]100.64.1.1:58880\nSat Jan 11 19:02:53 2020 10.250.7.77:17940 Connection reset, restarting [0]\nSat Jan 11 19:02:53 2020 100.64.1.1:58880 Connection reset, restarting [0]\nSat Jan 11 19:02:59 2020 TCP connection established with [AF_INET]10.250.7.77:34078\nSat Jan 11 19:02:59 2020 10.250.7.77:34078 TCP connection established with [AF_INET]100.64.1.1:63310\nSat Jan 11 19:02:59 2020 10.250.7.77:34078 Connection reset, restarting [0]\nSat Jan 11 19:02:59 2020 100.64.1.1:63310 Connection reset, restarting [0]\nSat Jan 11 19:03:03 2020 TCP connection established with [AF_INET]10.250.7.77:17958\nSat Jan 11 19:03:03 2020 10.250.7.77:17958 Connection reset, restarting [0]\nSat Jan 11 19:03:03 2020 TCP connection established with [AF_INET]100.64.1.1:58898\nSat Jan 11 19:03:03 2020 100.64.1.1:58898 Connection reset, restarting [0]\nSat Jan 11 19:03:09 2020 TCP connection established with [AF_INET]10.250.7.77:34088\nSat Jan 11 19:03:09 2020 10.250.7.77:34088 TCP connection established with [AF_INET]100.64.1.1:63320\nSat Jan 11 19:03:09 2020 10.250.7.77:34088 Connection reset, restarting [0]\nSat Jan 11 19:03:09 2020 100.64.1.1:63320 Connection reset, restarting [0]\nSat Jan 11 19:03:13 2020 TCP connection established with [AF_INET]10.250.7.77:17962\nSat Jan 11 19:03:13 2020 10.250.7.77:17962 TCP connection established with [AF_INET]100.64.1.1:58902\nSat Jan 11 19:03:13 2020 10.250.7.77:17962 Connection reset, restarting [0]\nSat Jan 11 19:03:13 2020 100.64.1.1:58902 Connection reset, restarting [0]\nSat Jan 11 19:03:19 2020 TCP connection established with [AF_INET]10.250.7.77:34100\nSat Jan 11 19:03:19 2020 10.250.7.77:34100 TCP connection established with [AF_INET]100.64.1.1:63332\nSat Jan 11 19:03:19 2020 10.250.7.77:34100 Connection reset, restarting [0]\nSat Jan 11 19:03:19 2020 100.64.1.1:63332 Connection reset, restarting [0]\nSat Jan 11 19:03:23 2020 TCP connection established with [AF_INET]10.250.7.77:17976\nSat Jan 11 19:03:23 2020 10.250.7.77:17976 TCP connection established with [AF_INET]100.64.1.1:58916\nSat Jan 11 19:03:23 2020 10.250.7.77:17976 Connection reset, restarting [0]\nSat Jan 11 19:03:23 2020 100.64.1.1:58916 Connection reset, restarting [0]\nSat Jan 11 19:03:29 2020 TCP connection established with [AF_INET]10.250.7.77:34104\nSat Jan 11 19:03:29 2020 10.250.7.77:34104 TCP connection established with [AF_INET]100.64.1.1:63336\nSat Jan 11 19:03:29 2020 10.250.7.77:34104 
Connection reset, restarting [0]\nSat Jan 11 19:03:29 2020 100.64.1.1:63336 Connection reset, restarting [0]\nSat Jan 11 19:03:33 2020 TCP connection established with [AF_INET]10.250.7.77:17982\nSat Jan 11 19:03:33 2020 10.250.7.77:17982 TCP connection established with [AF_INET]100.64.1.1:58922\nSat Jan 11 19:03:33 2020 10.250.7.77:17982 Connection reset, restarting [0]\nSat Jan 11 19:03:33 2020 100.64.1.1:58922 Connection reset, restarting [0]\nSat Jan 11 19:03:39 2020 TCP connection established with [AF_INET]10.250.7.77:34112\nSat Jan 11 19:03:39 2020 10.250.7.77:34112 TCP connection established with [AF_INET]100.64.1.1:63344\nSat Jan 11 19:03:39 2020 10.250.7.77:34112 Connection reset, restarting [0]\nSat Jan 11 19:03:39 2020 100.64.1.1:63344 Connection reset, restarting [0]\nSat Jan 11 19:03:43 2020 TCP connection established with [AF_INET]10.250.7.77:17992\nSat Jan 11 19:03:43 2020 10.250.7.77:17992 TCP connection established with [AF_INET]100.64.1.1:58932\nSat Jan 11 19:03:43 2020 10.250.7.77:17992 Connection reset, restarting [0]\nSat Jan 11 19:03:43 2020 100.64.1.1:58932 Connection reset, restarting [0]\nSat Jan 11 19:03:49 2020 TCP connection established with [AF_INET]10.250.7.77:34124\nSat Jan 11 19:03:49 2020 10.250.7.77:34124 TCP connection established with [AF_INET]100.64.1.1:63356\nSat Jan 11 19:03:49 2020 10.250.7.77:34124 Connection reset, restarting [0]\nSat Jan 11 19:03:49 2020 100.64.1.1:63356 Connection reset, restarting [0]\nSat Jan 11 19:03:53 2020 TCP connection established with [AF_INET]10.250.7.77:17998\nSat Jan 11 19:03:53 2020 10.250.7.77:17998 TCP connection established with [AF_INET]100.64.1.1:58938\nSat Jan 11 19:03:53 2020 10.250.7.77:17998 Connection reset, restarting [0]\nSat Jan 11 19:03:53 2020 100.64.1.1:58938 Connection reset, restarting [0]\nSat Jan 11 19:03:59 2020 TCP connection established with [AF_INET]10.250.7.77:34136\nSat Jan 11 19:03:59 2020 10.250.7.77:34136 TCP connection established with [AF_INET]100.64.1.1:63368\nSat Jan 11 19:03:59 2020 10.250.7.77:34136 Connection reset, restarting [0]\nSat Jan 11 19:03:59 2020 100.64.1.1:63368 Connection reset, restarting [0]\nSat Jan 11 19:04:03 2020 TCP connection established with [AF_INET]10.250.7.77:18016\nSat Jan 11 19:04:03 2020 10.250.7.77:18016 TCP connection established with [AF_INET]100.64.1.1:58956\nSat Jan 11 19:04:03 2020 10.250.7.77:18016 Connection reset, restarting [0]\nSat Jan 11 19:04:03 2020 100.64.1.1:58956 Connection reset, restarting [0]\nSat Jan 11 19:04:09 2020 TCP connection established with [AF_INET]10.250.7.77:34146\nSat Jan 11 19:04:09 2020 10.250.7.77:34146 TCP connection established with [AF_INET]100.64.1.1:63378\nSat Jan 11 19:04:09 2020 10.250.7.77:34146 Connection reset, restarting [0]\nSat Jan 11 19:04:09 2020 100.64.1.1:63378 Connection reset, restarting [0]\nSat Jan 11 19:04:13 2020 TCP connection established with [AF_INET]10.250.7.77:18020\nSat Jan 11 19:04:13 2020 10.250.7.77:18020 TCP connection established with [AF_INET]100.64.1.1:58960\nSat Jan 11 19:04:13 2020 10.250.7.77:18020 Connection reset, restarting [0]\nSat Jan 11 19:04:13 2020 100.64.1.1:58960 Connection reset, restarting [0]\nSat Jan 11 19:04:19 2020 TCP connection established with [AF_INET]10.250.7.77:34158\nSat Jan 11 19:04:19 2020 10.250.7.77:34158 TCP connection established with [AF_INET]100.64.1.1:63390\nSat Jan 11 19:04:19 2020 10.250.7.77:34158 Connection reset, restarting [0]\nSat Jan 11 19:04:19 2020 100.64.1.1:63390 Connection reset, restarting [0]\nSat Jan 11 19:04:23 2020 TCP connection 
established with [AF_INET]10.250.7.77:18030\nSat Jan 11 19:04:23 2020 10.250.7.77:18030 TCP connection established with [AF_INET]100.64.1.1:58970\nSat Jan 11 19:04:23 2020 10.250.7.77:18030 Connection reset, restarting [0]\nSat Jan 11 19:04:23 2020 100.64.1.1:58970 Connection reset, restarting [0]\nSat Jan 11 19:04:29 2020 TCP connection established with [AF_INET]10.250.7.77:34162\nSat Jan 11 19:04:29 2020 10.250.7.77:34162 TCP connection established with [AF_INET]100.64.1.1:63394\nSat Jan 11 19:04:29 2020 10.250.7.77:34162 Connection reset, restarting [0]\nSat Jan 11 19:04:29 2020 100.64.1.1:63394 Connection reset, restarting [0]\nSat Jan 11 19:04:33 2020 TCP connection established with [AF_INET]10.250.7.77:18036\nSat Jan 11 19:04:33 2020 10.250.7.77:18036 TCP connection established with [AF_INET]100.64.1.1:58976\nSat Jan 11 19:04:33 2020 10.250.7.77:18036 Connection reset, restarting [0]\nSat Jan 11 19:04:33 2020 100.64.1.1:58976 Connection reset, restarting [0]\nSat Jan 11 19:04:39 2020 TCP connection established with [AF_INET]10.250.7.77:34170\nSat Jan 11 19:04:39 2020 10.250.7.77:34170 TCP connection established with [AF_INET]100.64.1.1:63402\nSat Jan 11 19:04:39 2020 10.250.7.77:34170 Connection reset, restarting [0]\nSat Jan 11 19:04:39 2020 100.64.1.1:63402 Connection reset, restarting [0]\nSat Jan 11 19:04:43 2020 TCP connection established with [AF_INET]10.250.7.77:18050\nSat Jan 11 19:04:43 2020 10.250.7.77:18050 TCP connection established with [AF_INET]100.64.1.1:58990\nSat Jan 11 19:04:43 2020 10.250.7.77:18050 Connection reset, restarting [0]\nSat Jan 11 19:04:43 2020 100.64.1.1:58990 Connection reset, restarting [0]\nSat Jan 11 19:04:49 2020 TCP connection established with [AF_INET]10.250.7.77:34178\nSat Jan 11 19:04:49 2020 10.250.7.77:34178 TCP connection established with [AF_INET]100.64.1.1:63410\nSat Jan 11 19:04:49 2020 10.250.7.77:34178 Connection reset, restarting [0]\nSat Jan 11 19:04:49 2020 100.64.1.1:63410 Connection reset, restarting [0]\nSat Jan 11 19:04:53 2020 TCP connection established with [AF_INET]10.250.7.77:18056\nSat Jan 11 19:04:53 2020 10.250.7.77:18056 TCP connection established with [AF_INET]100.64.1.1:58996\nSat Jan 11 19:04:53 2020 10.250.7.77:18056 Connection reset, restarting [0]\nSat Jan 11 19:04:53 2020 100.64.1.1:58996 Connection reset, restarting [0]\nSat Jan 11 19:04:59 2020 TCP connection established with [AF_INET]10.250.7.77:34194\nSat Jan 11 19:04:59 2020 10.250.7.77:34194 TCP connection established with [AF_INET]100.64.1.1:63426\nSat Jan 11 19:04:59 2020 10.250.7.77:34194 Connection reset, restarting [0]\nSat Jan 11 19:04:59 2020 100.64.1.1:63426 Connection reset, restarting [0]\nSat Jan 11 19:05:03 2020 TCP connection established with [AF_INET]10.250.7.77:18074\nSat Jan 11 19:05:03 2020 10.250.7.77:18074 TCP connection established with [AF_INET]100.64.1.1:59014\nSat Jan 11 19:05:03 2020 10.250.7.77:18074 Connection reset, restarting [0]\nSat Jan 11 19:05:03 2020 100.64.1.1:59014 Connection reset, restarting [0]\nSat Jan 11 19:05:09 2020 TCP connection established with [AF_INET]100.64.1.1:63436\nSat Jan 11 19:05:09 2020 100.64.1.1:63436 Connection reset, restarting [0]\nSat Jan 11 19:05:09 2020 TCP connection established with [AF_INET]10.250.7.77:34204\nSat Jan 11 19:05:09 2020 10.250.7.77:34204 Connection reset, restarting [0]\nSat Jan 11 19:05:13 2020 TCP connection established with [AF_INET]10.250.7.77:18078\nSat Jan 11 19:05:13 2020 10.250.7.77:18078 TCP connection established with [AF_INET]100.64.1.1:59018\nSat Jan 11 19:05:13 2020 
10.250.7.77:18078 Connection reset, restarting [0]\nSat Jan 11 19:05:13 2020 100.64.1.1:59018 Connection reset, restarting [0]\nSat Jan 11 19:05:19 2020 TCP connection established with [AF_INET]10.250.7.77:34216\nSat Jan 11 19:05:19 2020 10.250.7.77:34216 TCP connection established with [AF_INET]100.64.1.1:63448\nSat Jan 11 19:05:19 2020 10.250.7.77:34216 Connection reset, restarting [0]\nSat Jan 11 19:05:19 2020 100.64.1.1:63448 Connection reset, restarting [0]\nSat Jan 11 19:05:23 2020 TCP connection established with [AF_INET]10.250.7.77:18088\nSat Jan 11 19:05:23 2020 10.250.7.77:18088 TCP connection established with [AF_INET]100.64.1.1:59028\nSat Jan 11 19:05:23 2020 10.250.7.77:18088 Connection reset, restarting [0]\nSat Jan 11 19:05:23 2020 100.64.1.1:59028 Connection reset, restarting [0]\nSat Jan 11 19:05:29 2020 TCP connection established with [AF_INET]10.250.7.77:34220\nSat Jan 11 19:05:29 2020 10.250.7.77:34220 TCP connection established with [AF_INET]100.64.1.1:63452\nSat Jan 11 19:05:29 2020 10.250.7.77:34220 Connection reset, restarting [0]\nSat Jan 11 19:05:29 2020 100.64.1.1:63452 Connection reset, restarting [0]\nSat Jan 11 19:05:33 2020 TCP connection established with [AF_INET]10.250.7.77:18096\nSat Jan 11 19:05:33 2020 10.250.7.77:18096 TCP connection established with [AF_INET]100.64.1.1:59036\nSat Jan 11 19:05:33 2020 10.250.7.77:18096 Connection reset, restarting [0]\nSat Jan 11 19:05:33 2020 100.64.1.1:59036 Connection reset, restarting [0]\nSat Jan 11 19:05:39 2020 TCP connection established with [AF_INET]10.250.7.77:34230\nSat Jan 11 19:05:39 2020 10.250.7.77:34230 TCP connection established with [AF_INET]100.64.1.1:63462\nSat Jan 11 19:05:39 2020 10.250.7.77:34230 Connection reset, restarting [0]\nSat Jan 11 19:05:39 2020 100.64.1.1:63462 Connection reset, restarting [0]\nSat Jan 11 19:05:43 2020 TCP connection established with [AF_INET]10.250.7.77:18138\nSat Jan 11 19:05:43 2020 10.250.7.77:18138 TCP connection established with [AF_INET]100.64.1.1:59078\nSat Jan 11 19:05:43 2020 10.250.7.77:18138 Connection reset, restarting [0]\nSat Jan 11 19:05:43 2020 100.64.1.1:59078 Connection reset, restarting [0]\nSat Jan 11 19:05:49 2020 TCP connection established with [AF_INET]10.250.7.77:34236\nSat Jan 11 19:05:49 2020 10.250.7.77:34236 TCP connection established with [AF_INET]100.64.1.1:63468\nSat Jan 11 19:05:49 2020 10.250.7.77:34236 Connection reset, restarting [0]\nSat Jan 11 19:05:49 2020 100.64.1.1:63468 Connection reset, restarting [0]\nSat Jan 11 19:05:53 2020 TCP connection established with [AF_INET]10.250.7.77:18148\nSat Jan 11 19:05:53 2020 10.250.7.77:18148 TCP connection established with [AF_INET]100.64.1.1:59088\nSat Jan 11 19:05:53 2020 10.250.7.77:18148 Connection reset, restarting [0]\nSat Jan 11 19:05:53 2020 100.64.1.1:59088 Connection reset, restarting [0]\nSat Jan 11 19:05:59 2020 TCP connection established with [AF_INET]10.250.7.77:34248\nSat Jan 11 19:05:59 2020 10.250.7.77:34248 TCP connection established with [AF_INET]100.64.1.1:63480\nSat Jan 11 19:05:59 2020 10.250.7.77:34248 Connection reset, restarting [0]\nSat Jan 11 19:05:59 2020 100.64.1.1:63480 Connection reset, restarting [0]\nSat Jan 11 19:06:03 2020 TCP connection established with [AF_INET]10.250.7.77:18166\nSat Jan 11 19:06:03 2020 10.250.7.77:18166 TCP connection established with [AF_INET]100.64.1.1:59106\nSat Jan 11 19:06:03 2020 10.250.7.77:18166 Connection reset, restarting [0]\nSat Jan 11 19:06:03 2020 100.64.1.1:59106 Connection reset, restarting [0]\nSat Jan 11 19:06:09 2020 
TCP connection established with [AF_INET]10.250.7.77:34258\nSat Jan 11 19:06:09 2020 10.250.7.77:34258 TCP connection established with [AF_INET]100.64.1.1:63490\nSat Jan 11 19:06:09 2020 10.250.7.77:34258 Connection reset, restarting [0]\nSat Jan 11 19:06:09 2020 100.64.1.1:63490 Connection reset, restarting [0]\nSat Jan 11 19:06:13 2020 TCP connection established with [AF_INET]10.250.7.77:18172\nSat Jan 11 19:06:13 2020 10.250.7.77:18172 TCP connection established with [AF_INET]100.64.1.1:59112\nSat Jan 11 19:06:13 2020 10.250.7.77:18172 Connection reset, restarting [0]\nSat Jan 11 19:06:13 2020 100.64.1.1:59112 Connection reset, restarting [0]\nSat Jan 11 19:06:19 2020 TCP connection established with [AF_INET]10.250.7.77:34274\nSat Jan 11 19:06:19 2020 10.250.7.77:34274 TCP connection established with [AF_INET]100.64.1.1:63506\nSat Jan 11 19:06:19 2020 10.250.7.77:34274 Connection reset, restarting [0]\nSat Jan 11 19:06:19 2020 100.64.1.1:63506 Connection reset, restarting [0]\nSat Jan 11 19:06:23 2020 TCP connection established with [AF_INET]10.250.7.77:18182\nSat Jan 11 19:06:23 2020 10.250.7.77:18182 TCP connection established with [AF_INET]100.64.1.1:59122\nSat Jan 11 19:06:23 2020 10.250.7.77:18182 Connection reset, restarting [0]\nSat Jan 11 19:06:23 2020 100.64.1.1:59122 Connection reset, restarting [0]\nSat Jan 11 19:06:29 2020 TCP connection established with [AF_INET]10.250.7.77:34278\nSat Jan 11 19:06:29 2020 10.250.7.77:34278 TCP connection established with [AF_INET]100.64.1.1:63510\nSat Jan 11 19:06:29 2020 10.250.7.77:34278 Connection reset, restarting [0]\nSat Jan 11 19:06:29 2020 100.64.1.1:63510 Connection reset, restarting [0]\nSat Jan 11 19:06:33 2020 TCP connection established with [AF_INET]10.250.7.77:18190\nSat Jan 11 19:06:33 2020 10.250.7.77:18190 TCP connection established with [AF_INET]100.64.1.1:59130\nSat Jan 11 19:06:33 2020 10.250.7.77:18190 Connection reset, restarting [0]\nSat Jan 11 19:06:33 2020 100.64.1.1:59130 Connection reset, restarting [0]\nSat Jan 11 19:06:39 2020 TCP connection established with [AF_INET]10.250.7.77:34288\nSat Jan 11 19:06:39 2020 10.250.7.77:34288 TCP connection established with [AF_INET]100.64.1.1:63520\nSat Jan 11 19:06:39 2020 10.250.7.77:34288 Connection reset, restarting [0]\nSat Jan 11 19:06:39 2020 100.64.1.1:63520 Connection reset, restarting [0]\nSat Jan 11 19:06:43 2020 TCP connection established with [AF_INET]10.250.7.77:18198\nSat Jan 11 19:06:43 2020 10.250.7.77:18198 TCP connection established with [AF_INET]100.64.1.1:59138\nSat Jan 11 19:06:43 2020 10.250.7.77:18198 Connection reset, restarting [0]\nSat Jan 11 19:06:43 2020 100.64.1.1:59138 Connection reset, restarting [0]\nSat Jan 11 19:06:49 2020 TCP connection established with [AF_INET]10.250.7.77:34294\nSat Jan 11 19:06:49 2020 10.250.7.77:34294 TCP connection established with [AF_INET]100.64.1.1:63526\nSat Jan 11 19:06:49 2020 10.250.7.77:34294 Connection reset, restarting [0]\nSat Jan 11 19:06:49 2020 100.64.1.1:63526 Connection reset, restarting [0]\nSat Jan 11 19:06:53 2020 TCP connection established with [AF_INET]100.64.1.1:59144\nSat Jan 11 19:06:53 2020 100.64.1.1:59144 Connection reset, restarting [0]\nSat Jan 11 19:06:53 2020 TCP connection established with [AF_INET]10.250.7.77:18204\nSat Jan 11 19:06:53 2020 10.250.7.77:18204 Connection reset, restarting [0]\nSat Jan 11 19:06:59 2020 TCP connection established with [AF_INET]10.250.7.77:34308\nSat Jan 11 19:06:59 2020 10.250.7.77:34308 TCP connection established with [AF_INET]100.64.1.1:63540\nSat Jan 11 
19:06:59 2020 10.250.7.77:34308 Connection reset, restarting [0]\nSat Jan 11 19:06:59 2020 100.64.1.1:63540 Connection reset, restarting [0]\nSat Jan 11 19:07:03 2020 TCP connection established with [AF_INET]10.250.7.77:18222\nSat Jan 11 19:07:03 2020 10.250.7.77:18222 TCP connection established with [AF_INET]100.64.1.1:59162\nSat Jan 11 19:07:03 2020 10.250.7.77:18222 Connection reset, restarting [0]\nSat Jan 11 19:07:03 2020 100.64.1.1:59162 Connection reset, restarting [0]\nSat Jan 11 19:07:09 2020 TCP connection established with [AF_INET]10.250.7.77:34318\nSat Jan 11 19:07:09 2020 10.250.7.77:34318 TCP connection established with [AF_INET]100.64.1.1:63550\nSat Jan 11 19:07:09 2020 10.250.7.77:34318 Connection reset, restarting [0]\nSat Jan 11 19:07:09 2020 100.64.1.1:63550 Connection reset, restarting [0]\nSat Jan 11 19:07:13 2020 TCP connection established with [AF_INET]10.250.7.77:18230\nSat Jan 11 19:07:13 2020 10.250.7.77:18230 TCP connection established with [AF_INET]100.64.1.1:59170\nSat Jan 11 19:07:13 2020 10.250.7.77:18230 Connection reset, restarting [0]\nSat Jan 11 19:07:13 2020 100.64.1.1:59170 Connection reset, restarting [0]\nSat Jan 11 19:07:19 2020 TCP connection established with [AF_INET]10.250.7.77:34330\nSat Jan 11 19:07:19 2020 10.250.7.77:34330 TCP connection established with [AF_INET]100.64.1.1:63562\nSat Jan 11 19:07:19 2020 10.250.7.77:34330 Connection reset, restarting [0]\nSat Jan 11 19:07:19 2020 100.64.1.1:63562 Connection reset, restarting [0]\nSat Jan 11 19:07:23 2020 TCP connection established with [AF_INET]10.250.7.77:18240\nSat Jan 11 19:07:23 2020 10.250.7.77:18240 TCP connection established with [AF_INET]100.64.1.1:59180\nSat Jan 11 19:07:23 2020 10.250.7.77:18240 Connection reset, restarting [0]\nSat Jan 11 19:07:23 2020 100.64.1.1:59180 Connection reset, restarting [0]\nSat Jan 11 19:07:29 2020 TCP connection established with [AF_INET]10.250.7.77:34340\nSat Jan 11 19:07:29 2020 10.250.7.77:34340 TCP connection established with [AF_INET]100.64.1.1:63572\nSat Jan 11 19:07:29 2020 10.250.7.77:34340 Connection reset, restarting [0]\nSat Jan 11 19:07:29 2020 100.64.1.1:63572 Connection reset, restarting [0]\nSat Jan 11 19:07:33 2020 TCP connection established with [AF_INET]10.250.7.77:18248\nSat Jan 11 19:07:33 2020 10.250.7.77:18248 TCP connection established with [AF_INET]100.64.1.1:59188\nSat Jan 11 19:07:33 2020 10.250.7.77:18248 Connection reset, restarting [0]\nSat Jan 11 19:07:33 2020 100.64.1.1:59188 Connection reset, restarting [0]\nSat Jan 11 19:07:39 2020 TCP connection established with [AF_INET]10.250.7.77:34348\nSat Jan 11 19:07:39 2020 10.250.7.77:34348 TCP connection established with [AF_INET]100.64.1.1:63580\nSat Jan 11 19:07:39 2020 10.250.7.77:34348 Connection reset, restarting [0]\nSat Jan 11 19:07:39 2020 100.64.1.1:63580 Connection reset, restarting [0]\nSat Jan 11 19:07:43 2020 TCP connection established with [AF_INET]10.250.7.77:18256\nSat Jan 11 19:07:43 2020 10.250.7.77:18256 TCP connection established with [AF_INET]100.64.1.1:59196\nSat Jan 11 19:07:43 2020 10.250.7.77:18256 Connection reset, restarting [0]\nSat Jan 11 19:07:43 2020 100.64.1.1:59196 Connection reset, restarting [0]\nSat Jan 11 19:07:49 2020 TCP connection established with [AF_INET]10.250.7.77:34354\nSat Jan 11 19:07:49 2020 10.250.7.77:34354 TCP connection established with [AF_INET]100.64.1.1:63586\nSat Jan 11 19:07:49 2020 10.250.7.77:34354 Connection reset, restarting [0]\nSat Jan 11 19:07:49 2020 100.64.1.1:63586 Connection reset, restarting [0]\nSat Jan 11 
19:07:53 2020 TCP connection established with [AF_INET]10.250.7.77:18262\nSat Jan 11 19:07:53 2020 10.250.7.77:18262 TCP connection established with [AF_INET]100.64.1.1:59202\nSat Jan 11 19:07:53 2020 10.250.7.77:18262 Connection reset, restarting [0]\nSat Jan 11 19:07:53 2020 100.64.1.1:59202 Connection reset, restarting [0]\nSat Jan 11 19:07:59 2020 TCP connection established with [AF_INET]10.250.7.77:34366\nSat Jan 11 19:07:59 2020 10.250.7.77:34366 TCP connection established with [AF_INET]100.64.1.1:63598\nSat Jan 11 19:07:59 2020 10.250.7.77:34366 Connection reset, restarting [0]\nSat Jan 11 19:07:59 2020 100.64.1.1:63598 Connection reset, restarting [0]\nSat Jan 11 19:08:03 2020 TCP connection established with [AF_INET]10.250.7.77:18280\nSat Jan 11 19:08:03 2020 10.250.7.77:18280 TCP connection established with [AF_INET]100.64.1.1:59220\nSat Jan 11 19:08:03 2020 10.250.7.77:18280 Connection reset, restarting [0]\nSat Jan 11 19:08:03 2020 100.64.1.1:59220 Connection reset, restarting [0]\nSat Jan 11 19:08:09 2020 TCP connection established with [AF_INET]10.250.7.77:34376\nSat Jan 11 19:08:09 2020 10.250.7.77:34376 TCP connection established with [AF_INET]100.64.1.1:63608\nSat Jan 11 19:08:09 2020 10.250.7.77:34376 Connection reset, restarting [0]\nSat Jan 11 19:08:09 2020 100.64.1.1:63608 Connection reset, restarting [0]\nSat Jan 11 19:08:13 2020 TCP connection established with [AF_INET]10.250.7.77:18284\nSat Jan 11 19:08:13 2020 10.250.7.77:18284 TCP connection established with [AF_INET]100.64.1.1:59224\nSat Jan 11 19:08:13 2020 10.250.7.77:18284 Connection reset, restarting [0]\nSat Jan 11 19:08:13 2020 100.64.1.1:59224 Connection reset, restarting [0]\nSat Jan 11 19:08:19 2020 TCP connection established with [AF_INET]10.250.7.77:34388\nSat Jan 11 19:08:19 2020 10.250.7.77:34388 TCP connection established with [AF_INET]100.64.1.1:63620\nSat Jan 11 19:08:19 2020 10.250.7.77:34388 Connection reset, restarting [0]\nSat Jan 11 19:08:19 2020 100.64.1.1:63620 Connection reset, restarting [0]\nSat Jan 11 19:08:23 2020 TCP connection established with [AF_INET]10.250.7.77:18300\nSat Jan 11 19:08:23 2020 10.250.7.77:18300 TCP connection established with [AF_INET]100.64.1.1:59240\nSat Jan 11 19:08:23 2020 10.250.7.77:18300 Connection reset, restarting [0]\nSat Jan 11 19:08:23 2020 100.64.1.1:59240 Connection reset, restarting [0]\nSat Jan 11 19:08:29 2020 TCP connection established with [AF_INET]10.250.7.77:34394\nSat Jan 11 19:08:29 2020 10.250.7.77:34394 TCP connection established with [AF_INET]100.64.1.1:63626\nSat Jan 11 19:08:29 2020 10.250.7.77:34394 Connection reset, restarting [0]\nSat Jan 11 19:08:29 2020 100.64.1.1:63626 Connection reset, restarting [0]\nSat Jan 11 19:08:33 2020 TCP connection established with [AF_INET]10.250.7.77:18306\nSat Jan 11 19:08:33 2020 10.250.7.77:18306 TCP connection established with [AF_INET]100.64.1.1:59246\nSat Jan 11 19:08:33 2020 10.250.7.77:18306 Connection reset, restarting [0]\nSat Jan 11 19:08:33 2020 100.64.1.1:59246 Connection reset, restarting [0]\nSat Jan 11 19:08:39 2020 TCP connection established with [AF_INET]10.250.7.77:34436\nSat Jan 11 19:08:39 2020 10.250.7.77:34436 TCP connection established with [AF_INET]100.64.1.1:63668\nSat Jan 11 19:08:39 2020 10.250.7.77:34436 Connection reset, restarting [0]\nSat Jan 11 19:08:39 2020 100.64.1.1:63668 Connection reset, restarting [0]\nSat Jan 11 19:08:43 2020 TCP connection established with [AF_INET]10.250.7.77:18314\nSat Jan 11 19:08:43 2020 10.250.7.77:18314 TCP connection established with 
[AF_INET]100.64.1.1:59254\nSat Jan 11 19:08:43 2020 10.250.7.77:18314 Connection reset, restarting [0]\nSat Jan 11 19:08:43 2020 100.64.1.1:59254 Connection reset, restarting [0]\nSat Jan 11 19:08:49 2020 TCP connection established with [AF_INET]10.250.7.77:34446\nSat Jan 11 19:08:49 2020 10.250.7.77:34446 TCP connection established with [AF_INET]100.64.1.1:63678\nSat Jan 11 19:08:49 2020 10.250.7.77:34446 Connection reset, restarting [0]\nSat Jan 11 19:08:49 2020 100.64.1.1:63678 Connection reset, restarting [0]\nSat Jan 11 19:08:53 2020 TCP connection established with [AF_INET]10.250.7.77:18320\nSat Jan 11 19:08:53 2020 10.250.7.77:18320 TCP connection established with [AF_INET]100.64.1.1:59260\nSat Jan 11 19:08:53 2020 10.250.7.77:18320 Connection reset, restarting [0]\nSat Jan 11 19:08:53 2020 100.64.1.1:59260 Connection reset, restarting [0]\nSat Jan 11 19:08:59 2020 TCP connection established with [AF_INET]10.250.7.77:34458\nSat Jan 11 19:08:59 2020 10.250.7.77:34458 TCP connection established with [AF_INET]100.64.1.1:63690\nSat Jan 11 19:08:59 2020 10.250.7.77:34458 Connection reset, restarting [0]\nSat Jan 11 19:08:59 2020 100.64.1.1:63690 Connection reset, restarting [0]\nSat Jan 11 19:09:03 2020 TCP connection established with [AF_INET]10.250.7.77:18338\nSat Jan 11 19:09:03 2020 10.250.7.77:18338 TCP connection established with [AF_INET]100.64.1.1:59278\nSat Jan 11 19:09:03 2020 10.250.7.77:18338 Connection reset, restarting [0]\nSat Jan 11 19:09:03 2020 100.64.1.1:59278 Connection reset, restarting [0]\nSat Jan 11 19:09:09 2020 TCP connection established with [AF_INET]10.250.7.77:34470\nSat Jan 11 19:09:09 2020 10.250.7.77:34470 TCP connection established with [AF_INET]100.64.1.1:63702\nSat Jan 11 19:09:09 2020 10.250.7.77:34470 Connection reset, restarting [0]\nSat Jan 11 19:09:09 2020 100.64.1.1:63702 Connection reset, restarting [0]\nSat Jan 11 19:09:13 2020 TCP connection established with [AF_INET]10.250.7.77:18342\nSat Jan 11 19:09:13 2020 10.250.7.77:18342 TCP connection established with [AF_INET]100.64.1.1:59282\nSat Jan 11 19:09:13 2020 10.250.7.77:18342 Connection reset, restarting [0]\nSat Jan 11 19:09:13 2020 100.64.1.1:59282 Connection reset, restarting [0]\nSat Jan 11 19:09:19 2020 TCP connection established with [AF_INET]10.250.7.77:34482\nSat Jan 11 19:09:19 2020 10.250.7.77:34482 TCP connection established with [AF_INET]100.64.1.1:63714\nSat Jan 11 19:09:19 2020 10.250.7.77:34482 Connection reset, restarting [0]\nSat Jan 11 19:09:19 2020 100.64.1.1:63714 Connection reset, restarting [0]\nSat Jan 11 19:09:23 2020 TCP connection established with [AF_INET]10.250.7.77:18354\nSat Jan 11 19:09:23 2020 10.250.7.77:18354 TCP connection established with [AF_INET]100.64.1.1:59294\nSat Jan 11 19:09:23 2020 10.250.7.77:18354 Connection reset, restarting [0]\nSat Jan 11 19:09:23 2020 100.64.1.1:59294 Connection reset, restarting [0]\nSat Jan 11 19:09:29 2020 TCP connection established with [AF_INET]10.250.7.77:34488\nSat Jan 11 19:09:29 2020 10.250.7.77:34488 TCP connection established with [AF_INET]100.64.1.1:63720\nSat Jan 11 19:09:29 2020 10.250.7.77:34488 Connection reset, restarting [0]\nSat Jan 11 19:09:29 2020 100.64.1.1:63720 Connection reset, restarting [0]\nSat Jan 11 19:09:33 2020 TCP connection established with [AF_INET]10.250.7.77:18360\nSat Jan 11 19:09:33 2020 10.250.7.77:18360 TCP connection established with [AF_INET]100.64.1.1:59300\nSat Jan 11 19:09:33 2020 10.250.7.77:18360 Connection reset, restarting [0]\nSat Jan 11 19:09:33 2020 100.64.1.1:59300 
Connection reset, restarting [0]\nSat Jan 11 19:09:39 2020 TCP connection established with [AF_INET]10.250.7.77:34496\nSat Jan 11 19:09:39 2020 10.250.7.77:34496 TCP connection established with [AF_INET]100.64.1.1:63728\nSat Jan 11 19:09:39 2020 10.250.7.77:34496 Connection reset, restarting [0]\nSat Jan 11 19:09:39 2020 100.64.1.1:63728 Connection reset, restarting [0]\nSat Jan 11 19:09:43 2020 TCP connection established with [AF_INET]10.250.7.77:18372\nSat Jan 11 19:09:43 2020 10.250.7.77:18372 TCP connection established with [AF_INET]100.64.1.1:59312\nSat Jan 11 19:09:43 2020 10.250.7.77:18372 Connection reset, restarting [0]\nSat Jan 11 19:09:43 2020 100.64.1.1:59312 Connection reset, restarting [0]\nSat Jan 11 19:09:49 2020 TCP connection established with [AF_INET]10.250.7.77:34502\nSat Jan 11 19:09:49 2020 10.250.7.77:34502 TCP connection established with [AF_INET]100.64.1.1:63734\nSat Jan 11 19:09:49 2020 10.250.7.77:34502 Connection reset, restarting [0]\nSat Jan 11 19:09:49 2020 100.64.1.1:63734 Connection reset, restarting [0]\nSat Jan 11 19:09:53 2020 TCP connection established with [AF_INET]10.250.7.77:18378\nSat Jan 11 19:09:53 2020 10.250.7.77:18378 TCP connection established with [AF_INET]100.64.1.1:59318\nSat Jan 11 19:09:53 2020 10.250.7.77:18378 Connection reset, restarting [0]\nSat Jan 11 19:09:53 2020 100.64.1.1:59318 Connection reset, restarting [0]\nSat Jan 11 19:09:59 2020 TCP connection established with [AF_INET]10.250.7.77:34518\nSat Jan 11 19:09:59 2020 10.250.7.77:34518 TCP connection established with [AF_INET]100.64.1.1:63750\nSat Jan 11 19:09:59 2020 10.250.7.77:34518 Connection reset, restarting [0]\nSat Jan 11 19:09:59 2020 100.64.1.1:63750 Connection reset, restarting [0]\nSat Jan 11 19:10:03 2020 TCP connection established with [AF_INET]10.250.7.77:18396\nSat Jan 11 19:10:03 2020 10.250.7.77:18396 TCP connection established with [AF_INET]100.64.1.1:59336\nSat Jan 11 19:10:03 2020 10.250.7.77:18396 Connection reset, restarting [0]\nSat Jan 11 19:10:03 2020 100.64.1.1:59336 Connection reset, restarting [0]\nSat Jan 11 19:10:09 2020 TCP connection established with [AF_INET]10.250.7.77:34528\nSat Jan 11 19:10:09 2020 10.250.7.77:34528 TCP connection established with [AF_INET]100.64.1.1:63760\nSat Jan 11 19:10:09 2020 10.250.7.77:34528 Connection reset, restarting [0]\nSat Jan 11 19:10:09 2020 100.64.1.1:63760 Connection reset, restarting [0]\nSat Jan 11 19:10:13 2020 TCP connection established with [AF_INET]10.250.7.77:18402\nSat Jan 11 19:10:13 2020 10.250.7.77:18402 TCP connection established with [AF_INET]100.64.1.1:59342\nSat Jan 11 19:10:13 2020 10.250.7.77:18402 Connection reset, restarting [0]\nSat Jan 11 19:10:13 2020 100.64.1.1:59342 Connection reset, restarting [0]\nSat Jan 11 19:10:19 2020 TCP connection established with [AF_INET]10.250.7.77:34542\nSat Jan 11 19:10:19 2020 10.250.7.77:34542 TCP connection established with [AF_INET]100.64.1.1:63774\nSat Jan 11 19:10:19 2020 10.250.7.77:34542 Connection reset, restarting [0]\nSat Jan 11 19:10:19 2020 100.64.1.1:63774 Connection reset, restarting [0]\nSat Jan 11 19:10:23 2020 TCP connection established with [AF_INET]10.250.7.77:18412\nSat Jan 11 19:10:23 2020 10.250.7.77:18412 TCP connection established with [AF_INET]100.64.1.1:59352\nSat Jan 11 19:10:23 2020 10.250.7.77:18412 Connection reset, restarting [0]\nSat Jan 11 19:10:23 2020 100.64.1.1:59352 Connection reset, restarting [0]\nSat Jan 11 19:10:29 2020 TCP connection established with [AF_INET]10.250.7.77:34546\nSat Jan 11 19:10:29 2020 
10.250.7.77:34546 TCP connection established with [AF_INET]100.64.1.1:63778\nSat Jan 11 19:10:29 2020 10.250.7.77:34546 Connection reset, restarting [0]\nSat Jan 11 19:10:29 2020 100.64.1.1:63778 Connection reset, restarting [0]\nSat Jan 11 19:10:33 2020 TCP connection established with [AF_INET]10.250.7.77:18418\nSat Jan 11 19:10:33 2020 10.250.7.77:18418 TCP connection established with [AF_INET]100.64.1.1:59358\nSat Jan 11 19:10:33 2020 10.250.7.77:18418 Connection reset, restarting [0]\nSat Jan 11 19:10:33 2020 100.64.1.1:59358 Connection reset, restarting [0]\nSat Jan 11 19:10:39 2020 TCP connection established with [AF_INET]10.250.7.77:34554\nSat Jan 11 19:10:39 2020 10.250.7.77:34554 TCP connection established with [AF_INET]100.64.1.1:63786\nSat Jan 11 19:10:39 2020 10.250.7.77:34554 Connection reset, restarting [0]\nSat Jan 11 19:10:39 2020 100.64.1.1:63786 Connection reset, restarting [0]\nSat Jan 11 19:10:43 2020 TCP connection established with [AF_INET]10.250.7.77:18426\nSat Jan 11 19:10:43 2020 10.250.7.77:18426 TCP connection established with [AF_INET]100.64.1.1:59366\nSat Jan 11 19:10:43 2020 10.250.7.77:18426 Connection reset, restarting [0]\nSat Jan 11 19:10:43 2020 100.64.1.1:59366 Connection reset, restarting [0]\nSat Jan 11 19:10:49 2020 TCP connection established with [AF_INET]10.250.7.77:34560\nSat Jan 11 19:10:49 2020 10.250.7.77:34560 TCP connection established with [AF_INET]100.64.1.1:63792\nSat Jan 11 19:10:49 2020 10.250.7.77:34560 Connection reset, restarting [0]\nSat Jan 11 19:10:49 2020 100.64.1.1:63792 Connection reset, restarting [0]\nSat Jan 11 19:10:53 2020 TCP connection established with [AF_INET]10.250.7.77:18436\nSat Jan 11 19:10:53 2020 10.250.7.77:18436 TCP connection established with [AF_INET]100.64.1.1:59376\nSat Jan 11 19:10:53 2020 10.250.7.77:18436 Connection reset, restarting [0]\nSat Jan 11 19:10:53 2020 100.64.1.1:59376 Connection reset, restarting [0]\nSat Jan 11 19:10:59 2020 TCP connection established with [AF_INET]10.250.7.77:34572\nSat Jan 11 19:10:59 2020 10.250.7.77:34572 TCP connection established with [AF_INET]100.64.1.1:63804\nSat Jan 11 19:10:59 2020 10.250.7.77:34572 Connection reset, restarting [0]\nSat Jan 11 19:10:59 2020 100.64.1.1:63804 Connection reset, restarting [0]\nSat Jan 11 19:11:03 2020 TCP connection established with [AF_INET]10.250.7.77:18454\nSat Jan 11 19:11:03 2020 10.250.7.77:18454 TCP connection established with [AF_INET]100.64.1.1:59394\nSat Jan 11 19:11:03 2020 10.250.7.77:18454 Connection reset, restarting [0]\nSat Jan 11 19:11:03 2020 100.64.1.1:59394 Connection reset, restarting [0]\nSat Jan 11 19:11:09 2020 TCP connection established with [AF_INET]10.250.7.77:34582\nSat Jan 11 19:11:09 2020 10.250.7.77:34582 TCP connection established with [AF_INET]100.64.1.1:63814\nSat Jan 11 19:11:09 2020 10.250.7.77:34582 Connection reset, restarting [0]\nSat Jan 11 19:11:09 2020 100.64.1.1:63814 Connection reset, restarting [0]\nSat Jan 11 19:11:13 2020 TCP connection established with [AF_INET]10.250.7.77:18460\nSat Jan 11 19:11:13 2020 10.250.7.77:18460 TCP connection established with [AF_INET]100.64.1.1:59400\nSat Jan 11 19:11:13 2020 10.250.7.77:18460 Connection reset, restarting [0]\nSat Jan 11 19:11:13 2020 100.64.1.1:59400 Connection reset, restarting [0]\nSat Jan 11 19:11:19 2020 TCP connection established with [AF_INET]10.250.7.77:34600\nSat Jan 11 19:11:19 2020 10.250.7.77:34600 TCP connection established with [AF_INET]100.64.1.1:63832\nSat Jan 11 19:11:19 2020 10.250.7.77:34600 Connection reset, restarting 
[0]\nSat Jan 11 19:11:19 2020 100.64.1.1:63832 Connection reset, restarting [0]\nSat Jan 11 19:11:23 2020 TCP connection established with [AF_INET]10.250.7.77:18470\nSat Jan 11 19:11:23 2020 10.250.7.77:18470 TCP connection established with [AF_INET]100.64.1.1:59410\nSat Jan 11 19:11:23 2020 10.250.7.77:18470 Connection reset, restarting [0]\nSat Jan 11 19:11:23 2020 100.64.1.1:59410 Connection reset, restarting [0]\nSat Jan 11 19:11:29 2020 TCP connection established with [AF_INET]10.250.7.77:34604\nSat Jan 11 19:11:29 2020 10.250.7.77:34604 TCP connection established with [AF_INET]100.64.1.1:63836\nSat Jan 11 19:11:29 2020 10.250.7.77:34604 Connection reset, restarting [0]\nSat Jan 11 19:11:29 2020 100.64.1.1:63836 Connection reset, restarting [0]\nSat Jan 11 19:11:33 2020 TCP connection established with [AF_INET]10.250.7.77:18476\nSat Jan 11 19:11:33 2020 10.250.7.77:18476 TCP connection established with [AF_INET]100.64.1.1:59416\nSat Jan 11 19:11:33 2020 10.250.7.77:18476 Connection reset, restarting [0]\nSat Jan 11 19:11:33 2020 100.64.1.1:59416 Connection reset, restarting [0]\nSat Jan 11 19:11:39 2020 TCP connection established with [AF_INET]10.250.7.77:34612\nSat Jan 11 19:11:39 2020 10.250.7.77:34612 TCP connection established with [AF_INET]100.64.1.1:63844\nSat Jan 11 19:11:39 2020 10.250.7.77:34612 Connection reset, restarting [0]\nSat Jan 11 19:11:39 2020 100.64.1.1:63844 Connection reset, restarting [0]\nSat Jan 11 19:11:43 2020 TCP connection established with [AF_INET]10.250.7.77:18490\nSat Jan 11 19:11:43 2020 10.250.7.77:18490 TCP connection established with [AF_INET]100.64.1.1:59430\nSat Jan 11 19:11:43 2020 10.250.7.77:18490 Connection reset, restarting [0]\nSat Jan 11 19:11:43 2020 100.64.1.1:59430 Connection reset, restarting [0]\nSat Jan 11 19:11:49 2020 TCP connection established with [AF_INET]10.250.7.77:34618\nSat Jan 11 19:11:49 2020 10.250.7.77:34618 TCP connection established with [AF_INET]100.64.1.1:63850\nSat Jan 11 19:11:49 2020 10.250.7.77:34618 Connection reset, restarting [0]\nSat Jan 11 19:11:49 2020 100.64.1.1:63850 Connection reset, restarting [0]\nSat Jan 11 19:11:53 2020 TCP connection established with [AF_INET]10.250.7.77:18496\nSat Jan 11 19:11:53 2020 10.250.7.77:18496 TCP connection established with [AF_INET]100.64.1.1:59436\nSat Jan 11 19:11:53 2020 10.250.7.77:18496 Connection reset, restarting [0]\nSat Jan 11 19:11:53 2020 100.64.1.1:59436 Connection reset, restarting [0]\nSat Jan 11 19:11:59 2020 TCP connection established with [AF_INET]10.250.7.77:34630\nSat Jan 11 19:11:59 2020 10.250.7.77:34630 TCP connection established with [AF_INET]100.64.1.1:63862\nSat Jan 11 19:11:59 2020 10.250.7.77:34630 Connection reset, restarting [0]\nSat Jan 11 19:11:59 2020 100.64.1.1:63862 Connection reset, restarting [0]\nSat Jan 11 19:12:03 2020 TCP connection established with [AF_INET]10.250.7.77:18514\nSat Jan 11 19:12:03 2020 10.250.7.77:18514 TCP connection established with [AF_INET]100.64.1.1:59454\nSat Jan 11 19:12:03 2020 10.250.7.77:18514 Connection reset, restarting [0]\nSat Jan 11 19:12:03 2020 100.64.1.1:59454 Connection reset, restarting [0]\nSat Jan 11 19:12:09 2020 TCP connection established with [AF_INET]10.250.7.77:34642\nSat Jan 11 19:12:09 2020 10.250.7.77:34642 TCP connection established with [AF_INET]100.64.1.1:63874\nSat Jan 11 19:12:09 2020 10.250.7.77:34642 Connection reset, restarting [0]\nSat Jan 11 19:12:09 2020 100.64.1.1:63874 Connection reset, restarting [0]\nSat Jan 11 19:12:13 2020 TCP connection established with 
[AF_INET]10.250.7.77:18524\nSat Jan 11 19:12:13 2020 10.250.7.77:18524 TCP connection established with [AF_INET]100.64.1.1:59464\nSat Jan 11 19:12:13 2020 10.250.7.77:18524 Connection reset, restarting [0]\nSat Jan 11 19:12:13 2020 100.64.1.1:59464 Connection reset, restarting [0]\nSat Jan 11 19:12:19 2020 TCP connection established with [AF_INET]10.250.7.77:34654\nSat Jan 11 19:12:19 2020 10.250.7.77:34654 TCP connection established with [AF_INET]100.64.1.1:63886\nSat Jan 11 19:12:19 2020 10.250.7.77:34654 Connection reset, restarting [0]\nSat Jan 11 19:12:19 2020 100.64.1.1:63886 Connection reset, restarting [0]\nSat Jan 11 19:12:23 2020 TCP connection established with [AF_INET]10.250.7.77:18534\nSat Jan 11 19:12:23 2020 10.250.7.77:18534 TCP connection established with [AF_INET]100.64.1.1:59474\nSat Jan 11 19:12:23 2020 10.250.7.77:18534 Connection reset, restarting [0]\nSat Jan 11 19:12:23 2020 100.64.1.1:59474 Connection reset, restarting [0]\nSat Jan 11 19:12:29 2020 TCP connection established with [AF_INET]10.250.7.77:34662\nSat Jan 11 19:12:29 2020 10.250.7.77:34662 TCP connection established with [AF_INET]100.64.1.1:63894\nSat Jan 11 19:12:29 2020 10.250.7.77:34662 Connection reset, restarting [0]\nSat Jan 11 19:12:29 2020 100.64.1.1:63894 Connection reset, restarting [0]\nSat Jan 11 19:12:33 2020 TCP connection established with [AF_INET]10.250.7.77:18540\nSat Jan 11 19:12:33 2020 10.250.7.77:18540 TCP connection established with [AF_INET]100.64.1.1:59480\nSat Jan 11 19:12:33 2020 10.250.7.77:18540 Connection reset, restarting [0]\nSat Jan 11 19:12:33 2020 100.64.1.1:59480 Connection reset, restarting [0]\nSat Jan 11 19:12:39 2020 TCP connection established with [AF_INET]10.250.7.77:34670\nSat Jan 11 19:12:39 2020 10.250.7.77:34670 TCP connection established with [AF_INET]100.64.1.1:63902\nSat Jan 11 19:12:39 2020 10.250.7.77:34670 Connection reset, restarting [0]\nSat Jan 11 19:12:39 2020 100.64.1.1:63902 Connection reset, restarting [0]\nSat Jan 11 19:12:43 2020 TCP connection established with [AF_INET]10.250.7.77:18548\nSat Jan 11 19:12:43 2020 10.250.7.77:18548 TCP connection established with [AF_INET]100.64.1.1:59488\nSat Jan 11 19:12:43 2020 10.250.7.77:18548 Connection reset, restarting [0]\nSat Jan 11 19:12:43 2020 100.64.1.1:59488 Connection reset, restarting [0]\nSat Jan 11 19:12:49 2020 TCP connection established with [AF_INET]10.250.7.77:34676\nSat Jan 11 19:12:49 2020 10.250.7.77:34676 TCP connection established with [AF_INET]100.64.1.1:63908\nSat Jan 11 19:12:49 2020 10.250.7.77:34676 Connection reset, restarting [0]\nSat Jan 11 19:12:49 2020 100.64.1.1:63908 Connection reset, restarting [0]\nSat Jan 11 19:12:53 2020 TCP connection established with [AF_INET]10.250.7.77:18554\nSat Jan 11 19:12:53 2020 10.250.7.77:18554 TCP connection established with [AF_INET]100.64.1.1:59494\nSat Jan 11 19:12:53 2020 10.250.7.77:18554 Connection reset, restarting [0]\nSat Jan 11 19:12:53 2020 100.64.1.1:59494 Connection reset, restarting [0]\nSat Jan 11 19:12:59 2020 TCP connection established with [AF_INET]10.250.7.77:34688\nSat Jan 11 19:12:59 2020 10.250.7.77:34688 TCP connection established with [AF_INET]100.64.1.1:63920\nSat Jan 11 19:12:59 2020 10.250.7.77:34688 Connection reset, restarting [0]\nSat Jan 11 19:12:59 2020 100.64.1.1:63920 Connection reset, restarting [0]\nSat Jan 11 19:13:03 2020 TCP connection established with [AF_INET]10.250.7.77:18574\nSat Jan 11 19:13:03 2020 10.250.7.77:18574 TCP connection established with [AF_INET]100.64.1.1:59514\nSat Jan 11 19:13:03 2020 
10.250.7.77:18574 Connection reset, restarting [0]\nSat Jan 11 19:13:03 2020 100.64.1.1:59514 Connection reset, restarting [0]\nSat Jan 11 19:13:09 2020 TCP connection established with [AF_INET]10.250.7.77:34700\nSat Jan 11 19:13:09 2020 10.250.7.77:34700 TCP connection established with [AF_INET]100.64.1.1:63932\nSat Jan 11 19:13:09 2020 10.250.7.77:34700 Connection reset, restarting [0]\nSat Jan 11 19:13:09 2020 100.64.1.1:63932 Connection reset, restarting [0]\nSat Jan 11 19:13:13 2020 TCP connection established with [AF_INET]10.250.7.77:18578\nSat Jan 11 19:13:13 2020 10.250.7.77:18578 TCP connection established with [AF_INET]100.64.1.1:59518\nSat Jan 11 19:13:13 2020 10.250.7.77:18578 Connection reset, restarting [0]\nSat Jan 11 19:13:13 2020 100.64.1.1:59518 Connection reset, restarting [0]\nSat Jan 11 19:13:19 2020 TCP connection established with [AF_INET]10.250.7.77:34712\nSat Jan 11 19:13:19 2020 10.250.7.77:34712 TCP connection established with [AF_INET]100.64.1.1:63944\nSat Jan 11 19:13:19 2020 10.250.7.77:34712 Connection reset, restarting [0]\nSat Jan 11 19:13:19 2020 100.64.1.1:63944 Connection reset, restarting [0]\nSat Jan 11 19:13:23 2020 TCP connection established with [AF_INET]10.250.7.77:18592\nSat Jan 11 19:13:23 2020 10.250.7.77:18592 TCP connection established with [AF_INET]100.64.1.1:59532\nSat Jan 11 19:13:23 2020 10.250.7.77:18592 Connection reset, restarting [0]\nSat Jan 11 19:13:23 2020 100.64.1.1:59532 Connection reset, restarting [0]\nSat Jan 11 19:13:29 2020 TCP connection established with [AF_INET]10.250.7.77:34716\nSat Jan 11 19:13:29 2020 10.250.7.77:34716 TCP connection established with [AF_INET]100.64.1.1:63948\nSat Jan 11 19:13:29 2020 10.250.7.77:34716 Connection reset, restarting [0]\nSat Jan 11 19:13:29 2020 100.64.1.1:63948 Connection reset, restarting [0]\nSat Jan 11 19:13:33 2020 TCP connection established with [AF_INET]10.250.7.77:18598\nSat Jan 11 19:13:33 2020 10.250.7.77:18598 TCP connection established with [AF_INET]100.64.1.1:59538\nSat Jan 11 19:13:33 2020 10.250.7.77:18598 Connection reset, restarting [0]\nSat Jan 11 19:13:33 2020 100.64.1.1:59538 Connection reset, restarting [0]\nSat Jan 11 19:13:39 2020 TCP connection established with [AF_INET]10.250.7.77:34724\nSat Jan 11 19:13:39 2020 10.250.7.77:34724 TCP connection established with [AF_INET]100.64.1.1:63956\nSat Jan 11 19:13:39 2020 10.250.7.77:34724 Connection reset, restarting [0]\nSat Jan 11 19:13:39 2020 100.64.1.1:63956 Connection reset, restarting [0]\nSat Jan 11 19:13:43 2020 TCP connection established with [AF_INET]10.250.7.77:18616\nSat Jan 11 19:13:43 2020 10.250.7.77:18616 TCP connection established with [AF_INET]100.64.1.1:59556\nSat Jan 11 19:13:43 2020 10.250.7.77:18616 Connection reset, restarting [0]\nSat Jan 11 19:13:43 2020 100.64.1.1:59556 Connection reset, restarting [0]\nSat Jan 11 19:13:49 2020 TCP connection established with [AF_INET]10.250.7.77:34734\nSat Jan 11 19:13:49 2020 10.250.7.77:34734 TCP connection established with [AF_INET]100.64.1.1:63966\nSat Jan 11 19:13:49 2020 10.250.7.77:34734 Connection reset, restarting [0]\nSat Jan 11 19:13:49 2020 100.64.1.1:63966 Connection reset, restarting [0]\nSat Jan 11 19:13:53 2020 TCP connection established with [AF_INET]10.250.7.77:18622\nSat Jan 11 19:13:53 2020 10.250.7.77:18622 TCP connection established with [AF_INET]100.64.1.1:59562\nSat Jan 11 19:13:53 2020 10.250.7.77:18622 Connection reset, restarting [0]\nSat Jan 11 19:13:53 2020 100.64.1.1:59562 Connection reset, restarting [0]\nSat Jan 11 19:13:59 2020 
TCP connection established with [AF_INET]10.250.7.77:34746\nSat Jan 11 19:13:59 2020 10.250.7.77:34746 TCP connection established with [AF_INET]100.64.1.1:63978\nSat Jan 11 19:13:59 2020 10.250.7.77:34746 Connection reset, restarting [0]\nSat Jan 11 19:13:59 2020 100.64.1.1:63978 Connection reset, restarting [0]\nSat Jan 11 19:14:03 2020 TCP connection established with [AF_INET]10.250.7.77:18648\nSat Jan 11 19:14:03 2020 10.250.7.77:18648 TCP connection established with [AF_INET]100.64.1.1:59588\nSat Jan 11 19:14:03 2020 10.250.7.77:18648 Connection reset, restarting [0]\nSat Jan 11 19:14:03 2020 100.64.1.1:59588 Connection reset, restarting [0]\nSat Jan 11 19:14:09 2020 TCP connection established with [AF_INET]10.250.7.77:34764\nSat Jan 11 19:14:09 2020 10.250.7.77:34764 TCP connection established with [AF_INET]100.64.1.1:63996\nSat Jan 11 19:14:09 2020 10.250.7.77:34764 Connection reset, restarting [0]\nSat Jan 11 19:14:09 2020 100.64.1.1:63996 Connection reset, restarting [0]\nSat Jan 11 19:14:13 2020 TCP connection established with [AF_INET]10.250.7.77:18652\nSat Jan 11 19:14:13 2020 10.250.7.77:18652 TCP connection established with [AF_INET]100.64.1.1:59592\nSat Jan 11 19:14:13 2020 10.250.7.77:18652 Connection reset, restarting [0]\nSat Jan 11 19:14:13 2020 100.64.1.1:59592 Connection reset, restarting [0]\nSat Jan 11 19:14:19 2020 TCP connection established with [AF_INET]10.250.7.77:34776\nSat Jan 11 19:14:19 2020 10.250.7.77:34776 TCP connection established with [AF_INET]100.64.1.1:64008\nSat Jan 11 19:14:19 2020 10.250.7.77:34776 Connection reset, restarting [0]\nSat Jan 11 19:14:19 2020 100.64.1.1:64008 Connection reset, restarting [0]\nSat Jan 11 19:14:23 2020 TCP connection established with [AF_INET]10.250.7.77:18662\nSat Jan 11 19:14:23 2020 10.250.7.77:18662 TCP connection established with [AF_INET]100.64.1.1:59602\nSat Jan 11 19:14:23 2020 10.250.7.77:18662 Connection reset, restarting [0]\nSat Jan 11 19:14:23 2020 100.64.1.1:59602 Connection reset, restarting [0]\nSat Jan 11 19:14:29 2020 TCP connection established with [AF_INET]10.250.7.77:34780\nSat Jan 11 19:14:29 2020 10.250.7.77:34780 TCP connection established with [AF_INET]100.64.1.1:64012\nSat Jan 11 19:14:29 2020 10.250.7.77:34780 Connection reset, restarting [0]\nSat Jan 11 19:14:29 2020 100.64.1.1:64012 Connection reset, restarting [0]\nSat Jan 11 19:14:33 2020 TCP connection established with [AF_INET]10.250.7.77:18668\nSat Jan 11 19:14:33 2020 10.250.7.77:18668 TCP connection established with [AF_INET]100.64.1.1:59608\nSat Jan 11 19:14:33 2020 10.250.7.77:18668 Connection reset, restarting [0]\nSat Jan 11 19:14:33 2020 100.64.1.1:59608 Connection reset, restarting [0]\nSat Jan 11 19:14:39 2020 TCP connection established with [AF_INET]10.250.7.77:34800\nSat Jan 11 19:14:39 2020 10.250.7.77:34800 TCP connection established with [AF_INET]100.64.1.1:64032\nSat Jan 11 19:14:39 2020 10.250.7.77:34800 Connection reset, restarting [0]\nSat Jan 11 19:14:39 2020 100.64.1.1:64032 Connection reset, restarting [0]\nSat Jan 11 19:14:43 2020 TCP connection established with [AF_INET]10.250.7.77:18680\nSat Jan 11 19:14:43 2020 10.250.7.77:18680 TCP connection established with [AF_INET]100.64.1.1:59620\nSat Jan 11 19:14:43 2020 10.250.7.77:18680 Connection reset, restarting [0]\nSat Jan 11 19:14:43 2020 100.64.1.1:59620 Connection reset, restarting [0]\nSat Jan 11 19:14:49 2020 TCP connection established with [AF_INET]10.250.7.77:34812\nSat Jan 11 19:14:49 2020 10.250.7.77:34812 TCP connection established with 
[AF_INET]100.64.1.1:64044\nSat Jan 11 19:14:49 2020 10.250.7.77:34812 Connection reset, restarting [0]\nSat Jan 11 19:14:49 2020 100.64.1.1:64044 Connection reset, restarting [0]\nSat Jan 11 19:14:53 2020 TCP connection established with [AF_INET]10.250.7.77:18688\nSat Jan 11 19:14:53 2020 10.250.7.77:18688 TCP connection established with [AF_INET]100.64.1.1:59628\nSat Jan 11 19:14:53 2020 10.250.7.77:18688 Connection reset, restarting [0]\nSat Jan 11 19:14:53 2020 100.64.1.1:59628 Connection reset, restarting [0]\nSat Jan 11 19:14:59 2020 TCP connection established with [AF_INET]10.250.7.77:34830\nSat Jan 11 19:14:59 2020 10.250.7.77:34830 TCP connection established with [AF_INET]100.64.1.1:64062\nSat Jan 11 19:14:59 2020 10.250.7.77:34830 Connection reset, restarting [0]\nSat Jan 11 19:14:59 2020 100.64.1.1:64062 Connection reset, restarting [0]\nSat Jan 11 19:15:03 2020 TCP connection established with [AF_INET]10.250.7.77:18706\nSat Jan 11 19:15:03 2020 10.250.7.77:18706 TCP connection established with [AF_INET]100.64.1.1:59646\nSat Jan 11 19:15:03 2020 10.250.7.77:18706 Connection reset, restarting [0]\nSat Jan 11 19:15:03 2020 100.64.1.1:59646 Connection reset, restarting [0]\nSat Jan 11 19:15:09 2020 TCP connection established with [AF_INET]10.250.7.77:34840\nSat Jan 11 19:15:09 2020 10.250.7.77:34840 TCP connection established with [AF_INET]100.64.1.1:64072\nSat Jan 11 19:15:09 2020 10.250.7.77:34840 Connection reset, restarting [0]\nSat Jan 11 19:15:09 2020 100.64.1.1:64072 Connection reset, restarting [0]\nSat Jan 11 19:15:13 2020 TCP connection established with [AF_INET]10.250.7.77:18710\nSat Jan 11 19:15:13 2020 10.250.7.77:18710 TCP connection established with [AF_INET]100.64.1.1:59650\nSat Jan 11 19:15:13 2020 10.250.7.77:18710 Connection reset, restarting [0]\nSat Jan 11 19:15:13 2020 100.64.1.1:59650 Connection reset, restarting [0]\nSat Jan 11 19:15:19 2020 TCP connection established with [AF_INET]10.250.7.77:34852\nSat Jan 11 19:15:19 2020 10.250.7.77:34852 TCP connection established with [AF_INET]100.64.1.1:64084\nSat Jan 11 19:15:19 2020 10.250.7.77:34852 Connection reset, restarting [0]\nSat Jan 11 19:15:19 2020 100.64.1.1:64084 Connection reset, restarting [0]\nSat Jan 11 19:15:23 2020 TCP connection established with [AF_INET]10.250.7.77:18720\nSat Jan 11 19:15:23 2020 10.250.7.77:18720 TCP connection established with [AF_INET]100.64.1.1:59660\nSat Jan 11 19:15:23 2020 10.250.7.77:18720 Connection reset, restarting [0]\nSat Jan 11 19:15:23 2020 100.64.1.1:59660 Connection reset, restarting [0]\nSat Jan 11 19:15:29 2020 TCP connection established with [AF_INET]10.250.7.77:34870\nSat Jan 11 19:15:29 2020 10.250.7.77:34870 TCP connection established with [AF_INET]100.64.1.1:64102\nSat Jan 11 19:15:29 2020 10.250.7.77:34870 Connection reset, restarting [0]\nSat Jan 11 19:15:29 2020 100.64.1.1:64102 Connection reset, restarting [0]\nSat Jan 11 19:15:33 2020 TCP connection established with [AF_INET]10.250.7.77:18726\nSat Jan 11 19:15:33 2020 10.250.7.77:18726 TCP connection established with [AF_INET]100.64.1.1:59666\nSat Jan 11 19:15:33 2020 10.250.7.77:18726 Connection reset, restarting [0]\nSat Jan 11 19:15:33 2020 100.64.1.1:59666 Connection reset, restarting [0]\nSat Jan 11 19:15:39 2020 TCP connection established with [AF_INET]10.250.7.77:34886\nSat Jan 11 19:15:39 2020 10.250.7.77:34886 TCP connection established with [AF_INET]100.64.1.1:64118\nSat Jan 11 19:15:39 2020 10.250.7.77:34886 Connection reset, restarting [0]\nSat Jan 11 19:15:39 2020 100.64.1.1:64118 
Connection reset, restarting [0]\nSat Jan 11 19:15:43 2020 TCP connection established with [AF_INET]10.250.7.77:18734\nSat Jan 11 19:15:43 2020 10.250.7.77:18734 TCP connection established with [AF_INET]100.64.1.1:59674\nSat Jan 11 19:15:43 2020 10.250.7.77:18734 Connection reset, restarting [0]\nSat Jan 11 19:15:43 2020 100.64.1.1:59674 Connection reset, restarting [0]\nSat Jan 11 19:15:49 2020 TCP connection established with [AF_INET]10.250.7.77:34892\nSat Jan 11 19:15:49 2020 10.250.7.77:34892 TCP connection established with [AF_INET]100.64.1.1:64124\nSat Jan 11 19:15:49 2020 10.250.7.77:34892 Connection reset, restarting [0]\nSat Jan 11 19:15:49 2020 100.64.1.1:64124 Connection reset, restarting [0]\nSat Jan 11 19:15:53 2020 TCP connection established with [AF_INET]10.250.7.77:18780\nSat Jan 11 19:15:53 2020 10.250.7.77:18780 TCP connection established with [AF_INET]100.64.1.1:59720\nSat Jan 11 19:15:53 2020 10.250.7.77:18780 Connection reset, restarting [0]\nSat Jan 11 19:15:53 2020 100.64.1.1:59720 Connection reset, restarting [0]\nSat Jan 11 19:15:59 2020 TCP connection established with [AF_INET]10.250.7.77:34906\nSat Jan 11 19:15:59 2020 10.250.7.77:34906 TCP connection established with [AF_INET]100.64.1.1:64138\nSat Jan 11 19:15:59 2020 10.250.7.77:34906 Connection reset, restarting [0]\nSat Jan 11 19:15:59 2020 100.64.1.1:64138 Connection reset, restarting [0]\nSat Jan 11 19:16:03 2020 TCP connection established with [AF_INET]10.250.7.77:18798\nSat Jan 11 19:16:03 2020 10.250.7.77:18798 TCP connection established with [AF_INET]100.64.1.1:59738\nSat Jan 11 19:16:03 2020 10.250.7.77:18798 Connection reset, restarting [0]\nSat Jan 11 19:16:03 2020 100.64.1.1:59738 Connection reset, restarting [0]\nSat Jan 11 19:16:09 2020 TCP connection established with [AF_INET]10.250.7.77:34916\nSat Jan 11 19:16:09 2020 10.250.7.77:34916 TCP connection established with [AF_INET]100.64.1.1:64148\nSat Jan 11 19:16:09 2020 10.250.7.77:34916 Connection reset, restarting [0]\nSat Jan 11 19:16:09 2020 100.64.1.1:64148 Connection reset, restarting [0]\nSat Jan 11 19:16:13 2020 TCP connection established with [AF_INET]10.250.7.77:18804\nSat Jan 11 19:16:13 2020 10.250.7.77:18804 TCP connection established with [AF_INET]100.64.1.1:59744\nSat Jan 11 19:16:13 2020 10.250.7.77:18804 Connection reset, restarting [0]\nSat Jan 11 19:16:13 2020 100.64.1.1:59744 Connection reset, restarting [0]\nSat Jan 11 19:16:19 2020 TCP connection established with [AF_INET]10.250.7.77:34932\nSat Jan 11 19:16:19 2020 10.250.7.77:34932 TCP connection established with [AF_INET]100.64.1.1:64164\nSat Jan 11 19:16:19 2020 10.250.7.77:34932 Connection reset, restarting [0]\nSat Jan 11 19:16:19 2020 100.64.1.1:64164 Connection reset, restarting [0]\nSat Jan 11 19:16:23 2020 TCP connection established with [AF_INET]10.250.7.77:18814\nSat Jan 11 19:16:23 2020 10.250.7.77:18814 TCP connection established with [AF_INET]100.64.1.1:59754\nSat Jan 11 19:16:23 2020 10.250.7.77:18814 Connection reset, restarting [0]\nSat Jan 11 19:16:23 2020 100.64.1.1:59754 Connection reset, restarting [0]\nSat Jan 11 19:16:29 2020 TCP connection established with [AF_INET]10.250.7.77:34936\nSat Jan 11 19:16:29 2020 10.250.7.77:34936 TCP connection established with [AF_INET]100.64.1.1:64168\nSat Jan 11 19:16:29 2020 10.250.7.77:34936 Connection reset, restarting [0]\nSat Jan 11 19:16:29 2020 100.64.1.1:64168 Connection reset, restarting [0]\nSat Jan 11 19:16:33 2020 TCP connection established with [AF_INET]10.250.7.77:18826\nSat Jan 11 19:16:33 2020 
10.250.7.77:18826 TCP connection established with [AF_INET]100.64.1.1:59766\nSat Jan 11 19:16:33 2020 10.250.7.77:18826 Connection reset, restarting [0]\nSat Jan 11 19:16:33 2020 100.64.1.1:59766 Connection reset, restarting [0]\nSat Jan 11 19:16:39 2020 TCP connection established with [AF_INET]100.64.1.1:64176\nSat Jan 11 19:16:39 2020 100.64.1.1:64176 Connection reset, restarting [0]\nSat Jan 11 19:16:39 2020 TCP connection established with [AF_INET]10.250.7.77:34944\nSat Jan 11 19:16:39 2020 10.250.7.77:34944 Connection reset, restarting [0]\nSat Jan 11 19:16:43 2020 TCP connection established with [AF_INET]10.250.7.77:18834\nSat Jan 11 19:16:43 2020 10.250.7.77:18834 TCP connection established with [AF_INET]100.64.1.1:59774\nSat Jan 11 19:16:43 2020 10.250.7.77:18834 Connection reset, restarting [0]\nSat Jan 11 19:16:43 2020 100.64.1.1:59774 Connection reset, restarting [0]\nSat Jan 11 19:16:49 2020 TCP connection established with [AF_INET]10.250.7.77:34952\nSat Jan 11 19:16:49 2020 10.250.7.77:34952 Connection reset, restarting [0]\nSat Jan 11 19:16:49 2020 TCP connection established with [AF_INET]100.64.1.1:64184\nSat Jan 11 19:16:49 2020 100.64.1.1:64184 Connection reset, restarting [0]\nSat Jan 11 19:16:53 2020 TCP connection established with [AF_INET]10.250.7.77:18842\nSat Jan 11 19:16:53 2020 10.250.7.77:18842 TCP connection established with [AF_INET]100.64.1.1:59782\nSat Jan 11 19:16:53 2020 10.250.7.77:18842 Connection reset, restarting [0]\nSat Jan 11 19:16:53 2020 100.64.1.1:59782 Connection reset, restarting [0]\nSat Jan 11 19:16:59 2020 TCP connection established with [AF_INET]10.250.7.77:34964\nSat Jan 11 19:16:59 2020 10.250.7.77:34964 TCP connection established with [AF_INET]100.64.1.1:64196\nSat Jan 11 19:16:59 2020 10.250.7.77:34964 Connection reset, restarting [0]\nSat Jan 11 19:16:59 2020 100.64.1.1:64196 Connection reset, restarting [0]\nSat Jan 11 19:17:03 2020 TCP connection established with [AF_INET]10.250.7.77:18860\nSat Jan 11 19:17:03 2020 10.250.7.77:18860 TCP connection established with [AF_INET]100.64.1.1:59800\nSat Jan 11 19:17:03 2020 10.250.7.77:18860 Connection reset, restarting [0]\nSat Jan 11 19:17:03 2020 100.64.1.1:59800 Connection reset, restarting [0]\nSat Jan 11 19:17:09 2020 TCP connection established with [AF_INET]10.250.7.77:34974\nSat Jan 11 19:17:09 2020 10.250.7.77:34974 TCP connection established with [AF_INET]100.64.1.1:64206\nSat Jan 11 19:17:09 2020 10.250.7.77:34974 Connection reset, restarting [0]\nSat Jan 11 19:17:09 2020 100.64.1.1:64206 Connection reset, restarting [0]\nSat Jan 11 19:17:13 2020 TCP connection established with [AF_INET]10.250.7.77:18868\nSat Jan 11 19:17:13 2020 10.250.7.77:18868 TCP connection established with [AF_INET]100.64.1.1:59808\nSat Jan 11 19:17:13 2020 10.250.7.77:18868 Connection reset, restarting [0]\nSat Jan 11 19:17:13 2020 100.64.1.1:59808 Connection reset, restarting [0]\nSat Jan 11 19:17:19 2020 TCP connection established with [AF_INET]10.250.7.77:34986\nSat Jan 11 19:17:19 2020 10.250.7.77:34986 TCP connection established with [AF_INET]100.64.1.1:64218\nSat Jan 11 19:17:19 2020 10.250.7.77:34986 Connection reset, restarting [0]\nSat Jan 11 19:17:19 2020 100.64.1.1:64218 Connection reset, restarting [0]\nSat Jan 11 19:17:23 2020 TCP connection established with [AF_INET]10.250.7.77:18884\nSat Jan 11 19:17:23 2020 10.250.7.77:18884 TCP connection established with [AF_INET]100.64.1.1:59824\nSat Jan 11 19:17:23 2020 10.250.7.77:18884 Connection reset, restarting [0]\nSat Jan 11 19:17:23 2020 
100.64.1.1:59824 Connection reset, restarting [0]\nSat Jan 11 19:17:29 2020 TCP connection established with [AF_INET]10.250.7.77:34994\nSat Jan 11 19:17:29 2020 10.250.7.77:34994 TCP connection established with [AF_INET]100.64.1.1:64226\nSat Jan 11 19:17:29 2020 10.250.7.77:34994 Connection reset, restarting [0]\nSat Jan 11 19:17:29 2020 100.64.1.1:64226 Connection reset, restarting [0]\nSat Jan 11 19:17:33 2020 TCP connection established with [AF_INET]10.250.7.77:18890\nSat Jan 11 19:17:33 2020 10.250.7.77:18890 TCP connection established with [AF_INET]100.64.1.1:59830\nSat Jan 11 19:17:33 2020 10.250.7.77:18890 Connection reset, restarting [0]\nSat Jan 11 19:17:33 2020 100.64.1.1:59830 Connection reset, restarting [0]\nSat Jan 11 19:17:39 2020 TCP connection established with [AF_INET]10.250.7.77:35002\nSat Jan 11 19:17:39 2020 10.250.7.77:35002 TCP connection established with [AF_INET]100.64.1.1:64234\nSat Jan 11 19:17:39 2020 10.250.7.77:35002 Connection reset, restarting [0]\nSat Jan 11 19:17:39 2020 100.64.1.1:64234 Connection reset, restarting [0]\nSat Jan 11 19:17:43 2020 TCP connection established with [AF_INET]10.250.7.77:18900\nSat Jan 11 19:17:43 2020 10.250.7.77:18900 TCP connection established with [AF_INET]100.64.1.1:59840\nSat Jan 11 19:17:43 2020 10.250.7.77:18900 Connection reset, restarting [0]\nSat Jan 11 19:17:43 2020 100.64.1.1:59840 Connection reset, restarting [0]\nSat Jan 11 19:17:49 2020 TCP connection established with [AF_INET]10.250.7.77:35010\nSat Jan 11 19:17:49 2020 10.250.7.77:35010 Connection reset, restarting [0]\nSat Jan 11 19:17:49 2020 TCP connection established with [AF_INET]100.64.1.1:64242\nSat Jan 11 19:17:49 2020 100.64.1.1:64242 Connection reset, restarting [0]\nSat Jan 11 19:17:53 2020 TCP connection established with [AF_INET]10.250.7.77:18916\nSat Jan 11 19:17:53 2020 10.250.7.77:18916 TCP connection established with [AF_INET]100.64.1.1:59856\nSat Jan 11 19:17:53 2020 10.250.7.77:18916 Connection reset, restarting [0]\nSat Jan 11 19:17:53 2020 100.64.1.1:59856 Connection reset, restarting [0]\nSat Jan 11 19:17:59 2020 TCP connection established with [AF_INET]10.250.7.77:35022\nSat Jan 11 19:17:59 2020 10.250.7.77:35022 TCP connection established with [AF_INET]100.64.1.1:64254\nSat Jan 11 19:17:59 2020 10.250.7.77:35022 Connection reset, restarting [0]\nSat Jan 11 19:17:59 2020 100.64.1.1:64254 Connection reset, restarting [0]\nSat Jan 11 19:18:03 2020 TCP connection established with [AF_INET]10.250.7.77:18934\nSat Jan 11 19:18:03 2020 10.250.7.77:18934 TCP connection established with [AF_INET]100.64.1.1:59874\nSat Jan 11 19:18:03 2020 10.250.7.77:18934 Connection reset, restarting [0]\nSat Jan 11 19:18:03 2020 100.64.1.1:59874 Connection reset, restarting [0]\nSat Jan 11 19:18:09 2020 TCP connection established with [AF_INET]10.250.7.77:35032\nSat Jan 11 19:18:09 2020 10.250.7.77:35032 TCP connection established with [AF_INET]100.64.1.1:64264\nSat Jan 11 19:18:09 2020 10.250.7.77:35032 Connection reset, restarting [0]\nSat Jan 11 19:18:09 2020 100.64.1.1:64264 Connection reset, restarting [0]\nSat Jan 11 19:18:13 2020 TCP connection established with [AF_INET]10.250.7.77:18938\nSat Jan 11 19:18:13 2020 10.250.7.77:18938 TCP connection established with [AF_INET]100.64.1.1:59878\nSat Jan 11 19:18:13 2020 10.250.7.77:18938 Connection reset, restarting [0]\nSat Jan 11 19:18:13 2020 100.64.1.1:59878 Connection reset, restarting [0]\nSat Jan 11 19:18:19 2020 TCP connection established with [AF_INET]10.250.7.77:35044\nSat Jan 11 19:18:19 2020 
10.250.7.77:35044 TCP connection established with [AF_INET]100.64.1.1:64276\nSat Jan 11 19:18:19 2020 10.250.7.77:35044 Connection reset, restarting [0]\nSat Jan 11 19:18:19 2020 100.64.1.1:64276 Connection reset, restarting [0]\nSat Jan 11 19:18:23 2020 TCP connection established with [AF_INET]10.250.7.77:18952\nSat Jan 11 19:18:23 2020 10.250.7.77:18952 TCP connection established with [AF_INET]100.64.1.1:59892\nSat Jan 11 19:18:23 2020 10.250.7.77:18952 Connection reset, restarting [0]\nSat Jan 11 19:18:23 2020 100.64.1.1:59892 Connection reset, restarting [0]\nSat Jan 11 19:18:29 2020 TCP connection established with [AF_INET]10.250.7.77:35048\nSat Jan 11 19:18:29 2020 10.250.7.77:35048 TCP connection established with [AF_INET]100.64.1.1:64280\nSat Jan 11 19:18:29 2020 10.250.7.77:35048 Connection reset, restarting [0]\nSat Jan 11 19:18:29 2020 100.64.1.1:64280 Connection reset, restarting [0]\nSat Jan 11 19:18:33 2020 TCP connection established with [AF_INET]10.250.7.77:18958\nSat Jan 11 19:18:33 2020 10.250.7.77:18958 TCP connection established with [AF_INET]100.64.1.1:59898\nSat Jan 11 19:18:33 2020 10.250.7.77:18958 Connection reset, restarting [0]\nSat Jan 11 19:18:33 2020 100.64.1.1:59898 Connection reset, restarting [0]\nSat Jan 11 19:18:39 2020 TCP connection established with [AF_INET]10.250.7.77:35056\nSat Jan 11 19:18:39 2020 10.250.7.77:35056 TCP connection established with [AF_INET]100.64.1.1:64288\nSat Jan 11 19:18:39 2020 10.250.7.77:35056 Connection reset, restarting [0]\nSat Jan 11 19:18:39 2020 100.64.1.1:64288 Connection reset, restarting [0]\nSat Jan 11 19:18:43 2020 TCP connection established with [AF_INET]10.250.7.77:18968\nSat Jan 11 19:18:43 2020 10.250.7.77:18968 TCP connection established with [AF_INET]100.64.1.1:59908\nSat Jan 11 19:18:43 2020 10.250.7.77:18968 Connection reset, restarting [0]\nSat Jan 11 19:18:43 2020 100.64.1.1:59908 Connection reset, restarting [0]\nSat Jan 11 19:18:49 2020 TCP connection established with [AF_INET]10.250.7.77:35102\nSat Jan 11 19:18:49 2020 10.250.7.77:35102 TCP connection established with [AF_INET]100.64.1.1:64334\nSat Jan 11 19:18:49 2020 10.250.7.77:35102 Connection reset, restarting [0]\nSat Jan 11 19:18:49 2020 100.64.1.1:64334 Connection reset, restarting [0]\nSat Jan 11 19:18:53 2020 TCP connection established with [AF_INET]10.250.7.77:18974\nSat Jan 11 19:18:53 2020 10.250.7.77:18974 TCP connection established with [AF_INET]100.64.1.1:59914\nSat Jan 11 19:18:53 2020 10.250.7.77:18974 Connection reset, restarting [0]\nSat Jan 11 19:18:53 2020 100.64.1.1:59914 Connection reset, restarting [0]\nSat Jan 11 19:18:59 2020 TCP connection established with [AF_INET]10.250.7.77:35114\nSat Jan 11 19:18:59 2020 10.250.7.77:35114 TCP connection established with [AF_INET]100.64.1.1:64346\nSat Jan 11 19:18:59 2020 10.250.7.77:35114 Connection reset, restarting [0]\nSat Jan 11 19:18:59 2020 100.64.1.1:64346 Connection reset, restarting [0]\nSat Jan 11 19:19:03 2020 TCP connection established with [AF_INET]10.250.7.77:18992\nSat Jan 11 19:19:03 2020 10.250.7.77:18992 Connection reset, restarting [0]\nSat Jan 11 19:19:03 2020 TCP connection established with [AF_INET]100.64.1.1:59932\nSat Jan 11 19:19:03 2020 100.64.1.1:59932 Connection reset, restarting [0]\nSat Jan 11 19:19:09 2020 TCP connection established with [AF_INET]10.250.7.77:35126\nSat Jan 11 19:19:09 2020 10.250.7.77:35126 TCP connection established with [AF_INET]100.64.1.1:64358\nSat Jan 11 19:19:09 2020 10.250.7.77:35126 Connection reset, restarting [0]\nSat Jan 11 
19:19:09 2020 100.64.1.1:64358 Connection reset, restarting [0]\nSat Jan 11 19:19:13 2020 TCP connection established with [AF_INET]10.250.7.77:19002\nSat Jan 11 19:19:13 2020 10.250.7.77:19002 TCP connection established with [AF_INET]100.64.1.1:59942\nSat Jan 11 19:19:13 2020 10.250.7.77:19002 Connection reset, restarting [0]\nSat Jan 11 19:19:13 2020 100.64.1.1:59942 Connection reset, restarting [0]\nSat Jan 11 19:19:19 2020 TCP connection established with [AF_INET]10.250.7.77:35144\nSat Jan 11 19:19:19 2020 10.250.7.77:35144 TCP connection established with [AF_INET]100.64.1.1:64376\nSat Jan 11 19:19:19 2020 10.250.7.77:35144 Connection reset, restarting [0]\nSat Jan 11 19:19:19 2020 100.64.1.1:64376 Connection reset, restarting [0]\nSat Jan 11 19:19:23 2020 TCP connection established with [AF_INET]10.250.7.77:19018\nSat Jan 11 19:19:23 2020 10.250.7.77:19018 TCP connection established with [AF_INET]100.64.1.1:59958\nSat Jan 11 19:19:23 2020 10.250.7.77:19018 Connection reset, restarting [0]\nSat Jan 11 19:19:23 2020 100.64.1.1:59958 Connection reset, restarting [0]\nSat Jan 11 19:19:29 2020 TCP connection established with [AF_INET]10.250.7.77:35148\nSat Jan 11 19:19:29 2020 10.250.7.77:35148 TCP connection established with [AF_INET]100.64.1.1:64380\nSat Jan 11 19:19:29 2020 10.250.7.77:35148 Connection reset, restarting [0]\nSat Jan 11 19:19:29 2020 100.64.1.1:64380 Connection reset, restarting [0]\nSat Jan 11 19:19:33 2020 TCP connection established with [AF_INET]10.250.7.77:19026\nSat Jan 11 19:19:33 2020 10.250.7.77:19026 TCP connection established with [AF_INET]100.64.1.1:59966\nSat Jan 11 19:19:33 2020 10.250.7.77:19026 Connection reset, restarting [0]\nSat Jan 11 19:19:33 2020 100.64.1.1:59966 Connection reset, restarting [0]\nSat Jan 11 19:19:39 2020 TCP connection established with [AF_INET]10.250.7.77:35158\nSat Jan 11 19:19:39 2020 10.250.7.77:35158 TCP connection established with [AF_INET]100.64.1.1:64390\nSat Jan 11 19:19:39 2020 10.250.7.77:35158 Connection reset, restarting [0]\nSat Jan 11 19:19:39 2020 100.64.1.1:64390 Connection reset, restarting [0]\nSat Jan 11 19:19:43 2020 TCP connection established with [AF_INET]10.250.7.77:19038\nSat Jan 11 19:19:43 2020 10.250.7.77:19038 TCP connection established with [AF_INET]100.64.1.1:59978\nSat Jan 11 19:19:43 2020 10.250.7.77:19038 Connection reset, restarting [0]\nSat Jan 11 19:19:43 2020 100.64.1.1:59978 Connection reset, restarting [0]\nSat Jan 11 19:19:49 2020 TCP connection established with [AF_INET]10.250.7.77:35164\nSat Jan 11 19:19:49 2020 10.250.7.77:35164 TCP connection established with [AF_INET]100.64.1.1:64396\nSat Jan 11 19:19:49 2020 10.250.7.77:35164 Connection reset, restarting [0]\nSat Jan 11 19:19:49 2020 100.64.1.1:64396 Connection reset, restarting [0]\nSat Jan 11 19:19:53 2020 TCP connection established with [AF_INET]10.250.7.77:19052\nSat Jan 11 19:19:53 2020 10.250.7.77:19052 TCP connection established with [AF_INET]100.64.1.1:59992\nSat Jan 11 19:19:53 2020 10.250.7.77:19052 Connection reset, restarting [0]\nSat Jan 11 19:19:53 2020 100.64.1.1:59992 Connection reset, restarting [0]\nSat Jan 11 19:19:59 2020 TCP connection established with [AF_INET]10.250.7.77:35180\nSat Jan 11 19:19:59 2020 10.250.7.77:35180 TCP connection established with [AF_INET]100.64.1.1:64412\nSat Jan 11 19:19:59 2020 10.250.7.77:35180 Connection reset, restarting [0]\nSat Jan 11 19:19:59 2020 100.64.1.1:64412 Connection reset, restarting [0]\nSat Jan 11 19:20:03 2020 TCP connection established with [AF_INET]10.250.7.77:19070\nSat 
Jan 11 19:20:03 2020 10.250.7.77:19070 TCP connection established with [AF_INET]100.64.1.1:60010\nSat Jan 11 19:20:03 2020 10.250.7.77:19070 Connection reset, restarting [0]\nSat Jan 11 19:20:03 2020 100.64.1.1:60010 Connection reset, restarting [0]\nSat Jan 11 19:20:09 2020 TCP connection established with [AF_INET]10.250.7.77:35190\nSat Jan 11 19:20:09 2020 10.250.7.77:35190 TCP connection established with [AF_INET]100.64.1.1:64422\nSat Jan 11 19:20:09 2020 10.250.7.77:35190 Connection reset, restarting [0]\nSat Jan 11 19:20:09 2020 100.64.1.1:64422 Connection reset, restarting [0]\nSat Jan 11 19:20:13 2020 TCP connection established with [AF_INET]10.250.7.77:19074\nSat Jan 11 19:20:13 2020 10.250.7.77:19074 TCP connection established with [AF_INET]100.64.1.1:60014\nSat Jan 11 19:20:13 2020 10.250.7.77:19074 Connection reset, restarting [0]\nSat Jan 11 19:20:13 2020 100.64.1.1:60014 Connection reset, restarting [0]\nSat Jan 11 19:20:19 2020 TCP connection established with [AF_INET]10.250.7.77:35208\nSat Jan 11 19:20:19 2020 10.250.7.77:35208 TCP connection established with [AF_INET]100.64.1.1:64440\nSat Jan 11 19:20:19 2020 10.250.7.77:35208 Connection reset, restarting [0]\nSat Jan 11 19:20:19 2020 100.64.1.1:64440 Connection reset, restarting [0]\nSat Jan 11 19:20:23 2020 TCP connection established with [AF_INET]10.250.7.77:19084\nSat Jan 11 19:20:23 2020 10.250.7.77:19084 TCP connection established with [AF_INET]100.64.1.1:60024\nSat Jan 11 19:20:23 2020 10.250.7.77:19084 Connection reset, restarting [0]\nSat Jan 11 19:20:23 2020 100.64.1.1:60024 Connection reset, restarting [0]\nSat Jan 11 19:20:29 2020 TCP connection established with [AF_INET]10.250.7.77:35212\nSat Jan 11 19:20:29 2020 10.250.7.77:35212 TCP connection established with [AF_INET]100.64.1.1:64444\nSat Jan 11 19:20:29 2020 10.250.7.77:35212 Connection reset, restarting [0]\nSat Jan 11 19:20:29 2020 100.64.1.1:64444 Connection reset, restarting [0]\nSat Jan 11 19:20:33 2020 TCP connection established with [AF_INET]10.250.7.77:19092\nSat Jan 11 19:20:33 2020 10.250.7.77:19092 TCP connection established with [AF_INET]100.64.1.1:60032\nSat Jan 11 19:20:33 2020 10.250.7.77:19092 Connection reset, restarting [0]\nSat Jan 11 19:20:33 2020 100.64.1.1:60032 Connection reset, restarting [0]\nSat Jan 11 19:20:39 2020 TCP connection established with [AF_INET]10.250.7.77:35222\nSat Jan 11 19:20:39 2020 10.250.7.77:35222 TCP connection established with [AF_INET]100.64.1.1:64454\nSat Jan 11 19:20:39 2020 10.250.7.77:35222 Connection reset, restarting [0]\nSat Jan 11 19:20:39 2020 100.64.1.1:64454 Connection reset, restarting [0]\nSat Jan 11 19:20:43 2020 TCP connection established with [AF_INET]10.250.7.77:19100\nSat Jan 11 19:20:43 2020 10.250.7.77:19100 TCP connection established with [AF_INET]100.64.1.1:60040\nSat Jan 11 19:20:43 2020 10.250.7.77:19100 Connection reset, restarting [0]\nSat Jan 11 19:20:43 2020 100.64.1.1:60040 Connection reset, restarting [0]\nSat Jan 11 19:20:49 2020 TCP connection established with [AF_INET]10.250.7.77:35238\nSat Jan 11 19:20:49 2020 10.250.7.77:35238 TCP connection established with [AF_INET]100.64.1.1:64470\nSat Jan 11 19:20:49 2020 10.250.7.77:35238 Connection reset, restarting [0]\nSat Jan 11 19:20:49 2020 100.64.1.1:64470 Connection reset, restarting [0]\nSat Jan 11 19:20:53 2020 TCP connection established with [AF_INET]10.250.7.77:19110\nSat Jan 11 19:20:53 2020 10.250.7.77:19110 TCP connection established with [AF_INET]100.64.1.1:60050\nSat Jan 11 19:20:53 2020 10.250.7.77:19110 Connection 
reset, restarting [0]\nSat Jan 11 19:20:53 2020 100.64.1.1:60050 Connection reset, restarting [0]\nSat Jan 11 19:20:59 2020 TCP connection established with [AF_INET]10.250.7.77:35250\nSat Jan 11 19:20:59 2020 10.250.7.77:35250 TCP connection established with [AF_INET]100.64.1.1:64482\nSat Jan 11 19:20:59 2020 10.250.7.77:35250 Connection reset, restarting [0]\nSat Jan 11 19:20:59 2020 100.64.1.1:64482 Connection reset, restarting [0]\nSat Jan 11 19:21:03 2020 TCP connection established with [AF_INET]10.250.7.77:19128\nSat Jan 11 19:21:03 2020 10.250.7.77:19128 TCP connection established with [AF_INET]100.64.1.1:60068\nSat Jan 11 19:21:03 2020 10.250.7.77:19128 Connection reset, restarting [0]\nSat Jan 11 19:21:03 2020 100.64.1.1:60068 Connection reset, restarting [0]\nSat Jan 11 19:21:09 2020 TCP connection established with [AF_INET]10.250.7.77:35260\nSat Jan 11 19:21:09 2020 10.250.7.77:35260 TCP connection established with [AF_INET]100.64.1.1:64492\nSat Jan 11 19:21:09 2020 10.250.7.77:35260 Connection reset, restarting [0]\nSat Jan 11 19:21:09 2020 100.64.1.1:64492 Connection reset, restarting [0]\nSat Jan 11 19:21:13 2020 TCP connection established with [AF_INET]10.250.7.77:19132\nSat Jan 11 19:21:13 2020 10.250.7.77:19132 TCP connection established with [AF_INET]100.64.1.1:60072\nSat Jan 11 19:21:13 2020 10.250.7.77:19132 Connection reset, restarting [0]\nSat Jan 11 19:21:13 2020 100.64.1.1:60072 Connection reset, restarting [0]\nSat Jan 11 19:21:19 2020 TCP connection established with [AF_INET]10.250.7.77:35276\nSat Jan 11 19:21:19 2020 10.250.7.77:35276 TCP connection established with [AF_INET]100.64.1.1:64508\nSat Jan 11 19:21:19 2020 10.250.7.77:35276 Connection reset, restarting [0]\nSat Jan 11 19:21:19 2020 100.64.1.1:64508 Connection reset, restarting [0]\nSat Jan 11 19:21:23 2020 TCP connection established with [AF_INET]10.250.7.77:19142\nSat Jan 11 19:21:23 2020 10.250.7.77:19142 TCP connection established with [AF_INET]100.64.1.1:60082\nSat Jan 11 19:21:23 2020 10.250.7.77:19142 Connection reset, restarting [0]\nSat Jan 11 19:21:23 2020 100.64.1.1:60082 Connection reset, restarting [0]\nSat Jan 11 19:21:29 2020 TCP connection established with [AF_INET]10.250.7.77:35282\nSat Jan 11 19:21:29 2020 10.250.7.77:35282 TCP connection established with [AF_INET]100.64.1.1:64514\nSat Jan 11 19:21:29 2020 10.250.7.77:35282 Connection reset, restarting [0]\nSat Jan 11 19:21:29 2020 100.64.1.1:64514 Connection reset, restarting [0]\nSat Jan 11 19:21:33 2020 TCP connection established with [AF_INET]10.250.7.77:19150\nSat Jan 11 19:21:33 2020 10.250.7.77:19150 TCP connection established with [AF_INET]100.64.1.1:60090\nSat Jan 11 19:21:33 2020 10.250.7.77:19150 Connection reset, restarting [0]\nSat Jan 11 19:21:33 2020 100.64.1.1:60090 Connection reset, restarting [0]\nSat Jan 11 19:21:39 2020 TCP connection established with [AF_INET]10.250.7.77:35290\nSat Jan 11 19:21:39 2020 10.250.7.77:35290 TCP connection established with [AF_INET]100.64.1.1:64522\nSat Jan 11 19:21:39 2020 10.250.7.77:35290 Connection reset, restarting [0]\nSat Jan 11 19:21:39 2020 100.64.1.1:64522 Connection reset, restarting [0]\nSat Jan 11 19:21:43 2020 TCP connection established with [AF_INET]10.250.7.77:19158\nSat Jan 11 19:21:43 2020 10.250.7.77:19158 TCP connection established with [AF_INET]100.64.1.1:60098\nSat Jan 11 19:21:43 2020 10.250.7.77:19158 Connection reset, restarting [0]\nSat Jan 11 19:21:43 2020 100.64.1.1:60098 Connection reset, restarting [0]\nSat Jan 11 19:21:49 2020 TCP connection established 
with [AF_INET]10.250.7.77:35296\nSat Jan 11 19:21:49 2020 10.250.7.77:35296 TCP connection established with [AF_INET]100.64.1.1:64528\nSat Jan 11 19:21:49 2020 10.250.7.77:35296 Connection reset, restarting [0]\nSat Jan 11 19:21:49 2020 100.64.1.1:64528 Connection reset, restarting [0]\nSat Jan 11 19:21:53 2020 TCP connection established with [AF_INET]10.250.7.77:19164\nSat Jan 11 19:21:53 2020 10.250.7.77:19164 TCP connection established with [AF_INET]100.64.1.1:60104\nSat Jan 11 19:21:53 2020 10.250.7.77:19164 Connection reset, restarting [0]\nSat Jan 11 19:21:53 2020 100.64.1.1:60104 Connection reset, restarting [0]\nSat Jan 11 19:21:59 2020 TCP connection established with [AF_INET]10.250.7.77:35308\nSat Jan 11 19:21:59 2020 10.250.7.77:35308 TCP connection established with [AF_INET]100.64.1.1:64540\nSat Jan 11 19:21:59 2020 10.250.7.77:35308 Connection reset, restarting [0]\nSat Jan 11 19:21:59 2020 100.64.1.1:64540 Connection reset, restarting [0]\nSat Jan 11 19:22:03 2020 TCP connection established with [AF_INET]10.250.7.77:19182\nSat Jan 11 19:22:03 2020 10.250.7.77:19182 TCP connection established with [AF_INET]100.64.1.1:60122\nSat Jan 11 19:22:03 2020 10.250.7.77:19182 Connection reset, restarting [0]\nSat Jan 11 19:22:03 2020 100.64.1.1:60122 Connection reset, restarting [0]\nSat Jan 11 19:22:09 2020 TCP connection established with [AF_INET]10.250.7.77:35318\nSat Jan 11 19:22:09 2020 10.250.7.77:35318 TCP connection established with [AF_INET]100.64.1.1:64550\nSat Jan 11 19:22:09 2020 10.250.7.77:35318 Connection reset, restarting [0]\nSat Jan 11 19:22:09 2020 100.64.1.1:64550 Connection reset, restarting [0]\nSat Jan 11 19:22:13 2020 TCP connection established with [AF_INET]10.250.7.77:19190\nSat Jan 11 19:22:13 2020 10.250.7.77:19190 TCP connection established with [AF_INET]100.64.1.1:60130\nSat Jan 11 19:22:13 2020 10.250.7.77:19190 Connection reset, restarting [0]\nSat Jan 11 19:22:13 2020 100.64.1.1:60130 Connection reset, restarting [0]\nSat Jan 11 19:22:19 2020 TCP connection established with [AF_INET]10.250.7.77:35330\nSat Jan 11 19:22:19 2020 10.250.7.77:35330 TCP connection established with [AF_INET]100.64.1.1:64562\nSat Jan 11 19:22:19 2020 10.250.7.77:35330 Connection reset, restarting [0]\nSat Jan 11 19:22:19 2020 100.64.1.1:64562 Connection reset, restarting [0]\nSat Jan 11 19:22:23 2020 TCP connection established with [AF_INET]10.250.7.77:19202\nSat Jan 11 19:22:23 2020 10.250.7.77:19202 TCP connection established with [AF_INET]100.64.1.1:60142\nSat Jan 11 19:22:23 2020 10.250.7.77:19202 Connection reset, restarting [0]\nSat Jan 11 19:22:23 2020 100.64.1.1:60142 Connection reset, restarting [0]\nSat Jan 11 19:22:29 2020 TCP connection established with [AF_INET]10.250.7.77:35340\nSat Jan 11 19:22:29 2020 10.250.7.77:35340 TCP connection established with [AF_INET]100.64.1.1:64572\nSat Jan 11 19:22:29 2020 10.250.7.77:35340 Connection reset, restarting [0]\nSat Jan 11 19:22:29 2020 100.64.1.1:64572 Connection reset, restarting [0]\nSat Jan 11 19:22:33 2020 TCP connection established with [AF_INET]10.250.7.77:19208\nSat Jan 11 19:22:33 2020 10.250.7.77:19208 TCP connection established with [AF_INET]100.64.1.1:60148\nSat Jan 11 19:22:33 2020 10.250.7.77:19208 Connection reset, restarting [0]\nSat Jan 11 19:22:33 2020 100.64.1.1:60148 Connection reset, restarting [0]\nSat Jan 11 19:22:39 2020 TCP connection established with [AF_INET]10.250.7.77:35348\nSat Jan 11 19:22:39 2020 10.250.7.77:35348 TCP connection established with [AF_INET]100.64.1.1:64580\nSat Jan 11 19:22:39 
2020 10.250.7.77:35348 Connection reset, restarting [0]\nSat Jan 11 19:22:39 2020 100.64.1.1:64580 Connection reset, restarting [0]\nSat Jan 11 19:22:43 2020 TCP connection established with [AF_INET]10.250.7.77:19216\nSat Jan 11 19:22:43 2020 10.250.7.77:19216 TCP connection established with [AF_INET]100.64.1.1:60156\nSat Jan 11 19:22:43 2020 10.250.7.77:19216 Connection reset, restarting [0]\nSat Jan 11 19:22:43 2020 100.64.1.1:60156 Connection reset, restarting [0]\nSat Jan 11 19:22:49 2020 TCP connection established with [AF_INET]10.250.7.77:35354\nSat Jan 11 19:22:49 2020 10.250.7.77:35354 TCP connection established with [AF_INET]100.64.1.1:64586\nSat Jan 11 19:22:49 2020 10.250.7.77:35354 Connection reset, restarting [0]\nSat Jan 11 19:22:49 2020 100.64.1.1:64586 Connection reset, restarting [0]\nSat Jan 11 19:22:53 2020 TCP connection established with [AF_INET]10.250.7.77:19222\nSat Jan 11 19:22:53 2020 10.250.7.77:19222 TCP connection established with [AF_INET]100.64.1.1:60162\nSat Jan 11 19:22:53 2020 10.250.7.77:19222 Connection reset, restarting [0]\nSat Jan 11 19:22:53 2020 100.64.1.1:60162 Connection reset, restarting [0]\nSat Jan 11 19:22:59 2020 TCP connection established with [AF_INET]10.250.7.77:35366\nSat Jan 11 19:22:59 2020 10.250.7.77:35366 TCP connection established with [AF_INET]100.64.1.1:64598\nSat Jan 11 19:22:59 2020 10.250.7.77:35366 Connection reset, restarting [0]\nSat Jan 11 19:22:59 2020 100.64.1.1:64598 Connection reset, restarting [0]\nSat Jan 11 19:23:03 2020 TCP connection established with [AF_INET]10.250.7.77:19240\nSat Jan 11 19:23:03 2020 10.250.7.77:19240 TCP connection established with [AF_INET]100.64.1.1:60180\nSat Jan 11 19:23:03 2020 10.250.7.77:19240 Connection reset, restarting [0]\nSat Jan 11 19:23:03 2020 100.64.1.1:60180 Connection reset, restarting [0]\nSat Jan 11 19:23:09 2020 TCP connection established with [AF_INET]10.250.7.77:35376\nSat Jan 11 19:23:09 2020 10.250.7.77:35376 TCP connection established with [AF_INET]100.64.1.1:64608\nSat Jan 11 19:23:09 2020 10.250.7.77:35376 Connection reset, restarting [0]\nSat Jan 11 19:23:09 2020 100.64.1.1:64608 Connection reset, restarting [0]\nSat Jan 11 19:23:13 2020 TCP connection established with [AF_INET]10.250.7.77:19244\nSat Jan 11 19:23:13 2020 10.250.7.77:19244 TCP connection established with [AF_INET]100.64.1.1:60184\nSat Jan 11 19:23:13 2020 10.250.7.77:19244 Connection reset, restarting [0]\nSat Jan 11 19:23:13 2020 100.64.1.1:60184 Connection reset, restarting [0]\nSat Jan 11 19:23:19 2020 TCP connection established with [AF_INET]10.250.7.77:35388\nSat Jan 11 19:23:19 2020 10.250.7.77:35388 TCP connection established with [AF_INET]100.64.1.1:64620\nSat Jan 11 19:23:19 2020 10.250.7.77:35388 Connection reset, restarting [0]\nSat Jan 11 19:23:19 2020 100.64.1.1:64620 Connection reset, restarting [0]\nSat Jan 11 19:23:23 2020 TCP connection established with [AF_INET]10.250.7.77:19260\nSat Jan 11 19:23:23 2020 10.250.7.77:19260 TCP connection established with [AF_INET]100.64.1.1:60200\nSat Jan 11 19:23:23 2020 10.250.7.77:19260 Connection reset, restarting [0]\nSat Jan 11 19:23:23 2020 100.64.1.1:60200 Connection reset, restarting [0]\nSat Jan 11 19:23:29 2020 TCP connection established with [AF_INET]10.250.7.77:35394\nSat Jan 11 19:23:29 2020 10.250.7.77:35394 TCP connection established with [AF_INET]100.64.1.1:64626\nSat Jan 11 19:23:29 2020 10.250.7.77:35394 Connection reset, restarting [0]\nSat Jan 11 19:23:29 2020 100.64.1.1:64626 Connection reset, restarting [0]\nSat Jan 11 19:23:33 
2020 TCP connection established with [AF_INET]10.250.7.77:19266\nSat Jan 11 19:23:33 2020 10.250.7.77:19266 TCP connection established with [AF_INET]100.64.1.1:60206\nSat Jan 11 19:23:33 2020 10.250.7.77:19266 Connection reset, restarting [0]\nSat Jan 11 19:23:33 2020 100.64.1.1:60206 Connection reset, restarting [0]\nSat Jan 11 19:23:39 2020 TCP connection established with [AF_INET]10.250.7.77:35402\nSat Jan 11 19:23:39 2020 10.250.7.77:35402 TCP connection established with [AF_INET]100.64.1.1:64634\nSat Jan 11 19:23:39 2020 10.250.7.77:35402 Connection reset, restarting [0]\nSat Jan 11 19:23:39 2020 100.64.1.1:64634 Connection reset, restarting [0]\nSat Jan 11 19:23:43 2020 TCP connection established with [AF_INET]10.250.7.77:19274\nSat Jan 11 19:23:43 2020 10.250.7.77:19274 TCP connection established with [AF_INET]100.64.1.1:60214\nSat Jan 11 19:23:43 2020 10.250.7.77:19274 Connection reset, restarting [0]\nSat Jan 11 19:23:43 2020 100.64.1.1:60214 Connection reset, restarting [0]\nSat Jan 11 19:23:49 2020 TCP connection established with [AF_INET]10.250.7.77:35412\nSat Jan 11 19:23:49 2020 10.250.7.77:35412 TCP connection established with [AF_INET]100.64.1.1:64644\nSat Jan 11 19:23:49 2020 10.250.7.77:35412 Connection reset, restarting [0]\nSat Jan 11 19:23:49 2020 100.64.1.1:64644 Connection reset, restarting [0]\nSat Jan 11 19:23:53 2020 TCP connection established with [AF_INET]10.250.7.77:19280\nSat Jan 11 19:23:53 2020 10.250.7.77:19280 TCP connection established with [AF_INET]100.64.1.1:60220\nSat Jan 11 19:23:53 2020 10.250.7.77:19280 Connection reset, restarting [0]\nSat Jan 11 19:23:53 2020 100.64.1.1:60220 Connection reset, restarting [0]\nSat Jan 11 19:23:59 2020 TCP connection established with [AF_INET]100.64.1.1:64654\nSat Jan 11 19:23:59 2020 100.64.1.1:64654 TCP connection established with [AF_INET]10.250.7.77:35422\nSat Jan 11 19:23:59 2020 100.64.1.1:64654 Connection reset, restarting [0]\nSat Jan 11 19:23:59 2020 10.250.7.77:35422 Connection reset, restarting [0]\nSat Jan 11 19:24:03 2020 TCP connection established with [AF_INET]10.250.7.77:19298\nSat Jan 11 19:24:03 2020 10.250.7.77:19298 TCP connection established with [AF_INET]100.64.1.1:60238\nSat Jan 11 19:24:03 2020 10.250.7.77:19298 Connection reset, restarting [0]\nSat Jan 11 19:24:03 2020 100.64.1.1:60238 Connection reset, restarting [0]\nSat Jan 11 19:24:09 2020 TCP connection established with [AF_INET]10.250.7.77:35434\nSat Jan 11 19:24:09 2020 10.250.7.77:35434 TCP connection established with [AF_INET]100.64.1.1:64666\nSat Jan 11 19:24:09 2020 10.250.7.77:35434 Connection reset, restarting [0]\nSat Jan 11 19:24:09 2020 100.64.1.1:64666 Connection reset, restarting [0]\nSat Jan 11 19:24:13 2020 TCP connection established with [AF_INET]10.250.7.77:19304\nSat Jan 11 19:24:13 2020 10.250.7.77:19304 TCP connection established with [AF_INET]100.64.1.1:60244\nSat Jan 11 19:24:13 2020 10.250.7.77:19304 Connection reset, restarting [0]\nSat Jan 11 19:24:13 2020 100.64.1.1:60244 Connection reset, restarting [0]\nSat Jan 11 19:24:19 2020 TCP connection established with [AF_INET]10.250.7.77:35444\nSat Jan 11 19:24:19 2020 10.250.7.77:35444 TCP connection established with [AF_INET]100.64.1.1:64676\nSat Jan 11 19:24:19 2020 10.250.7.77:35444 Connection reset, restarting [0]\nSat Jan 11 19:24:19 2020 100.64.1.1:64676 Connection reset, restarting [0]\nSat Jan 11 19:24:23 2020 TCP connection established with [AF_INET]10.250.7.77:19314\nSat Jan 11 19:24:23 2020 10.250.7.77:19314 TCP connection established with 
[AF_INET]100.64.1.1:60254\nSat Jan 11 19:24:23 2020 10.250.7.77:19314 Connection reset, restarting [0]\nSat Jan 11 19:24:23 2020 100.64.1.1:60254 Connection reset, restarting [0]\nSat Jan 11 19:24:29 2020 TCP connection established with [AF_INET]10.250.7.77:35452\nSat Jan 11 19:24:29 2020 10.250.7.77:35452 TCP connection established with [AF_INET]100.64.1.1:64684\nSat Jan 11 19:24:29 2020 10.250.7.77:35452 Connection reset, restarting [0]\nSat Jan 11 19:24:29 2020 100.64.1.1:64684 Connection reset, restarting [0]\nSat Jan 11 19:24:33 2020 TCP connection established with [AF_INET]10.250.7.77:19320\nSat Jan 11 19:24:33 2020 10.250.7.77:19320 TCP connection established with [AF_INET]100.64.1.1:60260\nSat Jan 11 19:24:33 2020 10.250.7.77:19320 Connection reset, restarting [0]\nSat Jan 11 19:24:33 2020 100.64.1.1:60260 Connection reset, restarting [0]\nSat Jan 11 19:24:39 2020 TCP connection established with [AF_INET]10.250.7.77:35456\nSat Jan 11 19:24:39 2020 10.250.7.77:35456 TCP connection established with [AF_INET]100.64.1.1:64688\nSat Jan 11 19:24:39 2020 10.250.7.77:35456 Connection reset, restarting [0]\nSat Jan 11 19:24:39 2020 100.64.1.1:64688 Connection reset, restarting [0]\nSat Jan 11 19:24:43 2020 TCP connection established with [AF_INET]10.250.7.77:19332\nSat Jan 11 19:24:43 2020 10.250.7.77:19332 TCP connection established with [AF_INET]100.64.1.1:60272\nSat Jan 11 19:24:43 2020 10.250.7.77:19332 Connection reset, restarting [0]\nSat Jan 11 19:24:43 2020 100.64.1.1:60272 Connection reset, restarting [0]\nSat Jan 11 19:24:49 2020 TCP connection established with [AF_INET]10.250.7.77:35466\nSat Jan 11 19:24:49 2020 10.250.7.77:35466 TCP connection established with [AF_INET]100.64.1.1:64698\nSat Jan 11 19:24:49 2020 10.250.7.77:35466 Connection reset, restarting [0]\nSat Jan 11 19:24:49 2020 100.64.1.1:64698 Connection reset, restarting [0]\nSat Jan 11 19:24:53 2020 TCP connection established with [AF_INET]10.250.7.77:19338\nSat Jan 11 19:24:53 2020 10.250.7.77:19338 TCP connection established with [AF_INET]100.64.1.1:60278\nSat Jan 11 19:24:53 2020 10.250.7.77:19338 Connection reset, restarting [0]\nSat Jan 11 19:24:53 2020 100.64.1.1:60278 Connection reset, restarting [0]\nSat Jan 11 19:24:59 2020 TCP connection established with [AF_INET]10.250.7.77:35478\nSat Jan 11 19:24:59 2020 10.250.7.77:35478 TCP connection established with [AF_INET]100.64.1.1:64710\nSat Jan 11 19:24:59 2020 10.250.7.77:35478 Connection reset, restarting [0]\nSat Jan 11 19:24:59 2020 100.64.1.1:64710 Connection reset, restarting [0]\nSat Jan 11 19:25:03 2020 TCP connection established with [AF_INET]10.250.7.77:19356\nSat Jan 11 19:25:03 2020 10.250.7.77:19356 TCP connection established with [AF_INET]100.64.1.1:60296\nSat Jan 11 19:25:03 2020 10.250.7.77:19356 Connection reset, restarting [0]\nSat Jan 11 19:25:03 2020 100.64.1.1:60296 Connection reset, restarting [0]\nSat Jan 11 19:25:09 2020 TCP connection established with [AF_INET]10.250.7.77:35492\nSat Jan 11 19:25:09 2020 10.250.7.77:35492 TCP connection established with [AF_INET]100.64.1.1:64724\nSat Jan 11 19:25:09 2020 10.250.7.77:35492 Connection reset, restarting [0]\nSat Jan 11 19:25:09 2020 100.64.1.1:64724 Connection reset, restarting [0]\nSat Jan 11 19:25:13 2020 TCP connection established with [AF_INET]10.250.7.77:19362\nSat Jan 11 19:25:13 2020 10.250.7.77:19362 TCP connection established with [AF_INET]100.64.1.1:60302\nSat Jan 11 19:25:13 2020 10.250.7.77:19362 Connection reset, restarting [0]\nSat Jan 11 19:25:13 2020 100.64.1.1:60302 
Connection reset, restarting [0]\nSat Jan 11 19:25:19 2020 TCP connection established with [AF_INET]10.250.7.77:35502\nSat Jan 11 19:25:19 2020 10.250.7.77:35502 TCP connection established with [AF_INET]100.64.1.1:64734\nSat Jan 11 19:25:19 2020 10.250.7.77:35502 Connection reset, restarting [0]\nSat Jan 11 19:25:19 2020 100.64.1.1:64734 Connection reset, restarting [0]\nSat Jan 11 19:25:23 2020 TCP connection established with [AF_INET]10.250.7.77:19372\nSat Jan 11 19:25:23 2020 10.250.7.77:19372 TCP connection established with [AF_INET]100.64.1.1:60312\nSat Jan 11 19:25:23 2020 10.250.7.77:19372 Connection reset, restarting [0]\nSat Jan 11 19:25:23 2020 100.64.1.1:60312 Connection reset, restarting [0]\nSat Jan 11 19:25:29 2020 TCP connection established with [AF_INET]10.250.7.77:35510\nSat Jan 11 19:25:29 2020 10.250.7.77:35510 TCP connection established with [AF_INET]100.64.1.1:64742\nSat Jan 11 19:25:29 2020 10.250.7.77:35510 Connection reset, restarting [0]\nSat Jan 11 19:25:29 2020 100.64.1.1:64742 Connection reset, restarting [0]\nSat Jan 11 19:25:33 2020 TCP connection established with [AF_INET]10.250.7.77:19378\nSat Jan 11 19:25:33 2020 10.250.7.77:19378 TCP connection established with [AF_INET]100.64.1.1:60318\nSat Jan 11 19:25:33 2020 10.250.7.77:19378 Connection reset, restarting [0]\nSat Jan 11 19:25:33 2020 100.64.1.1:60318 Connection reset, restarting [0]\nSat Jan 11 19:25:39 2020 TCP connection established with [AF_INET]10.250.7.77:35514\nSat Jan 11 19:25:39 2020 10.250.7.77:35514 TCP connection established with [AF_INET]100.64.1.1:64746\nSat Jan 11 19:25:39 2020 10.250.7.77:35514 Connection reset, restarting [0]\nSat Jan 11 19:25:39 2020 100.64.1.1:64746 Connection reset, restarting [0]\nSat Jan 11 19:25:43 2020 TCP connection established with [AF_INET]10.250.7.77:19386\nSat Jan 11 19:25:43 2020 10.250.7.77:19386 TCP connection established with [AF_INET]100.64.1.1:60326\nSat Jan 11 19:25:43 2020 10.250.7.77:19386 Connection reset, restarting [0]\nSat Jan 11 19:25:43 2020 100.64.1.1:60326 Connection reset, restarting [0]\nSat Jan 11 19:25:49 2020 TCP connection established with [AF_INET]10.250.7.77:35524\nSat Jan 11 19:25:49 2020 10.250.7.77:35524 TCP connection established with [AF_INET]100.64.1.1:64756\nSat Jan 11 19:25:49 2020 10.250.7.77:35524 Connection reset, restarting [0]\nSat Jan 11 19:25:49 2020 100.64.1.1:64756 Connection reset, restarting [0]\nSat Jan 11 19:25:53 2020 TCP connection established with [AF_INET]10.250.7.77:19430\nSat Jan 11 19:25:53 2020 10.250.7.77:19430 TCP connection established with [AF_INET]100.64.1.1:60370\nSat Jan 11 19:25:53 2020 100.64.1.1:60370 Connection reset, restarting [0]\nSat Jan 11 19:25:53 2020 10.250.7.77:19430 Connection reset, restarting [0]\nSat Jan 11 19:25:59 2020 TCP connection established with [AF_INET]10.250.7.77:35532\nSat Jan 11 19:25:59 2020 10.250.7.77:35532 TCP connection established with [AF_INET]100.64.1.1:64764\nSat Jan 11 19:25:59 2020 10.250.7.77:35532 Connection reset, restarting [0]\nSat Jan 11 19:25:59 2020 100.64.1.1:64764 Connection reset, restarting [0]\nSat Jan 11 19:26:03 2020 TCP connection established with [AF_INET]10.250.7.77:19448\nSat Jan 11 19:26:03 2020 10.250.7.77:19448 TCP connection established with [AF_INET]100.64.1.1:60388\nSat Jan 11 19:26:03 2020 10.250.7.77:19448 Connection reset, restarting [0]\nSat Jan 11 19:26:03 2020 100.64.1.1:60388 Connection reset, restarting [0]\nSat Jan 11 19:26:09 2020 TCP connection established with [AF_INET]10.250.7.77:35548\nSat Jan 11 19:26:09 2020 
10.250.7.77:35548 TCP connection established with [AF_INET]100.64.1.1:64780\nSat Jan 11 19:26:09 2020 10.250.7.77:35548 Connection reset, restarting [0]\nSat Jan 11 19:26:09 2020 100.64.1.1:64780 Connection reset, restarting [0]\nSat Jan 11 19:26:13 2020 TCP connection established with [AF_INET]10.250.7.77:19456\nSat Jan 11 19:26:13 2020 10.250.7.77:19456 TCP connection established with [AF_INET]100.64.1.1:60396\nSat Jan 11 19:26:13 2020 10.250.7.77:19456 Connection reset, restarting [0]\nSat Jan 11 19:26:13 2020 100.64.1.1:60396 Connection reset, restarting [0]\nSat Jan 11 19:26:19 2020 TCP connection established with [AF_INET]10.250.7.77:35560\nSat Jan 11 19:26:19 2020 10.250.7.77:35560 TCP connection established with [AF_INET]100.64.1.1:64792\nSat Jan 11 19:26:19 2020 10.250.7.77:35560 Connection reset, restarting [0]\nSat Jan 11 19:26:19 2020 100.64.1.1:64792 Connection reset, restarting [0]\nSat Jan 11 19:26:23 2020 TCP connection established with [AF_INET]10.250.7.77:19466\nSat Jan 11 19:26:23 2020 10.250.7.77:19466 TCP connection established with [AF_INET]100.64.1.1:60406\nSat Jan 11 19:26:23 2020 10.250.7.77:19466 Connection reset, restarting [0]\nSat Jan 11 19:26:23 2020 100.64.1.1:60406 Connection reset, restarting [0]\nSat Jan 11 19:26:29 2020 TCP connection established with [AF_INET]10.250.7.77:35568\nSat Jan 11 19:26:29 2020 10.250.7.77:35568 TCP connection established with [AF_INET]100.64.1.1:64800\nSat Jan 11 19:26:29 2020 10.250.7.77:35568 Connection reset, restarting [0]\nSat Jan 11 19:26:29 2020 100.64.1.1:64800 Connection reset, restarting [0]\nSat Jan 11 19:26:33 2020 TCP connection established with [AF_INET]10.250.7.77:19472\nSat Jan 11 19:26:33 2020 10.250.7.77:19472 TCP connection established with [AF_INET]100.64.1.1:60412\nSat Jan 11 19:26:33 2020 10.250.7.77:19472 Connection reset, restarting [0]\nSat Jan 11 19:26:33 2020 100.64.1.1:60412 Connection reset, restarting [0]\nSat Jan 11 19:26:39 2020 TCP connection established with [AF_INET]10.250.7.77:35572\nSat Jan 11 19:26:39 2020 10.250.7.77:35572 TCP connection established with [AF_INET]100.64.1.1:64804\nSat Jan 11 19:26:39 2020 10.250.7.77:35572 Connection reset, restarting [0]\nSat Jan 11 19:26:39 2020 100.64.1.1:64804 Connection reset, restarting [0]\nSat Jan 11 19:26:43 2020 TCP connection established with [AF_INET]10.250.7.77:19480\nSat Jan 11 19:26:43 2020 10.250.7.77:19480 TCP connection established with [AF_INET]100.64.1.1:60420\nSat Jan 11 19:26:43 2020 10.250.7.77:19480 Connection reset, restarting [0]\nSat Jan 11 19:26:43 2020 100.64.1.1:60420 Connection reset, restarting [0]\nSat Jan 11 19:26:49 2020 TCP connection established with [AF_INET]10.250.7.77:35582\nSat Jan 11 19:26:49 2020 10.250.7.77:35582 TCP connection established with [AF_INET]100.64.1.1:64814\nSat Jan 11 19:26:49 2020 10.250.7.77:35582 Connection reset, restarting [0]\nSat Jan 11 19:26:49 2020 100.64.1.1:64814 Connection reset, restarting [0]\nSat Jan 11 19:26:53 2020 TCP connection established with [AF_INET]10.250.7.77:19486\nSat Jan 11 19:26:53 2020 10.250.7.77:19486 TCP connection established with [AF_INET]100.64.1.1:60426\nSat Jan 11 19:26:53 2020 10.250.7.77:19486 Connection reset, restarting [0]\nSat Jan 11 19:26:53 2020 100.64.1.1:60426 Connection reset, restarting [0]\nSat Jan 11 19:26:59 2020 TCP connection established with [AF_INET]10.250.7.77:35590\nSat Jan 11 19:26:59 2020 10.250.7.77:35590 TCP connection established with [AF_INET]100.64.1.1:64822\nSat Jan 11 19:26:59 2020 10.250.7.77:35590 Connection reset, restarting 
[0]\nSat Jan 11 19:26:59 2020 100.64.1.1:64822 Connection reset, restarting [0]\nSat Jan 11 19:27:03 2020 TCP connection established with [AF_INET]10.250.7.77:19508\nSat Jan 11 19:27:03 2020 10.250.7.77:19508 TCP connection established with [AF_INET]100.64.1.1:60448\nSat Jan 11 19:27:03 2020 10.250.7.77:19508 Connection reset, restarting [0]\nSat Jan 11 19:27:03 2020 100.64.1.1:60448 Connection reset, restarting [0]\nSat Jan 11 19:27:09 2020 TCP connection established with [AF_INET]10.250.7.77:35608\nSat Jan 11 19:27:09 2020 10.250.7.77:35608 TCP connection established with [AF_INET]100.64.1.1:64840\nSat Jan 11 19:27:09 2020 10.250.7.77:35608 Connection reset, restarting [0]\nSat Jan 11 19:27:09 2020 100.64.1.1:64840 Connection reset, restarting [0]\nSat Jan 11 19:27:13 2020 TCP connection established with [AF_INET]10.250.7.77:19516\nSat Jan 11 19:27:13 2020 10.250.7.77:19516 TCP connection established with [AF_INET]100.64.1.1:60456\nSat Jan 11 19:27:13 2020 10.250.7.77:19516 Connection reset, restarting [0]\nSat Jan 11 19:27:13 2020 100.64.1.1:60456 Connection reset, restarting [0]\nSat Jan 11 19:27:19 2020 TCP connection established with [AF_INET]10.250.7.77:35616\nSat Jan 11 19:27:19 2020 10.250.7.77:35616 TCP connection established with [AF_INET]100.64.1.1:64848\nSat Jan 11 19:27:19 2020 10.250.7.77:35616 Connection reset, restarting [0]\nSat Jan 11 19:27:19 2020 100.64.1.1:64848 Connection reset, restarting [0]\nSat Jan 11 19:27:23 2020 TCP connection established with [AF_INET]10.250.7.77:19526\nSat Jan 11 19:27:23 2020 10.250.7.77:19526 TCP connection established with [AF_INET]100.64.1.1:60466\nSat Jan 11 19:27:23 2020 10.250.7.77:19526 Connection reset, restarting [0]\nSat Jan 11 19:27:23 2020 100.64.1.1:60466 Connection reset, restarting [0]\nSat Jan 11 19:27:29 2020 TCP connection established with [AF_INET]10.250.7.77:35628\nSat Jan 11 19:27:29 2020 10.250.7.77:35628 TCP connection established with [AF_INET]100.64.1.1:64860\nSat Jan 11 19:27:29 2020 10.250.7.77:35628 Connection reset, restarting [0]\nSat Jan 11 19:27:29 2020 100.64.1.1:64860 Connection reset, restarting [0]\nSat Jan 11 19:27:33 2020 TCP connection established with [AF_INET]100.64.1.1:60472\nSat Jan 11 19:27:33 2020 100.64.1.1:60472 Connection reset, restarting [0]\nSat Jan 11 19:27:33 2020 TCP connection established with [AF_INET]10.250.7.77:19532\nSat Jan 11 19:27:33 2020 10.250.7.77:19532 Connection reset, restarting [0]\nSat Jan 11 19:27:39 2020 TCP connection established with [AF_INET]10.250.7.77:35632\nSat Jan 11 19:27:39 2020 10.250.7.77:35632 TCP connection established with [AF_INET]100.64.1.1:64864\nSat Jan 11 19:27:39 2020 10.250.7.77:35632 Connection reset, restarting [0]\nSat Jan 11 19:27:39 2020 100.64.1.1:64864 Connection reset, restarting [0]\nSat Jan 11 19:27:43 2020 TCP connection established with [AF_INET]10.250.7.77:19540\nSat Jan 11 19:27:43 2020 10.250.7.77:19540 TCP connection established with [AF_INET]100.64.1.1:60480\nSat Jan 11 19:27:43 2020 10.250.7.77:19540 Connection reset, restarting [0]\nSat Jan 11 19:27:43 2020 100.64.1.1:60480 Connection reset, restarting [0]\nSat Jan 11 19:27:49 2020 TCP connection established with [AF_INET]10.250.7.77:35642\nSat Jan 11 19:27:49 2020 10.250.7.77:35642 TCP connection established with [AF_INET]100.64.1.1:64874\nSat Jan 11 19:27:49 2020 10.250.7.77:35642 Connection reset, restarting [0]\nSat Jan 11 19:27:49 2020 100.64.1.1:64874 Connection reset, restarting [0]\nSat Jan 11 19:27:53 2020 TCP connection established with [AF_INET]10.250.7.77:19546\nSat 
Jan 11 19:27:53 2020 10.250.7.77:19546 TCP connection established with [AF_INET]100.64.1.1:60486\nSat Jan 11 19:27:53 2020 10.250.7.77:19546 Connection reset, restarting [0]\nSat Jan 11 19:27:53 2020 100.64.1.1:60486 Connection reset, restarting [0]\nSat Jan 11 19:27:59 2020 TCP connection established with [AF_INET]10.250.7.77:35650\nSat Jan 11 19:27:59 2020 10.250.7.77:35650 TCP connection established with [AF_INET]100.64.1.1:64882\nSat Jan 11 19:27:59 2020 10.250.7.77:35650 Connection reset, restarting [0]\nSat Jan 11 19:27:59 2020 100.64.1.1:64882 Connection reset, restarting [0]\nSat Jan 11 19:28:03 2020 TCP connection established with [AF_INET]10.250.7.77:19572\nSat Jan 11 19:28:03 2020 10.250.7.77:19572 TCP connection established with [AF_INET]100.64.1.1:60512\nSat Jan 11 19:28:03 2020 10.250.7.77:19572 Connection reset, restarting [0]\nSat Jan 11 19:28:03 2020 100.64.1.1:60512 Connection reset, restarting [0]\nSat Jan 11 19:28:09 2020 TCP connection established with [AF_INET]10.250.7.77:35680\nSat Jan 11 19:28:09 2020 10.250.7.77:35680 TCP connection established with [AF_INET]100.64.1.1:64912\nSat Jan 11 19:28:09 2020 10.250.7.77:35680 Connection reset, restarting [0]\nSat Jan 11 19:28:09 2020 100.64.1.1:64912 Connection reset, restarting [0]\nSat Jan 11 19:28:13 2020 TCP connection established with [AF_INET]10.250.7.77:19576\nSat Jan 11 19:28:13 2020 10.250.7.77:19576 Connection reset, restarting [0]\nSat Jan 11 19:28:13 2020 TCP connection established with [AF_INET]100.64.1.1:60516\nSat Jan 11 19:28:13 2020 100.64.1.1:60516 Connection reset, restarting [0]\nSat Jan 11 19:28:19 2020 TCP connection established with [AF_INET]10.250.7.77:35688\nSat Jan 11 19:28:19 2020 10.250.7.77:35688 TCP connection established with [AF_INET]100.64.1.1:64920\nSat Jan 11 19:28:19 2020 10.250.7.77:35688 Connection reset, restarting [0]\nSat Jan 11 19:28:19 2020 100.64.1.1:64920 Connection reset, restarting [0]\nSat Jan 11 19:28:23 2020 TCP connection established with [AF_INET]10.250.7.77:19590\nSat Jan 11 19:28:23 2020 10.250.7.77:19590 TCP connection established with [AF_INET]100.64.1.1:60530\nSat Jan 11 19:28:23 2020 10.250.7.77:19590 Connection reset, restarting [0]\nSat Jan 11 19:28:23 2020 100.64.1.1:60530 Connection reset, restarting [0]\nSat Jan 11 19:28:29 2020 TCP connection established with [AF_INET]10.250.7.77:35696\nSat Jan 11 19:28:29 2020 10.250.7.77:35696 TCP connection established with [AF_INET]100.64.1.1:64928\nSat Jan 11 19:28:29 2020 10.250.7.77:35696 Connection reset, restarting [0]\nSat Jan 11 19:28:29 2020 100.64.1.1:64928 Connection reset, restarting [0]\nSat Jan 11 19:28:33 2020 TCP connection established with [AF_INET]10.250.7.77:19596\nSat Jan 11 19:28:33 2020 10.250.7.77:19596 TCP connection established with [AF_INET]100.64.1.1:60536\nSat Jan 11 19:28:33 2020 10.250.7.77:19596 Connection reset, restarting [0]\nSat Jan 11 19:28:33 2020 100.64.1.1:60536 Connection reset, restarting [0]\nSat Jan 11 19:28:39 2020 TCP connection established with [AF_INET]10.250.7.77:35700\nSat Jan 11 19:28:39 2020 10.250.7.77:35700 TCP connection established with [AF_INET]100.64.1.1:64932\nSat Jan 11 19:28:39 2020 10.250.7.77:35700 Connection reset, restarting [0]\nSat Jan 11 19:28:39 2020 100.64.1.1:64932 Connection reset, restarting [0]\nSat Jan 11 19:28:43 2020 TCP connection established with [AF_INET]10.250.7.77:19604\nSat Jan 11 19:28:43 2020 10.250.7.77:19604 TCP connection established with [AF_INET]100.64.1.1:60544\nSat Jan 11 19:28:43 2020 10.250.7.77:19604 Connection reset, restarting 
[0]\nSat Jan 11 19:28:43 2020 100.64.1.1:60544 Connection reset, restarting [0]\nSat Jan 11 19:28:49 2020 TCP connection established with [AF_INET]10.250.7.77:35748\nSat Jan 11 19:28:49 2020 10.250.7.77:35748 TCP connection established with [AF_INET]100.64.1.1:64980\nSat Jan 11 19:28:49 2020 10.250.7.77:35748 Connection reset, restarting [0]\nSat Jan 11 19:28:49 2020 100.64.1.1:64980 Connection reset, restarting [0]\nSat Jan 11 19:28:53 2020 TCP connection established with [AF_INET]10.250.7.77:19612\nSat Jan 11 19:28:53 2020 10.250.7.77:19612 TCP connection established with [AF_INET]100.64.1.1:60552\nSat Jan 11 19:28:53 2020 10.250.7.77:19612 Connection reset, restarting [0]\nSat Jan 11 19:28:53 2020 100.64.1.1:60552 Connection reset, restarting [0]\nSat Jan 11 19:28:59 2020 TCP connection established with [AF_INET]10.250.7.77:35758\nSat Jan 11 19:28:59 2020 10.250.7.77:35758 TCP connection established with [AF_INET]100.64.1.1:64990\nSat Jan 11 19:28:59 2020 10.250.7.77:35758 Connection reset, restarting [0]\nSat Jan 11 19:28:59 2020 100.64.1.1:64990 Connection reset, restarting [0]\nSat Jan 11 19:29:03 2020 TCP connection established with [AF_INET]10.250.7.77:19634\nSat Jan 11 19:29:03 2020 10.250.7.77:19634 TCP connection established with [AF_INET]100.64.1.1:60574\nSat Jan 11 19:29:03 2020 10.250.7.77:19634 Connection reset, restarting [0]\nSat Jan 11 19:29:03 2020 100.64.1.1:60574 Connection reset, restarting [0]\nSat Jan 11 19:29:09 2020 TCP connection established with [AF_INET]10.250.7.77:35778\nSat Jan 11 19:29:09 2020 10.250.7.77:35778 Connection reset, restarting [0]\nSat Jan 11 19:29:09 2020 TCP connection established with [AF_INET]100.64.1.1:1034\nSat Jan 11 19:29:09 2020 100.64.1.1:1034 Connection reset, restarting [0]\nSat Jan 11 19:29:13 2020 TCP connection established with [AF_INET]10.250.7.77:19638\nSat Jan 11 19:29:13 2020 10.250.7.77:19638 TCP connection established with [AF_INET]100.64.1.1:60578\nSat Jan 11 19:29:13 2020 10.250.7.77:19638 Connection reset, restarting [0]\nSat Jan 11 19:29:13 2020 100.64.1.1:60578 Connection reset, restarting [0]\nSat Jan 11 19:29:19 2020 TCP connection established with [AF_INET]10.250.7.77:35786\nSat Jan 11 19:29:19 2020 10.250.7.77:35786 TCP connection established with [AF_INET]100.64.1.1:1042\nSat Jan 11 19:29:19 2020 10.250.7.77:35786 Connection reset, restarting [0]\nSat Jan 11 19:29:19 2020 100.64.1.1:1042 Connection reset, restarting [0]\nSat Jan 11 19:29:23 2020 TCP connection established with [AF_INET]10.250.7.77:19650\nSat Jan 11 19:29:23 2020 10.250.7.77:19650 TCP connection established with [AF_INET]100.64.1.1:60590\nSat Jan 11 19:29:23 2020 10.250.7.77:19650 Connection reset, restarting [0]\nSat Jan 11 19:29:23 2020 100.64.1.1:60590 Connection reset, restarting [0]\nSat Jan 11 19:29:29 2020 TCP connection established with [AF_INET]10.250.7.77:35794\nSat Jan 11 19:29:29 2020 10.250.7.77:35794 TCP connection established with [AF_INET]100.64.1.1:1050\nSat Jan 11 19:29:29 2020 10.250.7.77:35794 Connection reset, restarting [0]\nSat Jan 11 19:29:29 2020 100.64.1.1:1050 Connection reset, restarting [0]\nSat Jan 11 19:29:33 2020 TCP connection established with [AF_INET]10.250.7.77:19656\nSat Jan 11 19:29:33 2020 10.250.7.77:19656 TCP connection established with [AF_INET]100.64.1.1:60596\nSat Jan 11 19:29:33 2020 10.250.7.77:19656 Connection reset, restarting [0]\nSat Jan 11 19:29:33 2020 100.64.1.1:60596 Connection reset, restarting [0]\nSat Jan 11 19:29:39 2020 TCP connection established with [AF_INET]10.250.7.77:35798\nSat Jan 11 
19:29:39 2020 10.250.7.77:35798 TCP connection established with [AF_INET]100.64.1.1:1054\nSat Jan 11 19:29:39 2020 10.250.7.77:35798 Connection reset, restarting [0]\nSat Jan 11 19:29:39 2020 100.64.1.1:1054 Connection reset, restarting [0]\nSat Jan 11 19:29:43 2020 TCP connection established with [AF_INET]10.250.7.77:19668\nSat Jan 11 19:29:43 2020 10.250.7.77:19668 TCP connection established with [AF_INET]100.64.1.1:60608\nSat Jan 11 19:29:43 2020 10.250.7.77:19668 Connection reset, restarting [0]\nSat Jan 11 19:29:43 2020 100.64.1.1:60608 Connection reset, restarting [0]\nSat Jan 11 19:29:49 2020 TCP connection established with [AF_INET]10.250.7.77:35808\nSat Jan 11 19:29:49 2020 10.250.7.77:35808 TCP connection established with [AF_INET]100.64.1.1:1064\nSat Jan 11 19:29:49 2020 10.250.7.77:35808 Connection reset, restarting [0]\nSat Jan 11 19:29:49 2020 100.64.1.1:1064 Connection reset, restarting [0]\nSat Jan 11 19:29:53 2020 TCP connection established with [AF_INET]10.250.7.77:19676\nSat Jan 11 19:29:53 2020 10.250.7.77:19676 TCP connection established with [AF_INET]100.64.1.1:60616\nSat Jan 11 19:29:53 2020 10.250.7.77:19676 Connection reset, restarting [0]\nSat Jan 11 19:29:53 2020 100.64.1.1:60616 Connection reset, restarting [0]\nSat Jan 11 19:29:59 2020 TCP connection established with [AF_INET]10.250.7.77:35822\nSat Jan 11 19:29:59 2020 10.250.7.77:35822 TCP connection established with [AF_INET]100.64.1.1:1078\nSat Jan 11 19:29:59 2020 10.250.7.77:35822 Connection reset, restarting [0]\nSat Jan 11 19:29:59 2020 100.64.1.1:1078 Connection reset, restarting [0]\nSat Jan 11 19:30:03 2020 TCP connection established with [AF_INET]100.64.1.1:60634\nSat Jan 11 19:30:03 2020 100.64.1.1:60634 TCP connection established with [AF_INET]10.250.7.77:19694\nSat Jan 11 19:30:03 2020 100.64.1.1:60634 Connection reset, restarting [0]\nSat Jan 11 19:30:03 2020 10.250.7.77:19694 Connection reset, restarting [0]\nSat Jan 11 19:30:09 2020 TCP connection established with [AF_INET]10.250.7.77:35836\nSat Jan 11 19:30:09 2020 10.250.7.77:35836 TCP connection established with [AF_INET]100.64.1.1:1092\nSat Jan 11 19:30:09 2020 10.250.7.77:35836 Connection reset, restarting [0]\nSat Jan 11 19:30:09 2020 100.64.1.1:1092 Connection reset, restarting [0]\nSat Jan 11 19:30:13 2020 TCP connection established with [AF_INET]10.250.7.77:19698\nSat Jan 11 19:30:13 2020 10.250.7.77:19698 TCP connection established with [AF_INET]100.64.1.1:60638\nSat Jan 11 19:30:13 2020 10.250.7.77:19698 Connection reset, restarting [0]\nSat Jan 11 19:30:13 2020 100.64.1.1:60638 Connection reset, restarting [0]\nSat Jan 11 19:30:19 2020 TCP connection established with [AF_INET]10.250.7.77:35844\nSat Jan 11 19:30:19 2020 10.250.7.77:35844 TCP connection established with [AF_INET]100.64.1.1:1100\nSat Jan 11 19:30:19 2020 10.250.7.77:35844 Connection reset, restarting [0]\nSat Jan 11 19:30:19 2020 100.64.1.1:1100 Connection reset, restarting [0]\nSat Jan 11 19:30:23 2020 TCP connection established with [AF_INET]10.250.7.77:19708\nSat Jan 11 19:30:23 2020 10.250.7.77:19708 TCP connection established with [AF_INET]100.64.1.1:60648\nSat Jan 11 19:30:23 2020 10.250.7.77:19708 Connection reset, restarting [0]\nSat Jan 11 19:30:23 2020 100.64.1.1:60648 Connection reset, restarting [0]\nSat Jan 11 19:30:29 2020 TCP connection established with [AF_INET]10.250.7.77:35852\nSat Jan 11 19:30:29 2020 10.250.7.77:35852 TCP connection established with [AF_INET]100.64.1.1:1108\nSat Jan 11 19:30:29 2020 10.250.7.77:35852 Connection reset, restarting 
[0]\nSat Jan 11 19:30:29 2020 100.64.1.1:1108 Connection reset, restarting [0]\nSat Jan 11 19:30:33 2020 TCP connection established with [AF_INET]10.250.7.77:19714\nSat Jan 11 19:30:33 2020 10.250.7.77:19714 TCP connection established with [AF_INET]100.64.1.1:60654\nSat Jan 11 19:30:33 2020 10.250.7.77:19714 Connection reset, restarting [0]\nSat Jan 11 19:30:33 2020 100.64.1.1:60654 Connection reset, restarting [0]\nSat Jan 11 19:30:39 2020 TCP connection established with [AF_INET]10.250.7.77:35856\nSat Jan 11 19:30:39 2020 10.250.7.77:35856 TCP connection established with [AF_INET]100.64.1.1:1112\nSat Jan 11 19:30:39 2020 10.250.7.77:35856 Connection reset, restarting [0]\nSat Jan 11 19:30:39 2020 100.64.1.1:1112 Connection reset, restarting [0]\nSat Jan 11 19:30:43 2020 TCP connection established with [AF_INET]10.250.7.77:19722\nSat Jan 11 19:30:43 2020 10.250.7.77:19722 TCP connection established with [AF_INET]100.64.1.1:60662\nSat Jan 11 19:30:43 2020 10.250.7.77:19722 Connection reset, restarting [0]\nSat Jan 11 19:30:43 2020 100.64.1.1:60662 Connection reset, restarting [0]\nSat Jan 11 19:30:49 2020 TCP connection established with [AF_INET]10.250.7.77:35868\nSat Jan 11 19:30:49 2020 10.250.7.77:35868 TCP connection established with [AF_INET]100.64.1.1:1124\nSat Jan 11 19:30:49 2020 10.250.7.77:35868 Connection reset, restarting [0]\nSat Jan 11 19:30:49 2020 100.64.1.1:1124 Connection reset, restarting [0]\nSat Jan 11 19:30:53 2020 TCP connection established with [AF_INET]10.250.7.77:19734\nSat Jan 11 19:30:53 2020 10.250.7.77:19734 TCP connection established with [AF_INET]100.64.1.1:60674\nSat Jan 11 19:30:53 2020 10.250.7.77:19734 Connection reset, restarting [0]\nSat Jan 11 19:30:53 2020 100.64.1.1:60674 Connection reset, restarting [0]\nSat Jan 11 19:30:59 2020 TCP connection established with [AF_INET]10.250.7.77:35876\nSat Jan 11 19:30:59 2020 10.250.7.77:35876 TCP connection established with [AF_INET]100.64.1.1:1132\nSat Jan 11 19:30:59 2020 10.250.7.77:35876 Connection reset, restarting [0]\nSat Jan 11 19:30:59 2020 100.64.1.1:1132 Connection reset, restarting [0]\nSat Jan 11 19:31:03 2020 TCP connection established with [AF_INET]10.250.7.77:19752\nSat Jan 11 19:31:03 2020 10.250.7.77:19752 TCP connection established with [AF_INET]100.64.1.1:60692\nSat Jan 11 19:31:03 2020 10.250.7.77:19752 Connection reset, restarting [0]\nSat Jan 11 19:31:03 2020 100.64.1.1:60692 Connection reset, restarting [0]\nSat Jan 11 19:31:09 2020 TCP connection established with [AF_INET]10.250.7.77:35890\nSat Jan 11 19:31:09 2020 10.250.7.77:35890 TCP connection established with [AF_INET]100.64.1.1:1146\nSat Jan 11 19:31:09 2020 10.250.7.77:35890 Connection reset, restarting [0]\nSat Jan 11 19:31:09 2020 100.64.1.1:1146 Connection reset, restarting [0]\nSat Jan 11 19:31:13 2020 TCP connection established with [AF_INET]10.250.7.77:19756\nSat Jan 11 19:31:13 2020 10.250.7.77:19756 TCP connection established with [AF_INET]100.64.1.1:60696\nSat Jan 11 19:31:13 2020 10.250.7.77:19756 Connection reset, restarting [0]\nSat Jan 11 19:31:13 2020 100.64.1.1:60696 Connection reset, restarting [0]\nSat Jan 11 19:31:19 2020 TCP connection established with [AF_INET]10.250.7.77:35902\nSat Jan 11 19:31:19 2020 10.250.7.77:35902 TCP connection established with [AF_INET]100.64.1.1:1158\nSat Jan 11 19:31:19 2020 10.250.7.77:35902 Connection reset, restarting [0]\nSat Jan 11 19:31:19 2020 100.64.1.1:1158 Connection reset, restarting [0]\nSat Jan 11 19:31:23 2020 TCP connection established with 
[AF_INET]10.250.7.77:19766\nSat Jan 11 19:31:23 2020 10.250.7.77:19766 TCP connection established with [AF_INET]100.64.1.1:60706\nSat Jan 11 19:31:23 2020 10.250.7.77:19766 Connection reset, restarting [0]\nSat Jan 11 19:31:23 2020 100.64.1.1:60706 Connection reset, restarting [0]\nSat Jan 11 19:31:29 2020 TCP connection established with [AF_INET]10.250.7.77:35910\nSat Jan 11 19:31:29 2020 10.250.7.77:35910 TCP connection established with [AF_INET]100.64.1.1:1166\nSat Jan 11 19:31:29 2020 10.250.7.77:35910 Connection reset, restarting [0]\nSat Jan 11 19:31:29 2020 100.64.1.1:1166 Connection reset, restarting [0]\nSat Jan 11 19:31:33 2020 TCP connection established with [AF_INET]10.250.7.77:19772\nSat Jan 11 19:31:33 2020 10.250.7.77:19772 Connection reset, restarting [0]\nSat Jan 11 19:31:33 2020 TCP connection established with [AF_INET]100.64.1.1:60712\nSat Jan 11 19:31:33 2020 100.64.1.1:60712 Connection reset, restarting [0]\nSat Jan 11 19:31:39 2020 TCP connection established with [AF_INET]10.250.7.77:35914\nSat Jan 11 19:31:39 2020 10.250.7.77:35914 TCP connection established with [AF_INET]100.64.1.1:1170\nSat Jan 11 19:31:39 2020 10.250.7.77:35914 Connection reset, restarting [0]\nSat Jan 11 19:31:39 2020 100.64.1.1:1170 Connection reset, restarting [0]\nSat Jan 11 19:31:43 2020 TCP connection established with [AF_INET]10.250.7.77:19782\nSat Jan 11 19:31:43 2020 10.250.7.77:19782 TCP connection established with [AF_INET]100.64.1.1:60722\nSat Jan 11 19:31:43 2020 10.250.7.77:19782 Connection reset, restarting [0]\nSat Jan 11 19:31:43 2020 100.64.1.1:60722 Connection reset, restarting [0]\nSat Jan 11 19:31:49 2020 TCP connection established with [AF_INET]10.250.7.77:35926\nSat Jan 11 19:31:49 2020 10.250.7.77:35926 TCP connection established with [AF_INET]100.64.1.1:1182\nSat Jan 11 19:31:49 2020 10.250.7.77:35926 Connection reset, restarting [0]\nSat Jan 11 19:31:49 2020 100.64.1.1:1182 Connection reset, restarting [0]\nSat Jan 11 19:31:53 2020 TCP connection established with [AF_INET]10.250.7.77:19788\nSat Jan 11 19:31:53 2020 10.250.7.77:19788 TCP connection established with [AF_INET]100.64.1.1:60728\nSat Jan 11 19:31:53 2020 10.250.7.77:19788 Connection reset, restarting [0]\nSat Jan 11 19:31:53 2020 100.64.1.1:60728 Connection reset, restarting [0]\nSat Jan 11 19:31:59 2020 TCP connection established with [AF_INET]10.250.7.77:35934\nSat Jan 11 19:31:59 2020 10.250.7.77:35934 TCP connection established with [AF_INET]100.64.1.1:1190\nSat Jan 11 19:31:59 2020 10.250.7.77:35934 Connection reset, restarting [0]\nSat Jan 11 19:31:59 2020 100.64.1.1:1190 Connection reset, restarting [0]\nSat Jan 11 19:32:03 2020 TCP connection established with [AF_INET]10.250.7.77:19806\nSat Jan 11 19:32:03 2020 10.250.7.77:19806 TCP connection established with [AF_INET]100.64.1.1:60746\nSat Jan 11 19:32:03 2020 10.250.7.77:19806 Connection reset, restarting [0]\nSat Jan 11 19:32:03 2020 100.64.1.1:60746 Connection reset, restarting [0]\nSat Jan 11 19:32:09 2020 TCP connection established with [AF_INET]10.250.7.77:35948\nSat Jan 11 19:32:09 2020 10.250.7.77:35948 TCP connection established with [AF_INET]100.64.1.1:1204\nSat Jan 11 19:32:09 2020 10.250.7.77:35948 Connection reset, restarting [0]\nSat Jan 11 19:32:09 2020 100.64.1.1:1204 Connection reset, restarting [0]\nSat Jan 11 19:32:13 2020 TCP connection established with [AF_INET]10.250.7.77:19814\nSat Jan 11 19:32:13 2020 10.250.7.77:19814 TCP connection established with [AF_INET]100.64.1.1:60754\nSat Jan 11 19:32:13 2020 10.250.7.77:19814 
Connection reset, restarting [0]\nSat Jan 11 19:32:13 2020 100.64.1.1:60754 Connection reset, restarting [0]\nSat Jan 11 19:32:19 2020 TCP connection established with [AF_INET]10.250.7.77:35956\nSat Jan 11 19:32:19 2020 10.250.7.77:35956 TCP connection established with [AF_INET]100.64.1.1:1212\nSat Jan 11 19:32:19 2020 10.250.7.77:35956 Connection reset, restarting [0]\nSat Jan 11 19:32:19 2020 100.64.1.1:1212 Connection reset, restarting [0]\nSat Jan 11 19:32:23 2020 TCP connection established with [AF_INET]10.250.7.77:19824\nSat Jan 11 19:32:23 2020 10.250.7.77:19824 TCP connection established with [AF_INET]100.64.1.1:60764\nSat Jan 11 19:32:23 2020 10.250.7.77:19824 Connection reset, restarting [0]\nSat Jan 11 19:32:23 2020 100.64.1.1:60764 Connection reset, restarting [0]\nSat Jan 11 19:32:29 2020 TCP connection established with [AF_INET]10.250.7.77:35968\nSat Jan 11 19:32:29 2020 10.250.7.77:35968 TCP connection established with [AF_INET]100.64.1.1:1224\nSat Jan 11 19:32:29 2020 10.250.7.77:35968 Connection reset, restarting [0]\nSat Jan 11 19:32:29 2020 100.64.1.1:1224 Connection reset, restarting [0]\nSat Jan 11 19:32:33 2020 TCP connection established with [AF_INET]10.250.7.77:19830\nSat Jan 11 19:32:33 2020 10.250.7.77:19830 TCP connection established with [AF_INET]100.64.1.1:60770\nSat Jan 11 19:32:33 2020 10.250.7.77:19830 Connection reset, restarting [0]\nSat Jan 11 19:32:33 2020 100.64.1.1:60770 Connection reset, restarting [0]\nSat Jan 11 19:32:39 2020 TCP connection established with [AF_INET]10.250.7.77:35972\nSat Jan 11 19:32:39 2020 10.250.7.77:35972 TCP connection established with [AF_INET]100.64.1.1:1228\nSat Jan 11 19:32:39 2020 10.250.7.77:35972 Connection reset, restarting [0]\nSat Jan 11 19:32:39 2020 100.64.1.1:1228 Connection reset, restarting [0]\nSat Jan 11 19:32:43 2020 TCP connection established with [AF_INET]10.250.7.77:19840\nSat Jan 11 19:32:43 2020 10.250.7.77:19840 TCP connection established with [AF_INET]100.64.1.1:60780\nSat Jan 11 19:32:43 2020 10.250.7.77:19840 Connection reset, restarting [0]\nSat Jan 11 19:32:43 2020 100.64.1.1:60780 Connection reset, restarting [0]\nSat Jan 11 19:32:49 2020 TCP connection established with [AF_INET]10.250.7.77:35984\nSat Jan 11 19:32:49 2020 10.250.7.77:35984 TCP connection established with [AF_INET]100.64.1.1:1240\nSat Jan 11 19:32:49 2020 10.250.7.77:35984 Connection reset, restarting [0]\nSat Jan 11 19:32:49 2020 100.64.1.1:1240 Connection reset, restarting [0]\nSat Jan 11 19:32:53 2020 TCP connection established with [AF_INET]10.250.7.77:19846\nSat Jan 11 19:32:53 2020 10.250.7.77:19846 TCP connection established with [AF_INET]100.64.1.1:60786\nSat Jan 11 19:32:53 2020 10.250.7.77:19846 Connection reset, restarting [0]\nSat Jan 11 19:32:53 2020 100.64.1.1:60786 Connection reset, restarting [0]\nSat Jan 11 19:32:59 2020 TCP connection established with [AF_INET]10.250.7.77:35992\nSat Jan 11 19:32:59 2020 10.250.7.77:35992 TCP connection established with [AF_INET]100.64.1.1:1248\nSat Jan 11 19:32:59 2020 10.250.7.77:35992 Connection reset, restarting [0]\nSat Jan 11 19:32:59 2020 100.64.1.1:1248 Connection reset, restarting [0]\nSat Jan 11 19:33:03 2020 TCP connection established with [AF_INET]10.250.7.77:19874\nSat Jan 11 19:33:03 2020 10.250.7.77:19874 TCP connection established with [AF_INET]100.64.1.1:60814\nSat Jan 11 19:33:03 2020 10.250.7.77:19874 Connection reset, restarting [0]\nSat Jan 11 19:33:03 2020 100.64.1.1:60814 Connection reset, restarting [0]\nSat Jan 11 19:33:09 2020 TCP connection established 
with [AF_INET]10.250.7.77:36006\nSat Jan 11 19:33:09 2020 10.250.7.77:36006 TCP connection established with [AF_INET]100.64.1.1:1262\nSat Jan 11 19:33:09 2020 10.250.7.77:36006 Connection reset, restarting [0]\nSat Jan 11 19:33:09 2020 100.64.1.1:1262 Connection reset, restarting [0]\nSat Jan 11 19:33:13 2020 TCP connection established with [AF_INET]10.250.7.77:19878\nSat Jan 11 19:33:13 2020 10.250.7.77:19878 TCP connection established with [AF_INET]100.64.1.1:60818\nSat Jan 11 19:33:13 2020 10.250.7.77:19878 Connection reset, restarting [0]\nSat Jan 11 19:33:13 2020 100.64.1.1:60818 Connection reset, restarting [0]\nSat Jan 11 19:33:19 2020 TCP connection established with [AF_INET]10.250.7.77:36014\nSat Jan 11 19:33:19 2020 10.250.7.77:36014 TCP connection established with [AF_INET]100.64.1.1:1270\nSat Jan 11 19:33:19 2020 10.250.7.77:36014 Connection reset, restarting [0]\nSat Jan 11 19:33:19 2020 100.64.1.1:1270 Connection reset, restarting [0]\nSat Jan 11 19:33:23 2020 TCP connection established with [AF_INET]10.250.7.77:19892\nSat Jan 11 19:33:23 2020 10.250.7.77:19892 TCP connection established with [AF_INET]100.64.1.1:60832\nSat Jan 11 19:33:23 2020 10.250.7.77:19892 Connection reset, restarting [0]\nSat Jan 11 19:33:23 2020 100.64.1.1:60832 Connection reset, restarting [0]\nSat Jan 11 19:33:29 2020 TCP connection established with [AF_INET]10.250.7.77:36022\nSat Jan 11 19:33:29 2020 10.250.7.77:36022 TCP connection established with [AF_INET]100.64.1.1:1278\nSat Jan 11 19:33:29 2020 10.250.7.77:36022 Connection reset, restarting [0]\nSat Jan 11 19:33:29 2020 100.64.1.1:1278 Connection reset, restarting [0]\nSat Jan 11 19:33:33 2020 TCP connection established with [AF_INET]10.250.7.77:19902\nSat Jan 11 19:33:33 2020 10.250.7.77:19902 TCP connection established with [AF_INET]100.64.1.1:60842\nSat Jan 11 19:33:33 2020 10.250.7.77:19902 Connection reset, restarting [0]\nSat Jan 11 19:33:33 2020 100.64.1.1:60842 Connection reset, restarting [0]\nSat Jan 11 19:33:39 2020 TCP connection established with [AF_INET]10.250.7.77:36028\nSat Jan 11 19:33:39 2020 10.250.7.77:36028 TCP connection established with [AF_INET]100.64.1.1:1284\nSat Jan 11 19:33:39 2020 10.250.7.77:36028 Connection reset, restarting [0]\nSat Jan 11 19:33:39 2020 100.64.1.1:1284 Connection reset, restarting [0]\nSat Jan 11 19:33:43 2020 TCP connection established with [AF_INET]10.250.7.77:19910\nSat Jan 11 19:33:43 2020 10.250.7.77:19910 Connection reset, restarting [0]\nSat Jan 11 19:33:43 2020 TCP connection established with [AF_INET]100.64.1.1:60850\nSat Jan 11 19:33:43 2020 100.64.1.1:60850 Connection reset, restarting [0]\nSat Jan 11 19:33:49 2020 TCP connection established with [AF_INET]10.250.7.77:36042\nSat Jan 11 19:33:49 2020 10.250.7.77:36042 TCP connection established with [AF_INET]100.64.1.1:1298\nSat Jan 11 19:33:49 2020 10.250.7.77:36042 Connection reset, restarting [0]\nSat Jan 11 19:33:49 2020 100.64.1.1:1298 Connection reset, restarting [0]\nSat Jan 11 19:33:53 2020 TCP connection established with [AF_INET]10.250.7.77:19916\nSat Jan 11 19:33:53 2020 10.250.7.77:19916 TCP connection established with [AF_INET]100.64.1.1:60856\nSat Jan 11 19:33:53 2020 10.250.7.77:19916 Connection reset, restarting [0]\nSat Jan 11 19:33:53 2020 100.64.1.1:60856 Connection reset, restarting [0]\nSat Jan 11 19:33:59 2020 TCP connection established with [AF_INET]10.250.7.77:36050\nSat Jan 11 19:33:59 2020 10.250.7.77:36050 TCP connection established with [AF_INET]100.64.1.1:1306\nSat Jan 11 19:33:59 2020 10.250.7.77:36050 
Connection reset, restarting [0]\nSat Jan 11 19:33:59 2020 100.64.1.1:1306 Connection reset, restarting [0]\nSat Jan 11 19:34:03 2020 TCP connection established with [AF_INET]10.250.7.77:19936\nSat Jan 11 19:34:03 2020 10.250.7.77:19936 TCP connection established with [AF_INET]100.64.1.1:60876\nSat Jan 11 19:34:03 2020 10.250.7.77:19936 Connection reset, restarting [0]\nSat Jan 11 19:34:03 2020 100.64.1.1:60876 Connection reset, restarting [0]\nSat Jan 11 19:34:09 2020 TCP connection established with [AF_INET]10.250.7.77:36064\nSat Jan 11 19:34:09 2020 10.250.7.77:36064 TCP connection established with [AF_INET]100.64.1.1:1320\nSat Jan 11 19:34:09 2020 10.250.7.77:36064 Connection reset, restarting [0]\nSat Jan 11 19:34:09 2020 100.64.1.1:1320 Connection reset, restarting [0]\nSat Jan 11 19:34:13 2020 TCP connection established with [AF_INET]10.250.7.77:19940\nSat Jan 11 19:34:13 2020 10.250.7.77:19940 TCP connection established with [AF_INET]100.64.1.1:60880\nSat Jan 11 19:34:13 2020 10.250.7.77:19940 Connection reset, restarting [0]\nSat Jan 11 19:34:13 2020 100.64.1.1:60880 Connection reset, restarting [0]\nSat Jan 11 19:34:19 2020 TCP connection established with [AF_INET]10.250.7.77:36072\nSat Jan 11 19:34:19 2020 10.250.7.77:36072 TCP connection established with [AF_INET]100.64.1.1:1328\nSat Jan 11 19:34:19 2020 10.250.7.77:36072 Connection reset, restarting [0]\nSat Jan 11 19:34:19 2020 100.64.1.1:1328 Connection reset, restarting [0]\nSat Jan 11 19:34:23 2020 TCP connection established with [AF_INET]10.250.7.77:19950\nSat Jan 11 19:34:23 2020 10.250.7.77:19950 TCP connection established with [AF_INET]100.64.1.1:60890\nSat Jan 11 19:34:23 2020 10.250.7.77:19950 Connection reset, restarting [0]\nSat Jan 11 19:34:23 2020 100.64.1.1:60890 Connection reset, restarting [0]\nSat Jan 11 19:34:29 2020 TCP connection established with [AF_INET]10.250.7.77:36080\nSat Jan 11 19:34:29 2020 10.250.7.77:36080 TCP connection established with [AF_INET]100.64.1.1:1336\nSat Jan 11 19:34:29 2020 10.250.7.77:36080 Connection reset, restarting [0]\nSat Jan 11 19:34:29 2020 100.64.1.1:1336 Connection reset, restarting [0]\nSat Jan 11 19:34:33 2020 TCP connection established with [AF_INET]10.250.7.77:19958\nSat Jan 11 19:34:33 2020 10.250.7.77:19958 Connection reset, restarting [0]\nSat Jan 11 19:34:33 2020 TCP connection established with [AF_INET]100.64.1.1:60898\nSat Jan 11 19:34:33 2020 100.64.1.1:60898 Connection reset, restarting [0]\nSat Jan 11 19:34:39 2020 TCP connection established with [AF_INET]10.250.7.77:36088\nSat Jan 11 19:34:39 2020 10.250.7.77:36088 TCP connection established with [AF_INET]100.64.1.1:1344\nSat Jan 11 19:34:39 2020 10.250.7.77:36088 Connection reset, restarting [0]\nSat Jan 11 19:34:39 2020 100.64.1.1:1344 Connection reset, restarting [0]\nSat Jan 11 19:34:43 2020 TCP connection established with [AF_INET]10.250.7.77:19972\nSat Jan 11 19:34:43 2020 10.250.7.77:19972 TCP connection established with [AF_INET]100.64.1.1:60912\nSat Jan 11 19:34:43 2020 10.250.7.77:19972 Connection reset, restarting [0]\nSat Jan 11 19:34:43 2020 100.64.1.1:60912 Connection reset, restarting [0]\nSat Jan 11 19:34:49 2020 TCP connection established with [AF_INET]10.250.7.77:36098\nSat Jan 11 19:34:49 2020 10.250.7.77:36098 TCP connection established with [AF_INET]100.64.1.1:1354\nSat Jan 11 19:34:49 2020 10.250.7.77:36098 Connection reset, restarting [0]\nSat Jan 11 19:34:49 2020 100.64.1.1:1354 Connection reset, restarting [0]\nSat Jan 11 19:34:53 2020 TCP connection established with 
[AF_INET]10.250.7.77:19978\nSat Jan 11 19:34:53 2020 10.250.7.77:19978 TCP connection established with [AF_INET]100.64.1.1:60918\nSat Jan 11 19:34:53 2020 10.250.7.77:19978 Connection reset, restarting [0]\nSat Jan 11 19:34:53 2020 100.64.1.1:60918 Connection reset, restarting [0]\nSat Jan 11 19:34:59 2020 TCP connection established with [AF_INET]10.250.7.77:36110\nSat Jan 11 19:34:59 2020 10.250.7.77:36110 TCP connection established with [AF_INET]100.64.1.1:1366\nSat Jan 11 19:34:59 2020 10.250.7.77:36110 Connection reset, restarting [0]\nSat Jan 11 19:34:59 2020 100.64.1.1:1366 Connection reset, restarting [0]\nSat Jan 11 19:35:03 2020 TCP connection established with [AF_INET]10.250.7.77:19996\nSat Jan 11 19:35:03 2020 10.250.7.77:19996 TCP connection established with [AF_INET]100.64.1.1:60936\nSat Jan 11 19:35:03 2020 10.250.7.77:19996 Connection reset, restarting [0]\nSat Jan 11 19:35:03 2020 100.64.1.1:60936 Connection reset, restarting [0]\nSat Jan 11 19:35:09 2020 TCP connection established with [AF_INET]10.250.7.77:36124\nSat Jan 11 19:35:09 2020 10.250.7.77:36124 TCP connection established with [AF_INET]100.64.1.1:1380\nSat Jan 11 19:35:09 2020 10.250.7.77:36124 Connection reset, restarting [0]\nSat Jan 11 19:35:09 2020 100.64.1.1:1380 Connection reset, restarting [0]\nSat Jan 11 19:35:13 2020 TCP connection established with [AF_INET]10.250.7.77:20000\nSat Jan 11 19:35:13 2020 10.250.7.77:20000 TCP connection established with [AF_INET]100.64.1.1:60940\nSat Jan 11 19:35:13 2020 10.250.7.77:20000 Connection reset, restarting [0]\nSat Jan 11 19:35:13 2020 100.64.1.1:60940 Connection reset, restarting [0]\nSat Jan 11 19:35:19 2020 TCP connection established with [AF_INET]10.250.7.77:36132\nSat Jan 11 19:35:19 2020 10.250.7.77:36132 TCP connection established with [AF_INET]100.64.1.1:1388\nSat Jan 11 19:35:19 2020 10.250.7.77:36132 Connection reset, restarting [0]\nSat Jan 11 19:35:19 2020 100.64.1.1:1388 Connection reset, restarting [0]\nSat Jan 11 19:35:23 2020 TCP connection established with [AF_INET]10.250.7.77:20010\nSat Jan 11 19:35:23 2020 10.250.7.77:20010 TCP connection established with [AF_INET]100.64.1.1:60950\nSat Jan 11 19:35:23 2020 10.250.7.77:20010 Connection reset, restarting [0]\nSat Jan 11 19:35:23 2020 100.64.1.1:60950 Connection reset, restarting [0]\nSat Jan 11 19:35:29 2020 TCP connection established with [AF_INET]10.250.7.77:36142\nSat Jan 11 19:35:29 2020 10.250.7.77:36142 TCP connection established with [AF_INET]100.64.1.1:1398\nSat Jan 11 19:35:29 2020 10.250.7.77:36142 Connection reset, restarting [0]\nSat Jan 11 19:35:29 2020 100.64.1.1:1398 Connection reset, restarting [0]\nSat Jan 11 19:35:33 2020 TCP connection established with [AF_INET]10.250.7.77:20018\nSat Jan 11 19:35:33 2020 10.250.7.77:20018 TCP connection established with [AF_INET]100.64.1.1:60958\nSat Jan 11 19:35:33 2020 10.250.7.77:20018 Connection reset, restarting [0]\nSat Jan 11 19:35:33 2020 100.64.1.1:60958 Connection reset, restarting [0]\nSat Jan 11 19:35:39 2020 TCP connection established with [AF_INET]10.250.7.77:36146\nSat Jan 11 19:35:39 2020 10.250.7.77:36146 TCP connection established with [AF_INET]100.64.1.1:1402\nSat Jan 11 19:35:39 2020 10.250.7.77:36146 Connection reset, restarting [0]\nSat Jan 11 19:35:39 2020 100.64.1.1:1402 Connection reset, restarting [0]\nSat Jan 11 19:35:43 2020 TCP connection established with [AF_INET]10.250.7.77:20026\nSat Jan 11 19:35:43 2020 10.250.7.77:20026 TCP connection established with [AF_INET]100.64.1.1:60966\nSat Jan 11 19:35:43 2020 
10.250.7.77:20026 Connection reset, restarting [0]\nSat Jan 11 19:35:43 2020 100.64.1.1:60966 Connection reset, restarting [0]\nSat Jan 11 19:35:49 2020 TCP connection established with [AF_INET]10.250.7.77:36156\nSat Jan 11 19:35:49 2020 10.250.7.77:36156 TCP connection established with [AF_INET]100.64.1.1:1412\nSat Jan 11 19:35:49 2020 10.250.7.77:36156 Connection reset, restarting [0]\nSat Jan 11 19:35:49 2020 100.64.1.1:1412 Connection reset, restarting [0]\nSat Jan 11 19:35:53 2020 TCP connection established with [AF_INET]10.250.7.77:20070\nSat Jan 11 19:35:53 2020 10.250.7.77:20070 TCP connection established with [AF_INET]100.64.1.1:61010\nSat Jan 11 19:35:53 2020 10.250.7.77:20070 Connection reset, restarting [0]\nSat Jan 11 19:35:53 2020 100.64.1.1:61010 Connection reset, restarting [0]\nSat Jan 11 19:35:59 2020 TCP connection established with [AF_INET]10.250.7.77:36166\nSat Jan 11 19:35:59 2020 10.250.7.77:36166 TCP connection established with [AF_INET]100.64.1.1:1422\nSat Jan 11 19:35:59 2020 10.250.7.77:36166 Connection reset, restarting [0]\nSat Jan 11 19:35:59 2020 100.64.1.1:1422 Connection reset, restarting [0]\nSat Jan 11 19:36:03 2020 TCP connection established with [AF_INET]10.250.7.77:20088\nSat Jan 11 19:36:03 2020 10.250.7.77:20088 TCP connection established with [AF_INET]100.64.1.1:61028\nSat Jan 11 19:36:03 2020 10.250.7.77:20088 Connection reset, restarting [0]\nSat Jan 11 19:36:03 2020 100.64.1.1:61028 Connection reset, restarting [0]\nSat Jan 11 19:36:09 2020 TCP connection established with [AF_INET]10.250.7.77:36180\nSat Jan 11 19:36:09 2020 10.250.7.77:36180 TCP connection established with [AF_INET]100.64.1.1:1436\nSat Jan 11 19:36:09 2020 10.250.7.77:36180 Connection reset, restarting [0]\nSat Jan 11 19:36:09 2020 100.64.1.1:1436 Connection reset, restarting [0]\nSat Jan 11 19:36:13 2020 TCP connection established with [AF_INET]10.250.7.77:20094\nSat Jan 11 19:36:13 2020 10.250.7.77:20094 TCP connection established with [AF_INET]100.64.1.1:61034\nSat Jan 11 19:36:13 2020 10.250.7.77:20094 Connection reset, restarting [0]\nSat Jan 11 19:36:13 2020 100.64.1.1:61034 Connection reset, restarting [0]\nSat Jan 11 19:36:19 2020 TCP connection established with [AF_INET]10.250.7.77:36192\nSat Jan 11 19:36:19 2020 10.250.7.77:36192 TCP connection established with [AF_INET]100.64.1.1:1448\nSat Jan 11 19:36:19 2020 10.250.7.77:36192 Connection reset, restarting [0]\nSat Jan 11 19:36:19 2020 100.64.1.1:1448 Connection reset, restarting [0]\nSat Jan 11 19:36:23 2020 TCP connection established with [AF_INET]10.250.7.77:20106\nSat Jan 11 19:36:23 2020 10.250.7.77:20106 TCP connection established with [AF_INET]100.64.1.1:61046\nSat Jan 11 19:36:23 2020 10.250.7.77:20106 Connection reset, restarting [0]\nSat Jan 11 19:36:23 2020 100.64.1.1:61046 Connection reset, restarting [0]\nSat Jan 11 19:36:29 2020 TCP connection established with [AF_INET]10.250.7.77:36204\nSat Jan 11 19:36:29 2020 10.250.7.77:36204 TCP connection established with [AF_INET]100.64.1.1:1460\nSat Jan 11 19:36:29 2020 10.250.7.77:36204 Connection reset, restarting [0]\nSat Jan 11 19:36:29 2020 100.64.1.1:1460 Connection reset, restarting [0]\nSat Jan 11 19:36:33 2020 TCP connection established with [AF_INET]10.250.7.77:20112\nSat Jan 11 19:36:33 2020 10.250.7.77:20112 TCP connection established with [AF_INET]100.64.1.1:61052\nSat Jan 11 19:36:33 2020 10.250.7.77:20112 Connection reset, restarting [0]\nSat Jan 11 19:36:33 2020 100.64.1.1:61052 Connection reset, restarting [0]\nSat Jan 11 19:36:39 2020 TCP 
connection established with [AF_INET]10.250.7.77:36208\nSat Jan 11 19:36:39 2020 10.250.7.77:36208 TCP connection established with [AF_INET]100.64.1.1:1464\nSat Jan 11 19:36:39 2020 10.250.7.77:36208 Connection reset, restarting [0]\nSat Jan 11 19:36:39 2020 100.64.1.1:1464 Connection reset, restarting [0]\nSat Jan 11 19:36:43 2020 TCP connection established with [AF_INET]10.250.7.77:20120\nSat Jan 11 19:36:43 2020 10.250.7.77:20120 TCP connection established with [AF_INET]100.64.1.1:61060\nSat Jan 11 19:36:43 2020 10.250.7.77:20120 Connection reset, restarting [0]\nSat Jan 11 19:36:43 2020 100.64.1.1:61060 Connection reset, restarting [0]\nSat Jan 11 19:36:49 2020 TCP connection established with [AF_INET]10.250.7.77:36218\nSat Jan 11 19:36:49 2020 10.250.7.77:36218 TCP connection established with [AF_INET]100.64.1.1:1474\nSat Jan 11 19:36:49 2020 10.250.7.77:36218 Connection reset, restarting [0]\nSat Jan 11 19:36:49 2020 100.64.1.1:1474 Connection reset, restarting [0]\nSat Jan 11 19:36:53 2020 TCP connection established with [AF_INET]10.250.7.77:20126\nSat Jan 11 19:36:53 2020 10.250.7.77:20126 TCP connection established with [AF_INET]100.64.1.1:61066\nSat Jan 11 19:36:53 2020 10.250.7.77:20126 Connection reset, restarting [0]\nSat Jan 11 19:36:53 2020 100.64.1.1:61066 Connection reset, restarting [0]\nSat Jan 11 19:36:59 2020 TCP connection established with [AF_INET]10.250.7.77:36228\nSat Jan 11 19:36:59 2020 10.250.7.77:36228 TCP connection established with [AF_INET]100.64.1.1:1484\nSat Jan 11 19:36:59 2020 10.250.7.77:36228 Connection reset, restarting [0]\nSat Jan 11 19:36:59 2020 100.64.1.1:1484 Connection reset, restarting [0]\nSat Jan 11 19:37:03 2020 TCP connection established with [AF_INET]10.250.7.77:20148\nSat Jan 11 19:37:03 2020 10.250.7.77:20148 TCP connection established with [AF_INET]100.64.1.1:61088\nSat Jan 11 19:37:03 2020 10.250.7.77:20148 Connection reset, restarting [0]\nSat Jan 11 19:37:03 2020 100.64.1.1:61088 Connection reset, restarting [0]\nSat Jan 11 19:37:09 2020 TCP connection established with [AF_INET]10.250.7.77:36242\nSat Jan 11 19:37:09 2020 10.250.7.77:36242 TCP connection established with [AF_INET]100.64.1.1:1498\nSat Jan 11 19:37:09 2020 10.250.7.77:36242 Connection reset, restarting [0]\nSat Jan 11 19:37:09 2020 100.64.1.1:1498 Connection reset, restarting [0]\nSat Jan 11 19:37:13 2020 TCP connection established with [AF_INET]10.250.7.77:20156\nSat Jan 11 19:37:13 2020 10.250.7.77:20156 TCP connection established with [AF_INET]100.64.1.1:61096\nSat Jan 11 19:37:13 2020 10.250.7.77:20156 Connection reset, restarting [0]\nSat Jan 11 19:37:13 2020 100.64.1.1:61096 Connection reset, restarting [0]\nSat Jan 11 19:37:19 2020 TCP connection established with [AF_INET]10.250.7.77:36250\nSat Jan 11 19:37:19 2020 10.250.7.77:36250 TCP connection established with [AF_INET]100.64.1.1:1506\nSat Jan 11 19:37:19 2020 10.250.7.77:36250 Connection reset, restarting [0]\nSat Jan 11 19:37:19 2020 100.64.1.1:1506 Connection reset, restarting [0]\nSat Jan 11 19:37:23 2020 TCP connection established with [AF_INET]10.250.7.77:20168\nSat Jan 11 19:37:23 2020 10.250.7.77:20168 TCP connection established with [AF_INET]100.64.1.1:61108\nSat Jan 11 19:37:23 2020 10.250.7.77:20168 Connection reset, restarting [0]\nSat Jan 11 19:37:23 2020 100.64.1.1:61108 Connection reset, restarting [0]\nSat Jan 11 19:37:29 2020 TCP connection established with [AF_INET]10.250.7.77:36264\nSat Jan 11 19:37:29 2020 10.250.7.77:36264 TCP connection established with [AF_INET]100.64.1.1:1520\nSat Jan 
11 19:37:29 2020 10.250.7.77:36264 Connection reset, restarting [0]\nSat Jan 11 19:37:29 2020 100.64.1.1:1520 Connection reset, restarting [0]\nSat Jan 11 19:37:33 2020 TCP connection established with [AF_INET]10.250.7.77:20174\nSat Jan 11 19:37:33 2020 10.250.7.77:20174 TCP connection established with [AF_INET]100.64.1.1:61114\nSat Jan 11 19:37:33 2020 10.250.7.77:20174 Connection reset, restarting [0]\nSat Jan 11 19:37:33 2020 100.64.1.1:61114 Connection reset, restarting [0]\nSat Jan 11 19:37:39 2020 TCP connection established with [AF_INET]10.250.7.77:36270\nSat Jan 11 19:37:39 2020 10.250.7.77:36270 TCP connection established with [AF_INET]100.64.1.1:1526\nSat Jan 11 19:37:39 2020 10.250.7.77:36270 Connection reset, restarting [0]\nSat Jan 11 19:37:39 2020 100.64.1.1:1526 Connection reset, restarting [0]\nSat Jan 11 19:37:43 2020 TCP connection established with [AF_INET]10.250.7.77:20182\nSat Jan 11 19:37:43 2020 10.250.7.77:20182 TCP connection established with [AF_INET]100.64.1.1:61122\nSat Jan 11 19:37:43 2020 10.250.7.77:20182 Connection reset, restarting [0]\nSat Jan 11 19:37:43 2020 100.64.1.1:61122 Connection reset, restarting [0]\nSat Jan 11 19:37:49 2020 TCP connection established with [AF_INET]10.250.7.77:36280\nSat Jan 11 19:37:49 2020 10.250.7.77:36280 TCP connection established with [AF_INET]100.64.1.1:1536\nSat Jan 11 19:37:49 2020 10.250.7.77:36280 Connection reset, restarting [0]\nSat Jan 11 19:37:49 2020 100.64.1.1:1536 Connection reset, restarting [0]\nSat Jan 11 19:37:53 2020 TCP connection established with [AF_INET]10.250.7.77:20188\nSat Jan 11 19:37:53 2020 10.250.7.77:20188 TCP connection established with [AF_INET]100.64.1.1:61128\nSat Jan 11 19:37:53 2020 10.250.7.77:20188 Connection reset, restarting [0]\nSat Jan 11 19:37:53 2020 100.64.1.1:61128 Connection reset, restarting [0]\nSat Jan 11 19:37:59 2020 TCP connection established with [AF_INET]10.250.7.77:36288\nSat Jan 11 19:37:59 2020 10.250.7.77:36288 TCP connection established with [AF_INET]100.64.1.1:1544\nSat Jan 11 19:37:59 2020 10.250.7.77:36288 Connection reset, restarting [0]\nSat Jan 11 19:37:59 2020 100.64.1.1:1544 Connection reset, restarting [0]\nSat Jan 11 19:38:03 2020 TCP connection established with [AF_INET]10.250.7.77:20206\nSat Jan 11 19:38:03 2020 10.250.7.77:20206 TCP connection established with [AF_INET]100.64.1.1:61146\nSat Jan 11 19:38:03 2020 10.250.7.77:20206 Connection reset, restarting [0]\nSat Jan 11 19:38:03 2020 100.64.1.1:61146 Connection reset, restarting [0]\nSat Jan 11 19:38:09 2020 TCP connection established with [AF_INET]10.250.7.77:36302\nSat Jan 11 19:38:09 2020 10.250.7.77:36302 TCP connection established with [AF_INET]100.64.1.1:1558\nSat Jan 11 19:38:09 2020 10.250.7.77:36302 Connection reset, restarting [0]\nSat Jan 11 19:38:09 2020 100.64.1.1:1558 Connection reset, restarting [0]\nSat Jan 11 19:38:13 2020 TCP connection established with [AF_INET]10.250.7.77:20212\nSat Jan 11 19:38:13 2020 10.250.7.77:20212 TCP connection established with [AF_INET]100.64.1.1:61152\nSat Jan 11 19:38:13 2020 10.250.7.77:20212 Connection reset, restarting [0]\nSat Jan 11 19:38:13 2020 100.64.1.1:61152 Connection reset, restarting [0]\nSat Jan 11 19:38:19 2020 TCP connection established with [AF_INET]10.250.7.77:36312\nSat Jan 11 19:38:19 2020 10.250.7.77:36312 TCP connection established with [AF_INET]100.64.1.1:1568\nSat Jan 11 19:38:19 2020 10.250.7.77:36312 Connection reset, restarting [0]\nSat Jan 11 19:38:19 2020 100.64.1.1:1568 Connection reset, restarting [0]\nSat Jan 11 19:38:23 
2020 TCP connection established with [AF_INET]10.250.7.77:20226\nSat Jan 11 19:38:23 2020 10.250.7.77:20226 TCP connection established with [AF_INET]100.64.1.1:61166\nSat Jan 11 19:38:23 2020 10.250.7.77:20226 Connection reset, restarting [0]\nSat Jan 11 19:38:23 2020 100.64.1.1:61166 Connection reset, restarting [0]\nSat Jan 11 19:38:29 2020 TCP connection established with [AF_INET]10.250.7.77:36320\nSat Jan 11 19:38:29 2020 10.250.7.77:36320 TCP connection established with [AF_INET]100.64.1.1:1576\nSat Jan 11 19:38:29 2020 10.250.7.77:36320 Connection reset, restarting [0]\nSat Jan 11 19:38:29 2020 100.64.1.1:1576 Connection reset, restarting [0]\nSat Jan 11 19:38:33 2020 TCP connection established with [AF_INET]10.250.7.77:20232\nSat Jan 11 19:38:33 2020 10.250.7.77:20232 TCP connection established with [AF_INET]100.64.1.1:61172\nSat Jan 11 19:38:33 2020 10.250.7.77:20232 Connection reset, restarting [0]\nSat Jan 11 19:38:33 2020 100.64.1.1:61172 Connection reset, restarting [0]\nSat Jan 11 19:38:39 2020 TCP connection established with [AF_INET]10.250.7.77:36324\nSat Jan 11 19:38:39 2020 10.250.7.77:36324 TCP connection established with [AF_INET]100.64.1.1:1580\nSat Jan 11 19:38:39 2020 10.250.7.77:36324 Connection reset, restarting [0]\nSat Jan 11 19:38:39 2020 100.64.1.1:1580 Connection reset, restarting [0]\nSat Jan 11 19:38:43 2020 TCP connection established with [AF_INET]10.250.7.77:20240\nSat Jan 11 19:38:43 2020 10.250.7.77:20240 Connection reset, restarting [0]\nSat Jan 11 19:38:43 2020 TCP connection established with [AF_INET]100.64.1.1:61180\nSat Jan 11 19:38:43 2020 100.64.1.1:61180 Connection reset, restarting [0]\nSat Jan 11 19:38:49 2020 TCP connection established with [AF_INET]10.250.7.77:36372\nSat Jan 11 19:38:49 2020 10.250.7.77:36372 TCP connection established with [AF_INET]100.64.1.1:1628\nSat Jan 11 19:38:49 2020 10.250.7.77:36372 Connection reset, restarting [0]\nSat Jan 11 19:38:49 2020 100.64.1.1:1628 Connection reset, restarting [0]\nSat Jan 11 19:38:53 2020 TCP connection established with [AF_INET]10.250.7.77:20246\nSat Jan 11 19:38:53 2020 10.250.7.77:20246 TCP connection established with [AF_INET]100.64.1.1:61186\nSat Jan 11 19:38:53 2020 10.250.7.77:20246 Connection reset, restarting [0]\nSat Jan 11 19:38:53 2020 100.64.1.1:61186 Connection reset, restarting [0]\nSat Jan 11 19:38:59 2020 TCP connection established with [AF_INET]10.250.7.77:36380\nSat Jan 11 19:38:59 2020 10.250.7.77:36380 TCP connection established with [AF_INET]100.64.1.1:1636\nSat Jan 11 19:38:59 2020 10.250.7.77:36380 Connection reset, restarting [0]\nSat Jan 11 19:38:59 2020 100.64.1.1:1636 Connection reset, restarting [0]\nSat Jan 11 19:39:03 2020 TCP connection established with [AF_INET]10.250.7.77:20264\nSat Jan 11 19:39:03 2020 10.250.7.77:20264 TCP connection established with [AF_INET]100.64.1.1:61204\nSat Jan 11 19:39:03 2020 10.250.7.77:20264 Connection reset, restarting [0]\nSat Jan 11 19:39:03 2020 100.64.1.1:61204 Connection reset, restarting [0]\nSat Jan 11 19:39:09 2020 TCP connection established with [AF_INET]10.250.7.77:36396\nSat Jan 11 19:39:09 2020 10.250.7.77:36396 TCP connection established with [AF_INET]100.64.1.1:1652\nSat Jan 11 19:39:09 2020 10.250.7.77:36396 Connection reset, restarting [0]\nSat Jan 11 19:39:09 2020 100.64.1.1:1652 Connection reset, restarting [0]\nSat Jan 11 19:39:13 2020 TCP connection established with [AF_INET]10.250.7.77:20270\nSat Jan 11 19:39:13 2020 10.250.7.77:20270 TCP connection established with [AF_INET]100.64.1.1:61210\nSat Jan 11 
19:39:13 2020 10.250.7.77:20270 Connection reset, restarting [0]\nSat Jan 11 19:39:13 2020 100.64.1.1:61210 Connection reset, restarting [0]\n[OpenVPN log, Sat Jan 11 19:39:19 2020 through Sat Jan 11 19:56:39 2020: the same pattern repeats roughly every 4 to 6 seconds; a TCP connection is established with [AF_INET]10.250.7.77:<port> and with [AF_INET]100.64.1.1:<port>, and both endpoints then log "Connection reset, restarting [0]". Repeated entries omitted.]\nSat Jan 11 19:56:43 2020 TCP connection established with [AF_INET]10.250.7.77:21364\nSat Jan 11 19:56:43 2020 10.250.7.77:21364 TCP connection 
established with [AF_INET]100.64.1.1:62304\nSat Jan 11 19:56:43 2020 10.250.7.77:21364 Connection reset, restarting [0]\nSat Jan 11 19:56:43 2020 100.64.1.1:62304 Connection reset, restarting [0]\nSat Jan 11 19:56:49 2020 TCP connection established with [AF_INET]10.250.7.77:37460\nSat Jan 11 19:56:49 2020 10.250.7.77:37460 TCP connection established with [AF_INET]100.64.1.1:2716\nSat Jan 11 19:56:49 2020 10.250.7.77:37460 Connection reset, restarting [0]\nSat Jan 11 19:56:49 2020 100.64.1.1:2716 Connection reset, restarting [0]\nSat Jan 11 19:56:53 2020 TCP connection established with [AF_INET]10.250.7.77:21376\nSat Jan 11 19:56:53 2020 10.250.7.77:21376 TCP connection established with [AF_INET]100.64.1.1:62316\nSat Jan 11 19:56:53 2020 10.250.7.77:21376 Connection reset, restarting [0]\nSat Jan 11 19:56:53 2020 100.64.1.1:62316 Connection reset, restarting [0]\nSat Jan 11 19:56:59 2020 TCP connection established with [AF_INET]10.250.7.77:37470\nSat Jan 11 19:56:59 2020 10.250.7.77:37470 TCP connection established with [AF_INET]100.64.1.1:2726\nSat Jan 11 19:56:59 2020 10.250.7.77:37470 Connection reset, restarting [0]\nSat Jan 11 19:56:59 2020 100.64.1.1:2726 Connection reset, restarting [0]\nSat Jan 11 19:57:03 2020 TCP connection established with [AF_INET]10.250.7.77:21394\nSat Jan 11 19:57:03 2020 10.250.7.77:21394 TCP connection established with [AF_INET]100.64.1.1:62334\nSat Jan 11 19:57:03 2020 10.250.7.77:21394 Connection reset, restarting [0]\nSat Jan 11 19:57:03 2020 100.64.1.1:62334 Connection reset, restarting [0]\nSat Jan 11 19:57:09 2020 TCP connection established with [AF_INET]10.250.7.77:37484\nSat Jan 11 19:57:09 2020 10.250.7.77:37484 TCP connection established with [AF_INET]100.64.1.1:2740\nSat Jan 11 19:57:09 2020 10.250.7.77:37484 Connection reset, restarting [0]\nSat Jan 11 19:57:09 2020 100.64.1.1:2740 Connection reset, restarting [0]\nSat Jan 11 19:57:13 2020 TCP connection established with [AF_INET]10.250.7.77:21402\nSat Jan 11 19:57:13 2020 10.250.7.77:21402 TCP connection established with [AF_INET]100.64.1.1:62342\nSat Jan 11 19:57:13 2020 10.250.7.77:21402 Connection reset, restarting [0]\nSat Jan 11 19:57:13 2020 100.64.1.1:62342 Connection reset, restarting [0]\nSat Jan 11 19:57:19 2020 TCP connection established with [AF_INET]10.250.7.77:37500\nSat Jan 11 19:57:19 2020 10.250.7.77:37500 TCP connection established with [AF_INET]100.64.1.1:2756\nSat Jan 11 19:57:19 2020 10.250.7.77:37500 Connection reset, restarting [0]\nSat Jan 11 19:57:19 2020 100.64.1.1:2756 Connection reset, restarting [0]\nSat Jan 11 19:57:23 2020 TCP connection established with [AF_INET]10.250.7.77:21412\nSat Jan 11 19:57:23 2020 10.250.7.77:21412 TCP connection established with [AF_INET]100.64.1.1:62352\nSat Jan 11 19:57:23 2020 10.250.7.77:21412 Connection reset, restarting [0]\nSat Jan 11 19:57:23 2020 100.64.1.1:62352 Connection reset, restarting [0]\nSat Jan 11 19:57:29 2020 TCP connection established with [AF_INET]10.250.7.77:37512\nSat Jan 11 19:57:29 2020 10.250.7.77:37512 TCP connection established with [AF_INET]100.64.1.1:2768\nSat Jan 11 19:57:29 2020 10.250.7.77:37512 Connection reset, restarting [0]\nSat Jan 11 19:57:29 2020 100.64.1.1:2768 Connection reset, restarting [0]\nSat Jan 11 19:57:33 2020 TCP connection established with [AF_INET]10.250.7.77:21418\nSat Jan 11 19:57:33 2020 10.250.7.77:21418 TCP connection established with [AF_INET]100.64.1.1:62358\nSat Jan 11 19:57:33 2020 10.250.7.77:21418 Connection reset, restarting [0]\nSat Jan 11 19:57:33 2020 100.64.1.1:62358 
Connection reset, restarting [0]\nSat Jan 11 19:57:39 2020 TCP connection established with [AF_INET]10.250.7.77:37516\nSat Jan 11 19:57:39 2020 10.250.7.77:37516 TCP connection established with [AF_INET]100.64.1.1:2772\nSat Jan 11 19:57:39 2020 10.250.7.77:37516 Connection reset, restarting [0]\nSat Jan 11 19:57:39 2020 100.64.1.1:2772 Connection reset, restarting [0]\nSat Jan 11 19:57:43 2020 TCP connection established with [AF_INET]10.250.7.77:21426\nSat Jan 11 19:57:43 2020 10.250.7.77:21426 TCP connection established with [AF_INET]100.64.1.1:62366\nSat Jan 11 19:57:43 2020 10.250.7.77:21426 Connection reset, restarting [0]\nSat Jan 11 19:57:43 2020 100.64.1.1:62366 Connection reset, restarting [0]\nSat Jan 11 19:57:49 2020 TCP connection established with [AF_INET]10.250.7.77:37526\nSat Jan 11 19:57:49 2020 10.250.7.77:37526 TCP connection established with [AF_INET]100.64.1.1:2782\nSat Jan 11 19:57:49 2020 10.250.7.77:37526 Connection reset, restarting [0]\nSat Jan 11 19:57:49 2020 100.64.1.1:2782 Connection reset, restarting [0]\nSat Jan 11 19:57:53 2020 TCP connection established with [AF_INET]10.250.7.77:21434\nSat Jan 11 19:57:53 2020 10.250.7.77:21434 TCP connection established with [AF_INET]100.64.1.1:62374\nSat Jan 11 19:57:53 2020 10.250.7.77:21434 Connection reset, restarting [0]\nSat Jan 11 19:57:53 2020 100.64.1.1:62374 Connection reset, restarting [0]\nSat Jan 11 19:57:59 2020 TCP connection established with [AF_INET]10.250.7.77:37536\nSat Jan 11 19:57:59 2020 10.250.7.77:37536 TCP connection established with [AF_INET]100.64.1.1:2792\nSat Jan 11 19:57:59 2020 10.250.7.77:37536 Connection reset, restarting [0]\nSat Jan 11 19:57:59 2020 100.64.1.1:2792 Connection reset, restarting [0]\nSat Jan 11 19:58:03 2020 TCP connection established with [AF_INET]10.250.7.77:21452\nSat Jan 11 19:58:03 2020 10.250.7.77:21452 TCP connection established with [AF_INET]100.64.1.1:62392\nSat Jan 11 19:58:03 2020 10.250.7.77:21452 Connection reset, restarting [0]\nSat Jan 11 19:58:03 2020 100.64.1.1:62392 Connection reset, restarting [0]\nSat Jan 11 19:58:09 2020 TCP connection established with [AF_INET]10.250.7.77:37550\nSat Jan 11 19:58:09 2020 10.250.7.77:37550 TCP connection established with [AF_INET]100.64.1.1:2806\nSat Jan 11 19:58:09 2020 10.250.7.77:37550 Connection reset, restarting [0]\nSat Jan 11 19:58:09 2020 100.64.1.1:2806 Connection reset, restarting [0]\nSat Jan 11 19:58:13 2020 TCP connection established with [AF_INET]10.250.7.77:21456\nSat Jan 11 19:58:13 2020 10.250.7.77:21456 TCP connection established with [AF_INET]100.64.1.1:62396\nSat Jan 11 19:58:13 2020 10.250.7.77:21456 Connection reset, restarting [0]\nSat Jan 11 19:58:13 2020 100.64.1.1:62396 Connection reset, restarting [0]\nSat Jan 11 19:58:19 2020 TCP connection established with [AF_INET]10.250.7.77:37558\nSat Jan 11 19:58:19 2020 10.250.7.77:37558 TCP connection established with [AF_INET]100.64.1.1:2814\nSat Jan 11 19:58:19 2020 10.250.7.77:37558 Connection reset, restarting [0]\nSat Jan 11 19:58:19 2020 100.64.1.1:2814 Connection reset, restarting [0]\nSat Jan 11 19:58:23 2020 TCP connection established with [AF_INET]10.250.7.77:21470\nSat Jan 11 19:58:23 2020 10.250.7.77:21470 TCP connection established with [AF_INET]100.64.1.1:62410\nSat Jan 11 19:58:23 2020 10.250.7.77:21470 Connection reset, restarting [0]\nSat Jan 11 19:58:23 2020 100.64.1.1:62410 Connection reset, restarting [0]\nSat Jan 11 19:58:29 2020 TCP connection established with [AF_INET]10.250.7.77:37566\nSat Jan 11 19:58:29 2020 10.250.7.77:37566 
TCP connection established with [AF_INET]100.64.1.1:2822\nSat Jan 11 19:58:29 2020 10.250.7.77:37566 Connection reset, restarting [0]\nSat Jan 11 19:58:29 2020 100.64.1.1:2822 Connection reset, restarting [0]\nSat Jan 11 19:58:33 2020 TCP connection established with [AF_INET]10.250.7.77:21476\nSat Jan 11 19:58:33 2020 10.250.7.77:21476 TCP connection established with [AF_INET]100.64.1.1:62416\nSat Jan 11 19:58:33 2020 10.250.7.77:21476 Connection reset, restarting [0]\nSat Jan 11 19:58:33 2020 100.64.1.1:62416 Connection reset, restarting [0]\nSat Jan 11 19:58:39 2020 TCP connection established with [AF_INET]10.250.7.77:37604\nSat Jan 11 19:58:39 2020 10.250.7.77:37604 TCP connection established with [AF_INET]100.64.1.1:2860\nSat Jan 11 19:58:39 2020 10.250.7.77:37604 Connection reset, restarting [0]\nSat Jan 11 19:58:39 2020 100.64.1.1:2860 Connection reset, restarting [0]\nSat Jan 11 19:58:43 2020 TCP connection established with [AF_INET]10.250.7.77:21484\nSat Jan 11 19:58:43 2020 10.250.7.77:21484 TCP connection established with [AF_INET]100.64.1.1:62424\nSat Jan 11 19:58:43 2020 10.250.7.77:21484 Connection reset, restarting [0]\nSat Jan 11 19:58:43 2020 100.64.1.1:62424 Connection reset, restarting [0]\nSat Jan 11 19:58:49 2020 TCP connection established with [AF_INET]10.250.7.77:37620\nSat Jan 11 19:58:49 2020 10.250.7.77:37620 TCP connection established with [AF_INET]100.64.1.1:2876\nSat Jan 11 19:58:49 2020 10.250.7.77:37620 Connection reset, restarting [0]\nSat Jan 11 19:58:49 2020 100.64.1.1:2876 Connection reset, restarting [0]\nSat Jan 11 19:58:53 2020 TCP connection established with [AF_INET]10.250.7.77:21492\nSat Jan 11 19:58:53 2020 10.250.7.77:21492 TCP connection established with [AF_INET]100.64.1.1:62432\nSat Jan 11 19:58:53 2020 10.250.7.77:21492 Connection reset, restarting [0]\nSat Jan 11 19:58:53 2020 100.64.1.1:62432 Connection reset, restarting [0]\nSat Jan 11 19:58:59 2020 TCP connection established with [AF_INET]10.250.7.77:37628\nSat Jan 11 19:58:59 2020 10.250.7.77:37628 TCP connection established with [AF_INET]100.64.1.1:2884\nSat Jan 11 19:58:59 2020 10.250.7.77:37628 Connection reset, restarting [0]\nSat Jan 11 19:58:59 2020 100.64.1.1:2884 Connection reset, restarting [0]\nSat Jan 11 19:59:03 2020 TCP connection established with [AF_INET]10.250.7.77:21510\nSat Jan 11 19:59:03 2020 10.250.7.77:21510 TCP connection established with [AF_INET]100.64.1.1:62450\nSat Jan 11 19:59:03 2020 10.250.7.77:21510 Connection reset, restarting [0]\nSat Jan 11 19:59:03 2020 100.64.1.1:62450 Connection reset, restarting [0]\nSat Jan 11 19:59:09 2020 TCP connection established with [AF_INET]10.250.7.77:37644\nSat Jan 11 19:59:09 2020 10.250.7.77:37644 TCP connection established with [AF_INET]100.64.1.1:2900\nSat Jan 11 19:59:09 2020 10.250.7.77:37644 Connection reset, restarting [0]\nSat Jan 11 19:59:09 2020 100.64.1.1:2900 Connection reset, restarting [0]\nSat Jan 11 19:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_VER=2.4.6\nSat Jan 11 19:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_PLAT=linux\nSat Jan 11 19:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_PROTO=2\nSat Jan 11 19:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_LZ4=1\nSat Jan 11 19:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_LZ4v2=1\nSat Jan 11 19:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_LZO=1\nSat Jan 11 19:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_COMP_STUB=1\nSat Jan 11 19:59:10 2020 vpn-seed/100.64.1.1:51060 peer info: IV_COMP_STUBv2=1\nSat Jan 11 19:59:10 2020 
vpn-seed/100.64.1.1:51060 peer info: IV_TCPNL=1\nSat Jan 11 19:59:13 2020 TCP connection established with [AF_INET]100.64.1.1:62454\nSat Jan 11 19:59:13 2020 100.64.1.1:62454 TCP connection established with [AF_INET]10.250.7.77:21514\nSat Jan 11 19:59:13 2020 100.64.1.1:62454 Connection reset, restarting [0]\nSat Jan 11 19:59:13 2020 10.250.7.77:21514 Connection reset, restarting [0]\nSat Jan 11 19:59:19 2020 TCP connection established with [AF_INET]10.250.7.77:37652\nSat Jan 11 19:59:19 2020 10.250.7.77:37652 TCP connection established with [AF_INET]100.64.1.1:2908\nSat Jan 11 19:59:19 2020 10.250.7.77:37652 Connection reset, restarting [0]\nSat Jan 11 19:59:19 2020 100.64.1.1:2908 Connection reset, restarting [0]\nSat Jan 11 19:59:23 2020 TCP connection established with [AF_INET]10.250.7.77:21524\nSat Jan 11 19:59:23 2020 10.250.7.77:21524 TCP connection established with [AF_INET]100.64.1.1:62464\nSat Jan 11 19:59:23 2020 10.250.7.77:21524 Connection reset, restarting [0]\nSat Jan 11 19:59:23 2020 100.64.1.1:62464 Connection reset, restarting [0]\nSat Jan 11 19:59:29 2020 TCP connection established with [AF_INET]10.250.7.77:37660\nSat Jan 11 19:59:29 2020 10.250.7.77:37660 TCP connection established with [AF_INET]100.64.1.1:2916\nSat Jan 11 19:59:29 2020 10.250.7.77:37660 Connection reset, restarting [0]\nSat Jan 11 19:59:29 2020 100.64.1.1:2916 Connection reset, restarting [0]\nSat Jan 11 19:59:33 2020 TCP connection established with [AF_INET]10.250.7.77:21530\nSat Jan 11 19:59:33 2020 10.250.7.77:21530 TCP connection established with [AF_INET]100.64.1.1:62470\nSat Jan 11 19:59:33 2020 10.250.7.77:21530 Connection reset, restarting [0]\nSat Jan 11 19:59:33 2020 100.64.1.1:62470 Connection reset, restarting [0]\nSat Jan 11 19:59:39 2020 TCP connection established with [AF_INET]10.250.7.77:37664\nSat Jan 11 19:59:39 2020 10.250.7.77:37664 TCP connection established with [AF_INET]100.64.1.1:2920\nSat Jan 11 19:59:39 2020 10.250.7.77:37664 Connection reset, restarting [0]\nSat Jan 11 19:59:39 2020 100.64.1.1:2920 Connection reset, restarting [0]\nSat Jan 11 19:59:43 2020 TCP connection established with [AF_INET]10.250.7.77:21544\nSat Jan 11 19:59:43 2020 10.250.7.77:21544 TCP connection established with [AF_INET]100.64.1.1:62484\nSat Jan 11 19:59:43 2020 10.250.7.77:21544 Connection reset, restarting [0]\nSat Jan 11 19:59:43 2020 100.64.1.1:62484 Connection reset, restarting [0]\nSat Jan 11 19:59:49 2020 TCP connection established with [AF_INET]10.250.7.77:37680\nSat Jan 11 19:59:49 2020 10.250.7.77:37680 TCP connection established with [AF_INET]100.64.1.1:2936\nSat Jan 11 19:59:49 2020 10.250.7.77:37680 Connection reset, restarting [0]\nSat Jan 11 19:59:49 2020 100.64.1.1:2936 Connection reset, restarting [0]\nSat Jan 11 19:59:53 2020 TCP connection established with [AF_INET]10.250.7.77:21550\nSat Jan 11 19:59:53 2020 10.250.7.77:21550 TCP connection established with [AF_INET]100.64.1.1:62490\nSat Jan 11 19:59:53 2020 10.250.7.77:21550 Connection reset, restarting [0]\nSat Jan 11 19:59:53 2020 100.64.1.1:62490 Connection reset, restarting [0]\nSat Jan 11 19:59:59 2020 TCP connection established with [AF_INET]10.250.7.77:37692\nSat Jan 11 19:59:59 2020 10.250.7.77:37692 TCP connection established with [AF_INET]100.64.1.1:2948\nSat Jan 11 19:59:59 2020 10.250.7.77:37692 Connection reset, restarting [0]\nSat Jan 11 19:59:59 2020 100.64.1.1:2948 Connection reset, restarting [0]\nSat Jan 11 20:00:03 2020 TCP connection established with [AF_INET]10.250.7.77:21570\nSat Jan 11 20:00:03 2020 
10.250.7.77:21570 TCP connection established with [AF_INET]100.64.1.1:62510\nSat Jan 11 20:00:03 2020 10.250.7.77:21570 Connection reset, restarting [0]\nSat Jan 11 20:00:03 2020 100.64.1.1:62510 Connection reset, restarting [0]\nSat Jan 11 20:00:09 2020 TCP connection established with [AF_INET]10.250.7.77:37708\nSat Jan 11 20:00:09 2020 10.250.7.77:37708 TCP connection established with [AF_INET]100.64.1.1:2964\nSat Jan 11 20:00:09 2020 10.250.7.77:37708 Connection reset, restarting [0]\nSat Jan 11 20:00:09 2020 100.64.1.1:2964 Connection reset, restarting [0]\nSat Jan 11 20:00:13 2020 TCP connection established with [AF_INET]10.250.7.77:21574\nSat Jan 11 20:00:13 2020 10.250.7.77:21574 TCP connection established with [AF_INET]100.64.1.1:62514\nSat Jan 11 20:00:13 2020 10.250.7.77:21574 Connection reset, restarting [0]\nSat Jan 11 20:00:13 2020 100.64.1.1:62514 Connection reset, restarting [0]\nSat Jan 11 20:00:19 2020 TCP connection established with [AF_INET]10.250.7.77:37716\nSat Jan 11 20:00:19 2020 10.250.7.77:37716 TCP connection established with [AF_INET]100.64.1.1:2972\nSat Jan 11 20:00:19 2020 10.250.7.77:37716 Connection reset, restarting [0]\nSat Jan 11 20:00:19 2020 100.64.1.1:2972 Connection reset, restarting [0]\nSat Jan 11 20:00:23 2020 TCP connection established with [AF_INET]10.250.7.77:21584\nSat Jan 11 20:00:23 2020 10.250.7.77:21584 TCP connection established with [AF_INET]100.64.1.1:62524\nSat Jan 11 20:00:23 2020 10.250.7.77:21584 Connection reset, restarting [0]\nSat Jan 11 20:00:23 2020 100.64.1.1:62524 Connection reset, restarting [0]\nSat Jan 11 20:00:29 2020 TCP connection established with [AF_INET]10.250.7.77:37724\nSat Jan 11 20:00:29 2020 10.250.7.77:37724 TCP connection established with [AF_INET]100.64.1.1:2980\nSat Jan 11 20:00:29 2020 10.250.7.77:37724 Connection reset, restarting [0]\nSat Jan 11 20:00:29 2020 100.64.1.1:2980 Connection reset, restarting [0]\nSat Jan 11 20:00:33 2020 TCP connection established with [AF_INET]10.250.7.77:21590\nSat Jan 11 20:00:33 2020 10.250.7.77:21590 TCP connection established with [AF_INET]100.64.1.1:62530\nSat Jan 11 20:00:33 2020 10.250.7.77:21590 Connection reset, restarting [0]\nSat Jan 11 20:00:33 2020 100.64.1.1:62530 Connection reset, restarting [0]\nSat Jan 11 20:00:39 2020 TCP connection established with [AF_INET]10.250.7.77:37730\nSat Jan 11 20:00:39 2020 10.250.7.77:37730 TCP connection established with [AF_INET]100.64.1.1:2986\nSat Jan 11 20:00:39 2020 10.250.7.77:37730 Connection reset, restarting [0]\nSat Jan 11 20:00:39 2020 100.64.1.1:2986 Connection reset, restarting [0]\nSat Jan 11 20:00:39 2020 TCP connection established with [AF_INET]100.64.1.1:2986\nSat Jan 11 20:00:43 2020 TCP connection established with [AF_INET]10.250.7.77:21600\nSat Jan 11 20:00:43 2020 10.250.7.77:21600 TCP connection established with [AF_INET]100.64.1.1:62540\nSat Jan 11 20:00:43 2020 10.250.7.77:21600 Connection reset, restarting [0]\nSat Jan 11 20:00:43 2020 100.64.1.1:62540 Connection reset, restarting [0]\nSat Jan 11 20:00:49 2020 TCP connection established with [AF_INET]10.250.7.77:37742\nSat Jan 11 20:00:49 2020 10.250.7.77:37742 TCP connection established with [AF_INET]100.64.1.1:2998\nSat Jan 11 20:00:49 2020 10.250.7.77:37742 Connection reset, restarting [0]\nSat Jan 11 20:00:49 2020 100.64.1.1:2998 Connection reset, restarting [0]\nSat Jan 11 20:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_VER=2.4.6\nSat Jan 11 20:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_PLAT=linux\nSat Jan 11 20:00:51 2020 
vpn-seed/100.64.1.1:51770 peer info: IV_PROTO=2\nSat Jan 11 20:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZ4=1\nSat Jan 11 20:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZ4v2=1\nSat Jan 11 20:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_LZO=1\nSat Jan 11 20:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_COMP_STUB=1\nSat Jan 11 20:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_COMP_STUBv2=1\nSat Jan 11 20:00:51 2020 vpn-seed/100.64.1.1:51770 peer info: IV_TCPNL=1\nSat Jan 11 20:00:53 2020 TCP connection established with [AF_INET]10.250.7.77:21610\nSat Jan 11 20:00:53 2020 10.250.7.77:21610 TCP connection established with [AF_INET]100.64.1.1:62550\nSat Jan 11 20:00:53 2020 10.250.7.77:21610 Connection reset, restarting [0]\nSat Jan 11 20:00:53 2020 100.64.1.1:62550 Connection reset, restarting [0]\nSat Jan 11 20:00:59 2020 TCP connection established with [AF_INET]10.250.7.77:37750\nSat Jan 11 20:00:59 2020 10.250.7.77:37750 TCP connection established with [AF_INET]100.64.1.1:3006\nSat Jan 11 20:00:59 2020 10.250.7.77:37750 Connection reset, restarting [0]\nSat Jan 11 20:00:59 2020 100.64.1.1:3006 Connection reset, restarting [0]\nSat Jan 11 20:01:03 2020 TCP connection established with [AF_INET]10.250.7.77:21628\nSat Jan 11 20:01:03 2020 10.250.7.77:21628 TCP connection established with [AF_INET]100.64.1.1:62568\nSat Jan 11 20:01:03 2020 10.250.7.77:21628 Connection reset, restarting [0]\nSat Jan 11 20:01:03 2020 100.64.1.1:62568 Connection reset, restarting [0]\nSat Jan 11 20:01:09 2020 TCP connection established with [AF_INET]10.250.7.77:37764\nSat Jan 11 20:01:09 2020 10.250.7.77:37764 TCP connection established with [AF_INET]100.64.1.1:3020\nSat Jan 11 20:01:09 2020 10.250.7.77:37764 Connection reset, restarting [0]\nSat Jan 11 20:01:09 2020 100.64.1.1:3020 Connection reset, restarting [0]\nSat Jan 11 20:01:13 2020 TCP connection established with [AF_INET]10.250.7.77:21634\nSat Jan 11 20:01:13 2020 10.250.7.77:21634 TCP connection established with [AF_INET]100.64.1.1:62574\nSat Jan 11 20:01:13 2020 10.250.7.77:21634 Connection reset, restarting [0]\nSat Jan 11 20:01:13 2020 100.64.1.1:62574 Connection reset, restarting [0]\nSat Jan 11 20:01:19 2020 TCP connection established with [AF_INET]10.250.7.77:37776\nSat Jan 11 20:01:19 2020 10.250.7.77:37776 TCP connection established with [AF_INET]100.64.1.1:3032\nSat Jan 11 20:01:19 2020 10.250.7.77:37776 Connection reset, restarting [0]\nSat Jan 11 20:01:19 2020 100.64.1.1:3032 Connection reset, restarting [0]\nSat Jan 11 20:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_VER=2.4.6\nSat Jan 11 20:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_PLAT=linux\nSat Jan 11 20:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_PROTO=2\nSat Jan 11 20:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZ4=1\nSat Jan 11 20:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZ4v2=1\nSat Jan 11 20:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_LZO=1\nSat Jan 11 20:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_COMP_STUB=1\nSat Jan 11 20:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_COMP_STUBv2=1\nSat Jan 11 20:01:21 2020 vpn-seed/10.250.7.77:22572 peer info: IV_TCPNL=1\nSat Jan 11 20:01:23 2020 TCP connection established with [AF_INET]10.250.7.77:21650\nSat Jan 11 20:01:23 2020 10.250.7.77:21650 TCP connection established with [AF_INET]100.64.1.1:62590\nSat Jan 11 20:01:23 2020 10.250.7.77:21650 Connection reset, restarting [0]\nSat Jan 11 20:01:23 2020 100.64.1.1:62590 Connection reset, restarting 
[0]\nSat Jan 11 20:01:29 2020 TCP connection established with [AF_INET]10.250.7.77:37784\nSat Jan 11 20:01:29 2020 10.250.7.77:37784 TCP connection established with [AF_INET]100.64.1.1:3040\nSat Jan 11 20:01:29 2020 10.250.7.77:37784 Connection reset, restarting [0]\nSat Jan 11 20:01:29 2020 100.64.1.1:3040 Connection reset, restarting [0]\nSat Jan 11 20:01:33 2020 TCP connection established with [AF_INET]10.250.7.77:21658\nSat Jan 11 20:01:33 2020 10.250.7.77:21658 TCP connection established with [AF_INET]100.64.1.1:62598\nSat Jan 11 20:01:33 2020 10.250.7.77:21658 Connection reset, restarting [0]\nSat Jan 11 20:01:33 2020 100.64.1.1:62598 Connection reset, restarting [0]\nSat Jan 11 20:01:39 2020 100.64.1.1:2986 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)\nSat Jan 11 20:01:39 2020 100.64.1.1:2986 TLS Error: TLS handshake failed\nSat Jan 11 20:01:39 2020 100.64.1.1:2986 Fatal TLS error (check_tls_errors_co), restarting\nSat Jan 11 20:01:39 2020 TCP connection established with [AF_INET]10.250.7.77:37790\nSat Jan 11 20:01:39 2020 10.250.7.77:37790 TCP connection established with [AF_INET]100.64.1.1:3046\nSat Jan 11 20:01:39 2020 10.250.7.77:37790 Connection reset, restarting [0]\nSat Jan 11 20:01:39 2020 100.64.1.1:3046 Connection reset, restarting [0]\nSat Jan 11 20:01:43 2020 TCP connection established with [AF_INET]10.250.7.77:21666\nSat Jan 11 20:01:43 2020 10.250.7.77:21666 TCP connection established with [AF_INET]100.64.1.1:62606\nSat Jan 11 20:01:43 2020 10.250.7.77:21666 Connection reset, restarting [0]\nSat Jan 11 20:01:43 2020 100.64.1.1:62606 Connection reset, restarting [0]\nSat Jan 11 20:01:49 2020 TCP connection established with [AF_INET]10.250.7.77:37800\nSat Jan 11 20:01:49 2020 10.250.7.77:37800 TCP connection established with [AF_INET]100.64.1.1:3056\nSat Jan 11 20:01:49 2020 10.250.7.77:37800 Connection reset, restarting [0]\nSat Jan 11 20:01:49 2020 100.64.1.1:3056 Connection reset, restarting [0]\nSat Jan 11 20:01:50 2020 vpn-seed/100.64.1.1:58820 peer info: IV_VER=2.4.6\nSat Jan 11 20:01:50 2020 vpn-seed/100.64.1.1:58820 peer info: IV_PLAT=linux\nSat Jan 11 20:01:50 2020 vpn-seed/100.64.1.1:58820 peer info: IV_PROTO=2\nSat Jan 11 20:01:50 2020 vpn-seed/100.64.1.1:58820 peer info: IV_LZ4=1\nSat Jan 11 20:01:50 2020 vpn-seed/100.64.1.1:58820 peer info: IV_LZ4v2=1\nSat Jan 11 20:01:50 2020 vpn-seed/100.64.1.1:58820 peer info: IV_LZO=1\nSat Jan 11 20:01:50 2020 vpn-seed/100.64.1.1:58820 peer info: IV_COMP_STUB=1\nSat Jan 11 20:01:50 2020 vpn-seed/100.64.1.1:58820 peer info: IV_COMP_STUBv2=1\nSat Jan 11 20:01:50 2020 vpn-seed/100.64.1.1:58820 peer info: IV_TCPNL=1\nSat Jan 11 20:01:53 2020 TCP connection established with [AF_INET]10.250.7.77:21672\nSat Jan 11 20:01:53 2020 10.250.7.77:21672 TCP connection established with [AF_INET]100.64.1.1:62612\nSat Jan 11 20:01:53 2020 10.250.7.77:21672 Connection reset, restarting [0]\nSat Jan 11 20:01:53 2020 100.64.1.1:62612 Connection reset, restarting [0]\nSat Jan 11 20:01:59 2020 TCP connection established with [AF_INET]10.250.7.77:37808\nSat Jan 11 20:01:59 2020 10.250.7.77:37808 TCP connection established with [AF_INET]100.64.1.1:3064\nSat Jan 11 20:01:59 2020 10.250.7.77:37808 Connection reset, restarting [0]\nSat Jan 11 20:01:59 2020 100.64.1.1:3064 Connection reset, restarting [0]\nSat Jan 11 20:02:03 2020 TCP connection established with [AF_INET]10.250.7.77:21690\nSat Jan 11 20:02:03 2020 10.250.7.77:21690 TCP connection established with [AF_INET]100.64.1.1:62630\nSat 
Jan 11 20:02:03 2020 10.250.7.77:21690 Connection reset, restarting [0]\nSat Jan 11 20:02:03 2020 100.64.1.1:62630 Connection reset, restarting [0]\nSat Jan 11 20:02:09 2020 TCP connection established with [AF_INET]10.250.7.77:37822\nSat Jan 11 20:02:09 2020 10.250.7.77:37822 TCP connection established with [AF_INET]100.64.1.1:3078\nSat Jan 11 20:02:09 2020 10.250.7.77:37822 Connection reset, restarting [0]\nSat Jan 11 20:02:09 2020 100.64.1.1:3078 Connection reset, restarting [0]\nSat Jan 11 20:02:13 2020 TCP connection established with [AF_INET]10.250.7.77:21698\nSat Jan 11 20:02:13 2020 10.250.7.77:21698 TCP connection established with [AF_INET]100.64.1.1:62638\nSat Jan 11 20:02:13 2020 10.250.7.77:21698 Connection reset, restarting [0]\nSat Jan 11 20:02:13 2020 100.64.1.1:62638 Connection reset, restarting [0]\nSat Jan 11 20:02:19 2020 TCP connection established with [AF_INET]10.250.7.77:37830\nSat Jan 11 20:02:19 2020 10.250.7.77:37830 TCP connection established with [AF_INET]100.64.1.1:3086\nSat Jan 11 20:02:19 2020 10.250.7.77:37830 Connection reset, restarting [0]\nSat Jan 11 20:02:19 2020 100.64.1.1:3086 Connection reset, restarting [0]\nSat Jan 11 20:02:23 2020 TCP connection established with [AF_INET]10.250.7.77:21708\nSat Jan 11 20:02:23 2020 10.250.7.77:21708 TCP connection established with [AF_INET]100.64.1.1:62648\nSat Jan 11 20:02:23 2020 10.250.7.77:21708 Connection reset, restarting [0]\nSat Jan 11 20:02:23 2020 100.64.1.1:62648 Connection reset, restarting [0]\nSat Jan 11 20:02:29 2020 TCP connection established with [AF_INET]10.250.7.77:37842\nSat Jan 11 20:02:29 2020 10.250.7.77:37842 TCP connection established with [AF_INET]100.64.1.1:3098\nSat Jan 11 20:02:29 2020 10.250.7.77:37842 Connection reset, restarting [0]\nSat Jan 11 20:02:29 2020 100.64.1.1:3098 Connection reset, restarting [0]\nSat Jan 11 20:02:33 2020 TCP connection established with [AF_INET]10.250.7.77:21716\nSat Jan 11 20:02:33 2020 10.250.7.77:21716 TCP connection established with [AF_INET]100.64.1.1:62656\nSat Jan 11 20:02:33 2020 10.250.7.77:21716 Connection reset, restarting [0]\nSat Jan 11 20:02:33 2020 100.64.1.1:62656 Connection reset, restarting [0]\nSat Jan 11 20:02:39 2020 TCP connection established with [AF_INET]10.250.7.77:37848\nSat Jan 11 20:02:39 2020 10.250.7.77:37848 TCP connection established with [AF_INET]100.64.1.1:3104\nSat Jan 11 20:02:39 2020 10.250.7.77:37848 Connection reset, restarting [0]\nSat Jan 11 20:02:39 2020 100.64.1.1:3104 Connection reset, restarting [0]\nSat Jan 11 20:02:43 2020 TCP connection established with [AF_INET]10.250.7.77:21724\nSat Jan 11 20:02:43 2020 10.250.7.77:21724 TCP connection established with [AF_INET]100.64.1.1:62664\nSat Jan 11 20:02:43 2020 10.250.7.77:21724 Connection reset, restarting [0]\nSat Jan 11 20:02:43 2020 100.64.1.1:62664 Connection reset, restarting [0]\nSat Jan 11 20:02:49 2020 TCP connection established with [AF_INET]10.250.7.77:37858\nSat Jan 11 20:02:49 2020 10.250.7.77:37858 TCP connection established with [AF_INET]100.64.1.1:3114\nSat Jan 11 20:02:49 2020 10.250.7.77:37858 Connection reset, restarting [0]\nSat Jan 11 20:02:49 2020 100.64.1.1:3114 Connection reset, restarting [0]\nSat Jan 11 20:02:53 2020 TCP connection established with [AF_INET]10.250.7.77:21730\nSat Jan 11 20:02:53 2020 10.250.7.77:21730 TCP connection established with [AF_INET]100.64.1.1:62670\nSat Jan 11 20:02:53 2020 10.250.7.77:21730 Connection reset, restarting [0]\nSat Jan 11 20:02:53 2020 100.64.1.1:62670 Connection reset, restarting [0]\nSat Jan 11 
20:02:59 2020 TCP connection established with [AF_INET]10.250.7.77:37866\nSat Jan 11 20:02:59 2020 10.250.7.77:37866 TCP connection established with [AF_INET]100.64.1.1:3122\nSat Jan 11 20:02:59 2020 10.250.7.77:37866 Connection reset, restarting [0]\nSat Jan 11 20:02:59 2020 100.64.1.1:3122 Connection reset, restarting [0]\nSat Jan 11 20:03:03 2020 TCP connection established with [AF_INET]10.250.7.77:21748\nSat Jan 11 20:03:03 2020 10.250.7.77:21748 TCP connection established with [AF_INET]100.64.1.1:62688\nSat Jan 11 20:03:03 2020 10.250.7.77:21748 Connection reset, restarting [0]\nSat Jan 11 20:03:03 2020 100.64.1.1:62688 Connection reset, restarting [0]\nSat Jan 11 20:03:09 2020 TCP connection established with [AF_INET]10.250.7.77:37880\nSat Jan 11 20:03:09 2020 10.250.7.77:37880 TCP connection established with [AF_INET]100.64.1.1:3136\nSat Jan 11 20:03:09 2020 10.250.7.77:37880 Connection reset, restarting [0]\nSat Jan 11 20:03:09 2020 100.64.1.1:3136 Connection reset, restarting [0]\nSat Jan 11 20:03:13 2020 TCP connection established with [AF_INET]10.250.7.77:21752\nSat Jan 11 20:03:13 2020 10.250.7.77:21752 TCP connection established with [AF_INET]100.64.1.1:62692\nSat Jan 11 20:03:13 2020 10.250.7.77:21752 Connection reset, restarting [0]\nSat Jan 11 20:03:13 2020 100.64.1.1:62692 Connection reset, restarting [0]\nSat Jan 11 20:03:19 2020 TCP connection established with [AF_INET]10.250.7.77:37888\nSat Jan 11 20:03:19 2020 10.250.7.77:37888 TCP connection established with [AF_INET]100.64.1.1:3144\nSat Jan 11 20:03:19 2020 10.250.7.77:37888 Connection reset, restarting [0]\nSat Jan 11 20:03:19 2020 100.64.1.1:3144 Connection reset, restarting [0]\nSat Jan 11 20:03:23 2020 TCP connection established with [AF_INET]10.250.7.77:21766\nSat Jan 11 20:03:23 2020 10.250.7.77:21766 TCP connection established with [AF_INET]100.64.1.1:62706\nSat Jan 11 20:03:23 2020 10.250.7.77:21766 Connection reset, restarting [0]\nSat Jan 11 20:03:23 2020 100.64.1.1:62706 Connection reset, restarting [0]\nSat Jan 11 20:03:29 2020 TCP connection established with [AF_INET]10.250.7.77:37896\nSat Jan 11 20:03:29 2020 10.250.7.77:37896 TCP connection established with [AF_INET]100.64.1.1:3152\nSat Jan 11 20:03:29 2020 10.250.7.77:37896 Connection reset, restarting [0]\nSat Jan 11 20:03:29 2020 100.64.1.1:3152 Connection reset, restarting [0]\nSat Jan 11 20:03:33 2020 TCP connection established with [AF_INET]10.250.7.77:21774\nSat Jan 11 20:03:33 2020 10.250.7.77:21774 TCP connection established with [AF_INET]100.64.1.1:62714\nSat Jan 11 20:03:33 2020 10.250.7.77:21774 Connection reset, restarting [0]\nSat Jan 11 20:03:33 2020 100.64.1.1:62714 Connection reset, restarting [0]\nSat Jan 11 20:03:39 2020 TCP connection established with [AF_INET]10.250.7.77:37902\nSat Jan 11 20:03:39 2020 10.250.7.77:37902 TCP connection established with [AF_INET]100.64.1.1:3158\nSat Jan 11 20:03:39 2020 10.250.7.77:37902 Connection reset, restarting [0]\nSat Jan 11 20:03:39 2020 100.64.1.1:3158 Connection reset, restarting [0]\nSat Jan 11 20:03:43 2020 TCP connection established with [AF_INET]10.250.7.77:21782\nSat Jan 11 20:03:43 2020 10.250.7.77:21782 TCP connection established with [AF_INET]100.64.1.1:62722\nSat Jan 11 20:03:43 2020 10.250.7.77:21782 Connection reset, restarting [0]\nSat Jan 11 20:03:43 2020 100.64.1.1:62722 Connection reset, restarting [0]\nSat Jan 11 20:03:49 2020 TCP connection established with [AF_INET]10.250.7.77:37916\nSat Jan 11 20:03:49 2020 10.250.7.77:37916 TCP connection established with 
[AF_INET]100.64.1.1:3172\nSat Jan 11 20:03:49 2020 10.250.7.77:37916 Connection reset, restarting [0]\nSat Jan 11 20:03:49 2020 100.64.1.1:3172 Connection reset, restarting [0]\nSat Jan 11 20:03:53 2020 TCP connection established with [AF_INET]10.250.7.77:21788\nSat Jan 11 20:03:53 2020 10.250.7.77:21788 TCP connection established with [AF_INET]100.64.1.1:62728\nSat Jan 11 20:03:53 2020 10.250.7.77:21788 Connection reset, restarting [0]\nSat Jan 11 20:03:53 2020 100.64.1.1:62728 Connection reset, restarting [0]\nSat Jan 11 20:03:59 2020 TCP connection established with [AF_INET]10.250.7.77:37924\nSat Jan 11 20:03:59 2020 10.250.7.77:37924 TCP connection established with [AF_INET]100.64.1.1:3180\nSat Jan 11 20:03:59 2020 10.250.7.77:37924 Connection reset, restarting [0]\nSat Jan 11 20:03:59 2020 100.64.1.1:3180 Connection reset, restarting [0]\nSat Jan 11 20:04:03 2020 TCP connection established with [AF_INET]10.250.7.77:21802\nSat Jan 11 20:04:03 2020 10.250.7.77:21802 TCP connection established with [AF_INET]100.64.1.1:62742\nSat Jan 11 20:04:03 2020 10.250.7.77:21802 Connection reset, restarting [0]\nSat Jan 11 20:04:03 2020 100.64.1.1:62742 Connection reset, restarting [0]\nSat Jan 11 20:04:09 2020 TCP connection established with [AF_INET]10.250.7.77:37938\nSat Jan 11 20:04:09 2020 10.250.7.77:37938 TCP connection established with [AF_INET]100.64.1.1:3194\nSat Jan 11 20:04:09 2020 10.250.7.77:37938 Connection reset, restarting [0]\nSat Jan 11 20:04:09 2020 100.64.1.1:3194 Connection reset, restarting [0]\nSat Jan 11 20:04:13 2020 TCP connection established with [AF_INET]10.250.7.77:21810\nSat Jan 11 20:04:13 2020 10.250.7.77:21810 TCP connection established with [AF_INET]100.64.1.1:62750\nSat Jan 11 20:04:13 2020 10.250.7.77:21810 Connection reset, restarting [0]\nSat Jan 11 20:04:13 2020 100.64.1.1:62750 Connection reset, restarting [0]\nSat Jan 11 20:04:19 2020 TCP connection established with [AF_INET]10.250.7.77:37946\nSat Jan 11 20:04:19 2020 10.250.7.77:37946 TCP connection established with [AF_INET]100.64.1.1:3202\nSat Jan 11 20:04:19 2020 10.250.7.77:37946 Connection reset, restarting [0]\nSat Jan 11 20:04:19 2020 100.64.1.1:3202 Connection reset, restarting [0]\nSat Jan 11 20:04:23 2020 TCP connection established with [AF_INET]10.250.7.77:21818\nSat Jan 11 20:04:23 2020 10.250.7.77:21818 TCP connection established with [AF_INET]100.64.1.1:62758\nSat Jan 11 20:04:23 2020 10.250.7.77:21818 Connection reset, restarting [0]\nSat Jan 11 20:04:23 2020 100.64.1.1:62758 Connection reset, restarting [0]\nSat Jan 11 20:04:29 2020 TCP connection established with [AF_INET]10.250.7.77:37956\nSat Jan 11 20:04:29 2020 10.250.7.77:37956 TCP connection established with [AF_INET]100.64.1.1:3212\nSat Jan 11 20:04:29 2020 10.250.7.77:37956 Connection reset, restarting [0]\nSat Jan 11 20:04:29 2020 100.64.1.1:3212 Connection reset, restarting [0]\nSat Jan 11 20:04:33 2020 TCP connection established with [AF_INET]10.250.7.77:21828\nSat Jan 11 20:04:33 2020 10.250.7.77:21828 TCP connection established with [AF_INET]100.64.1.1:62768\nSat Jan 11 20:04:33 2020 10.250.7.77:21828 Connection reset, restarting [0]\nSat Jan 11 20:04:33 2020 100.64.1.1:62768 Connection reset, restarting [0]\nSat Jan 11 20:04:39 2020 TCP connection established with [AF_INET]10.250.7.77:37960\nSat Jan 11 20:04:39 2020 10.250.7.77:37960 TCP connection established with [AF_INET]100.64.1.1:3216\nSat Jan 11 20:04:39 2020 10.250.7.77:37960 Connection reset, restarting [0]\nSat Jan 11 20:04:39 2020 100.64.1.1:3216 Connection reset, 
restarting [0]\nSat Jan 11 20:04:43 2020 TCP connection established with [AF_INET]10.250.7.77:21836\nSat Jan 11 20:04:43 2020 10.250.7.77:21836 TCP connection established with [AF_INET]100.64.1.1:62776\nSat Jan 11 20:04:43 2020 10.250.7.77:21836 Connection reset, restarting [0]\nSat Jan 11 20:04:43 2020 100.64.1.1:62776 Connection reset, restarting [0]\nSat Jan 11 20:04:49 2020 TCP connection established with [AF_INET]10.250.7.77:37970\nSat Jan 11 20:04:49 2020 10.250.7.77:37970 TCP connection established with [AF_INET]100.64.1.1:3226\nSat Jan 11 20:04:49 2020 10.250.7.77:37970 Connection reset, restarting [0]\nSat Jan 11 20:04:49 2020 100.64.1.1:3226 Connection reset, restarting [0]\nSat Jan 11 20:04:53 2020 TCP connection established with [AF_INET]10.250.7.77:21846\nSat Jan 11 20:04:53 2020 10.250.7.77:21846 TCP connection established with [AF_INET]100.64.1.1:62786\nSat Jan 11 20:04:53 2020 10.250.7.77:21846 Connection reset, restarting [0]\nSat Jan 11 20:04:53 2020 100.64.1.1:62786 Connection reset, restarting [0]\nSat Jan 11 20:04:59 2020 TCP connection established with [AF_INET]10.250.7.77:37982\nSat Jan 11 20:04:59 2020 10.250.7.77:37982 TCP connection established with [AF_INET]100.64.1.1:3238\nSat Jan 11 20:04:59 2020 10.250.7.77:37982 Connection reset, restarting [0]\nSat Jan 11 20:04:59 2020 100.64.1.1:3238 Connection reset, restarting [0]\nSat Jan 11 20:05:03 2020 TCP connection established with [AF_INET]10.250.7.77:21860\nSat Jan 11 20:05:03 2020 10.250.7.77:21860 TCP connection established with [AF_INET]100.64.1.1:62800\nSat Jan 11 20:05:03 2020 10.250.7.77:21860 Connection reset, restarting [0]\nSat Jan 11 20:05:03 2020 100.64.1.1:62800 Connection reset, restarting [0]\nSat Jan 11 20:05:09 2020 TCP connection established with [AF_INET]10.250.7.77:37996\nSat Jan 11 20:05:09 2020 10.250.7.77:37996 TCP connection established with [AF_INET]100.64.1.1:3252\nSat Jan 11 20:05:09 2020 10.250.7.77:37996 Connection reset, restarting [0]\nSat Jan 11 20:05:09 2020 100.64.1.1:3252 Connection reset, restarting [0]\nSat Jan 11 20:05:13 2020 TCP connection established with [AF_INET]10.250.7.77:21868\nSat Jan 11 20:05:13 2020 10.250.7.77:21868 TCP connection established with [AF_INET]100.64.1.1:62808\nSat Jan 11 20:05:13 2020 10.250.7.77:21868 Connection reset, restarting [0]\nSat Jan 11 20:05:13 2020 100.64.1.1:62808 Connection reset, restarting [0]\nSat Jan 11 20:05:19 2020 TCP connection established with [AF_INET]10.250.7.77:38004\nSat Jan 11 20:05:19 2020 10.250.7.77:38004 TCP connection established with [AF_INET]100.64.1.1:3260\nSat Jan 11 20:05:19 2020 10.250.7.77:38004 Connection reset, restarting [0]\nSat Jan 11 20:05:19 2020 100.64.1.1:3260 Connection reset, restarting [0]\nSat Jan 11 20:05:23 2020 TCP connection established with [AF_INET]10.250.7.77:21876\nSat Jan 11 20:05:23 2020 10.250.7.77:21876 TCP connection established with [AF_INET]100.64.1.1:62816\nSat Jan 11 20:05:23 2020 10.250.7.77:21876 Connection reset, restarting [0]\nSat Jan 11 20:05:23 2020 100.64.1.1:62816 Connection reset, restarting [0]\nSat Jan 11 20:05:29 2020 TCP connection established with [AF_INET]10.250.7.77:38014\nSat Jan 11 20:05:29 2020 10.250.7.77:38014 Connection reset, restarting [0]\nSat Jan 11 20:05:29 2020 TCP connection established with [AF_INET]100.64.1.1:3270\nSat Jan 11 20:05:29 2020 100.64.1.1:3270 Connection reset, restarting [0]\nSat Jan 11 20:05:33 2020 TCP connection established with [AF_INET]10.250.7.77:21886\nSat Jan 11 20:05:33 2020 10.250.7.77:21886 Connection reset, restarting [0]\nSat 
Jan 11 20:05:33 2020 TCP connection established with [AF_INET]100.64.1.1:62826\nSat Jan 11 20:05:33 2020 100.64.1.1:62826 Connection reset, restarting [0]\nSat Jan 11 20:05:39 2020 TCP connection established with [AF_INET]10.250.7.77:38018\nSat Jan 11 20:05:39 2020 10.250.7.77:38018 TCP connection established with [AF_INET]100.64.1.1:3274\nSat Jan 11 20:05:39 2020 10.250.7.77:38018 Connection reset, restarting [0]\nSat Jan 11 20:05:39 2020 100.64.1.1:3274 Connection reset, restarting [0]\nSat Jan 11 20:05:43 2020 TCP connection established with [AF_INET]10.250.7.77:21924\nSat Jan 11 20:05:43 2020 10.250.7.77:21924 TCP connection established with [AF_INET]100.64.1.1:62864\nSat Jan 11 20:05:43 2020 10.250.7.77:21924 Connection reset, restarting [0]\nSat Jan 11 20:05:43 2020 100.64.1.1:62864 Connection reset, restarting [0]\nSat Jan 11 20:05:49 2020 TCP connection established with [AF_INET]10.250.7.77:38028\nSat Jan 11 20:05:49 2020 10.250.7.77:38028 TCP connection established with [AF_INET]100.64.1.1:3284\nSat Jan 11 20:05:49 2020 10.250.7.77:38028 Connection reset, restarting [0]\nSat Jan 11 20:05:49 2020 100.64.1.1:3284 Connection reset, restarting [0]\nSat Jan 11 20:05:53 2020 TCP connection established with [AF_INET]10.250.7.77:21938\nSat Jan 11 20:05:53 2020 10.250.7.77:21938 TCP connection established with [AF_INET]100.64.1.1:62878\nSat Jan 11 20:05:53 2020 10.250.7.77:21938 Connection reset, restarting [0]\nSat Jan 11 20:05:53 2020 100.64.1.1:62878 Connection reset, restarting [0]\nSat Jan 11 20:05:59 2020 TCP connection established with [AF_INET]10.250.7.77:38036\nSat Jan 11 20:05:59 2020 10.250.7.77:38036 TCP connection established with [AF_INET]100.64.1.1:3292\nSat Jan 11 20:05:59 2020 10.250.7.77:38036 Connection reset, restarting [0]\nSat Jan 11 20:05:59 2020 100.64.1.1:3292 Connection reset, restarting [0]\nSat Jan 11 20:06:03 2020 TCP connection established with [AF_INET]10.250.7.77:21952\nSat Jan 11 20:06:03 2020 10.250.7.77:21952 TCP connection established with [AF_INET]100.64.1.1:62892\nSat Jan 11 20:06:03 2020 10.250.7.77:21952 Connection reset, restarting [0]\nSat Jan 11 20:06:03 2020 100.64.1.1:62892 Connection reset, restarting [0]\nSat Jan 11 20:06:09 2020 TCP connection established with [AF_INET]10.250.7.77:38050\nSat Jan 11 20:06:09 2020 10.250.7.77:38050 TCP connection established with [AF_INET]100.64.1.1:3306\nSat Jan 11 20:06:09 2020 10.250.7.77:38050 Connection reset, restarting [0]\nSat Jan 11 20:06:09 2020 100.64.1.1:3306 Connection reset, restarting [0]\nSat Jan 11 20:06:13 2020 TCP connection established with [AF_INET]10.250.7.77:21964\nSat Jan 11 20:06:13 2020 10.250.7.77:21964 TCP connection established with [AF_INET]100.64.1.1:62904\nSat Jan 11 20:06:13 2020 10.250.7.77:21964 Connection reset, restarting [0]\nSat Jan 11 20:06:13 2020 100.64.1.1:62904 Connection reset, restarting [0]\nSat Jan 11 20:06:19 2020 TCP connection established with [AF_INET]10.250.7.77:38062\nSat Jan 11 20:06:19 2020 10.250.7.77:38062 TCP connection established with [AF_INET]100.64.1.1:3318\nSat Jan 11 20:06:19 2020 10.250.7.77:38062 Connection reset, restarting [0]\nSat Jan 11 20:06:19 2020 100.64.1.1:3318 Connection reset, restarting [0]\nSat Jan 11 20:06:23 2020 TCP connection established with [AF_INET]10.250.7.77:21970\nSat Jan 11 20:06:23 2020 10.250.7.77:21970 TCP connection established with [AF_INET]100.64.1.1:62910\nSat Jan 11 20:06:23 2020 10.250.7.77:21970 Connection reset, restarting [0]\nSat Jan 11 20:06:23 2020 100.64.1.1:62910 Connection reset, restarting [0]\nSat Jan 
11 20:06:29 2020 TCP connection established with [AF_INET]10.250.7.77:38072\nSat Jan 11 20:06:29 2020 10.250.7.77:38072 TCP connection established with [AF_INET]100.64.1.1:3328\nSat Jan 11 20:06:29 2020 10.250.7.77:38072 Connection reset, restarting [0]\nSat Jan 11 20:06:29 2020 100.64.1.1:3328 Connection reset, restarting [0]\nSat Jan 11 20:06:33 2020 TCP connection established with [AF_INET]10.250.7.77:21986\nSat Jan 11 20:06:33 2020 10.250.7.77:21986 TCP connection established with [AF_INET]100.64.1.1:62926\nSat Jan 11 20:06:33 2020 10.250.7.77:21986 Connection reset, restarting [0]\nSat Jan 11 20:06:33 2020 100.64.1.1:62926 Connection reset, restarting [0]\nSat Jan 11 20:06:39 2020 TCP connection established with [AF_INET]10.250.7.77:38076\nSat Jan 11 20:06:39 2020 10.250.7.77:38076 TCP connection established with [AF_INET]100.64.1.1:3332\nSat Jan 11 20:06:39 2020 10.250.7.77:38076 Connection reset, restarting [0]\nSat Jan 11 20:06:39 2020 100.64.1.1:3332 Connection reset, restarting [0]\nSat Jan 11 20:06:43 2020 TCP connection established with [AF_INET]10.250.7.77:21990\nSat Jan 11 20:06:43 2020 10.250.7.77:21990 TCP connection established with [AF_INET]100.64.1.1:62930\nSat Jan 11 20:06:43 2020 10.250.7.77:21990 Connection reset, restarting [0]\nSat Jan 11 20:06:43 2020 100.64.1.1:62930 Connection reset, restarting [0]\nSat Jan 11 20:06:49 2020 TCP connection established with [AF_INET]10.250.7.77:38086\nSat Jan 11 20:06:49 2020 10.250.7.77:38086 TCP connection established with [AF_INET]100.64.1.1:3342\nSat Jan 11 20:06:49 2020 10.250.7.77:38086 Connection reset, restarting [0]\nSat Jan 11 20:06:49 2020 100.64.1.1:3342 Connection reset, restarting [0]\nSat Jan 11 20:06:53 2020 TCP connection established with [AF_INET]10.250.7.77:22000\nSat Jan 11 20:06:53 2020 10.250.7.77:22000 TCP connection established with [AF_INET]100.64.1.1:62940\nSat Jan 11 20:06:53 2020 10.250.7.77:22000 Connection reset, restarting [0]\nSat Jan 11 20:06:53 2020 100.64.1.1:62940 Connection reset, restarting [0]\nSat Jan 11 20:06:59 2020 TCP connection established with [AF_INET]10.250.7.77:38094\nSat Jan 11 20:06:59 2020 10.250.7.77:38094 TCP connection established with [AF_INET]100.64.1.1:3350\nSat Jan 11 20:06:59 2020 10.250.7.77:38094 Connection reset, restarting [0]\nSat Jan 11 20:06:59 2020 100.64.1.1:3350 Connection reset, restarting [0]\nSat Jan 11 20:07:03 2020 TCP connection established with [AF_INET]10.250.7.77:22014\nSat Jan 11 20:07:03 2020 10.250.7.77:22014 TCP connection established with [AF_INET]100.64.1.1:62954\nSat Jan 11 20:07:03 2020 10.250.7.77:22014 Connection reset, restarting [0]\nSat Jan 11 20:07:03 2020 100.64.1.1:62954 Connection reset, restarting [0]\nSat Jan 11 20:07:09 2020 TCP connection established with [AF_INET]10.250.7.77:38108\nSat Jan 11 20:07:09 2020 10.250.7.77:38108 TCP connection established with [AF_INET]100.64.1.1:3364\nSat Jan 11 20:07:09 2020 10.250.7.77:38108 Connection reset, restarting [0]\nSat Jan 11 20:07:09 2020 100.64.1.1:3364 Connection reset, restarting [0]\nSat Jan 11 20:07:13 2020 TCP connection established with [AF_INET]10.250.7.77:22028\nSat Jan 11 20:07:13 2020 10.250.7.77:22028 TCP connection established with [AF_INET]100.64.1.1:62968\nSat Jan 11 20:07:13 2020 10.250.7.77:22028 Connection reset, restarting [0]\nSat Jan 11 20:07:13 2020 100.64.1.1:62968 Connection reset, restarting [0]\nSat Jan 11 20:07:19 2020 TCP connection established with [AF_INET]10.250.7.77:38118\nSat Jan 11 20:07:19 2020 10.250.7.77:38118 TCP connection established with 
[AF_INET]100.64.1.1:3374\nSat Jan 11 20:07:19 2020 10.250.7.77:38118 Connection reset, restarting [0]\nSat Jan 11 20:07:19 2020 100.64.1.1:3374 Connection reset, restarting [0]\nSat Jan 11 20:07:23 2020 TCP connection established with [AF_INET]100.64.1.1:62974\nSat Jan 11 20:07:23 2020 100.64.1.1:62974 Connection reset, restarting [0]\nSat Jan 11 20:07:23 2020 TCP connection established with [AF_INET]10.250.7.77:22034\nSat Jan 11 20:07:23 2020 10.250.7.77:22034 Connection reset, restarting [0]\nSat Jan 11 20:07:29 2020 TCP connection established with [AF_INET]10.250.7.77:38130\nSat Jan 11 20:07:29 2020 10.250.7.77:38130 TCP connection established with [AF_INET]100.64.1.1:3386\nSat Jan 11 20:07:29 2020 10.250.7.77:38130 Connection reset, restarting [0]\nSat Jan 11 20:07:29 2020 100.64.1.1:3386 Connection reset, restarting [0]\nSat Jan 11 20:07:33 2020 TCP connection established with [AF_INET]10.250.7.77:22044\nSat Jan 11 20:07:33 2020 10.250.7.77:22044 TCP connection established with [AF_INET]100.64.1.1:62984\nSat Jan 11 20:07:33 2020 10.250.7.77:22044 Connection reset, restarting [0]\nSat Jan 11 20:07:33 2020 100.64.1.1:62984 Connection reset, restarting [0]\nSat Jan 11 20:07:39 2020 TCP connection established with [AF_INET]10.250.7.77:38134\nSat Jan 11 20:07:39 2020 10.250.7.77:38134 TCP connection established with [AF_INET]100.64.1.1:3390\nSat Jan 11 20:07:39 2020 10.250.7.77:38134 Connection reset, restarting [0]\nSat Jan 11 20:07:39 2020 100.64.1.1:3390 Connection reset, restarting [0]\nSat Jan 11 20:07:43 2020 TCP connection established with [AF_INET]10.250.7.77:22048\nSat Jan 11 20:07:43 2020 10.250.7.77:22048 TCP connection established with [AF_INET]100.64.1.1:62988\nSat Jan 11 20:07:43 2020 10.250.7.77:22048 Connection reset, restarting [0]\nSat Jan 11 20:07:43 2020 100.64.1.1:62988 Connection reset, restarting [0]\nSat Jan 11 20:07:49 2020 TCP connection established with [AF_INET]10.250.7.77:38144\nSat Jan 11 20:07:49 2020 10.250.7.77:38144 TCP connection established with [AF_INET]100.64.1.1:3400\nSat Jan 11 20:07:49 2020 10.250.7.77:38144 Connection reset, restarting [0]\nSat Jan 11 20:07:49 2020 100.64.1.1:3400 Connection reset, restarting [0]\nSat Jan 11 20:07:53 2020 TCP connection established with [AF_INET]10.250.7.77:22058\nSat Jan 11 20:07:53 2020 10.250.7.77:22058 TCP connection established with [AF_INET]100.64.1.1:62998\nSat Jan 11 20:07:53 2020 10.250.7.77:22058 Connection reset, restarting [0]\nSat Jan 11 20:07:53 2020 100.64.1.1:62998 Connection reset, restarting [0]\nSat Jan 11 20:07:59 2020 TCP connection established with [AF_INET]10.250.7.77:38152\nSat Jan 11 20:07:59 2020 10.250.7.77:38152 TCP connection established with [AF_INET]100.64.1.1:3408\nSat Jan 11 20:07:59 2020 10.250.7.77:38152 Connection reset, restarting [0]\nSat Jan 11 20:07:59 2020 100.64.1.1:3408 Connection reset, restarting [0]\nSat Jan 11 20:08:03 2020 TCP connection established with [AF_INET]10.250.7.77:22072\nSat Jan 11 20:08:03 2020 10.250.7.77:22072 TCP connection established with [AF_INET]100.64.1.1:63012\nSat Jan 11 20:08:03 2020 10.250.7.77:22072 Connection reset, restarting [0]\nSat Jan 11 20:08:03 2020 100.64.1.1:63012 Connection reset, restarting [0]\nSat Jan 11 20:08:09 2020 TCP connection established with [AF_INET]10.250.7.77:38166\nSat Jan 11 20:08:09 2020 10.250.7.77:38166 TCP connection established with [AF_INET]100.64.1.1:3422\nSat Jan 11 20:08:09 2020 10.250.7.77:38166 Connection reset, restarting [0]\nSat Jan 11 20:08:09 2020 100.64.1.1:3422 Connection reset, restarting [0]\nSat 
Jan 11 20:08:13 2020 TCP connection established with [AF_INET]10.250.7.77:22082\nSat Jan 11 20:08:13 2020 10.250.7.77:22082 TCP connection established with [AF_INET]100.64.1.1:63022\nSat Jan 11 20:08:13 2020 10.250.7.77:22082 Connection reset, restarting [0]\nSat Jan 11 20:08:13 2020 100.64.1.1:63022 Connection reset, restarting [0]\nSat Jan 11 20:08:19 2020 TCP connection established with [AF_INET]10.250.7.77:38176\nSat Jan 11 20:08:19 2020 10.250.7.77:38176 TCP connection established with [AF_INET]100.64.1.1:3432\nSat Jan 11 20:08:19 2020 10.250.7.77:38176 Connection reset, restarting [0]\nSat Jan 11 20:08:19 2020 100.64.1.1:3432 Connection reset, restarting [0]\nSat Jan 11 20:08:23 2020 TCP connection established with [AF_INET]10.250.7.77:22092\nSat Jan 11 20:08:23 2020 10.250.7.77:22092 TCP connection established with [AF_INET]100.64.1.1:63032\nSat Jan 11 20:08:23 2020 10.250.7.77:22092 Connection reset, restarting [0]\nSat Jan 11 20:08:23 2020 100.64.1.1:63032 Connection reset, restarting [0]\nSat Jan 11 20:08:29 2020 TCP connection established with [AF_INET]10.250.7.77:38184\nSat Jan 11 20:08:29 2020 10.250.7.77:38184 TCP connection established with [AF_INET]100.64.1.1:3440\nSat Jan 11 20:08:29 2020 10.250.7.77:38184 Connection reset, restarting [0]\nSat Jan 11 20:08:29 2020 100.64.1.1:3440 Connection reset, restarting [0]\nSat Jan 11 20:08:33 2020 TCP connection established with [AF_INET]10.250.7.77:22102\nSat Jan 11 20:08:33 2020 10.250.7.77:22102 TCP connection established with [AF_INET]100.64.1.1:63042\nSat Jan 11 20:08:33 2020 10.250.7.77:22102 Connection reset, restarting [0]\nSat Jan 11 20:08:33 2020 100.64.1.1:63042 Connection reset, restarting [0]\nSat Jan 11 20:08:39 2020 TCP connection established with [AF_INET]10.250.7.77:38222\nSat Jan 11 20:08:39 2020 10.250.7.77:38222 TCP connection established with [AF_INET]100.64.1.1:3478\nSat Jan 11 20:08:39 2020 10.250.7.77:38222 Connection reset, restarting [0]\nSat Jan 11 20:08:39 2020 100.64.1.1:3478 Connection reset, restarting [0]\nSat Jan 11 20:08:43 2020 TCP connection established with [AF_INET]10.250.7.77:22106\nSat Jan 11 20:08:43 2020 10.250.7.77:22106 TCP connection established with [AF_INET]100.64.1.1:63046\nSat Jan 11 20:08:43 2020 10.250.7.77:22106 Connection reset, restarting [0]\nSat Jan 11 20:08:43 2020 100.64.1.1:63046 Connection reset, restarting [0]\nSat Jan 11 20:08:49 2020 TCP connection established with [AF_INET]10.250.7.77:38236\nSat Jan 11 20:08:49 2020 10.250.7.77:38236 TCP connection established with [AF_INET]100.64.1.1:3492\nSat Jan 11 20:08:49 2020 10.250.7.77:38236 Connection reset, restarting [0]\nSat Jan 11 20:08:49 2020 100.64.1.1:3492 Connection reset, restarting [0]\nSat Jan 11 20:08:53 2020 TCP connection established with [AF_INET]10.250.7.77:22116\nSat Jan 11 20:08:53 2020 10.250.7.77:22116 TCP connection established with [AF_INET]100.64.1.1:63056\nSat Jan 11 20:08:53 2020 10.250.7.77:22116 Connection reset, restarting [0]\nSat Jan 11 20:08:53 2020 100.64.1.1:63056 Connection reset, restarting [0]\nSat Jan 11 20:08:59 2020 TCP connection established with [AF_INET]10.250.7.77:38244\nSat Jan 11 20:08:59 2020 10.250.7.77:38244 TCP connection established with [AF_INET]100.64.1.1:3500\nSat Jan 11 20:08:59 2020 10.250.7.77:38244 Connection reset, restarting [0]\nSat Jan 11 20:08:59 2020 100.64.1.1:3500 Connection reset, restarting [0]\nSat Jan 11 20:09:03 2020 TCP connection established with [AF_INET]10.250.7.77:22132\nSat Jan 11 20:09:03 2020 10.250.7.77:22132 TCP connection established with 
[AF_INET]100.64.1.1:63072\nSat Jan 11 20:09:03 2020 10.250.7.77:22132 Connection reset, restarting [0]\nSat Jan 11 20:09:03 2020 100.64.1.1:63072 Connection reset, restarting [0]\nSat Jan 11 20:09:09 2020 TCP connection established with [AF_INET]10.250.7.77:38262\nSat Jan 11 20:09:09 2020 10.250.7.77:38262 TCP connection established with [AF_INET]100.64.1.1:3518\nSat Jan 11 20:09:09 2020 10.250.7.77:38262 Connection reset, restarting [0]\nSat Jan 11 20:09:09 2020 100.64.1.1:3518 Connection reset, restarting [0]\nSat Jan 11 20:09:13 2020 TCP connection established with [AF_INET]10.250.7.77:22140\nSat Jan 11 20:09:13 2020 10.250.7.77:22140 TCP connection established with [AF_INET]100.64.1.1:63080\nSat Jan 11 20:09:13 2020 10.250.7.77:22140 Connection reset, restarting [0]\nSat Jan 11 20:09:13 2020 100.64.1.1:63080 Connection reset, restarting [0]\nSat Jan 11 20:09:19 2020 TCP connection established with [AF_INET]10.250.7.77:38270\nSat Jan 11 20:09:19 2020 10.250.7.77:38270 TCP connection established with [AF_INET]100.64.1.1:3526\nSat Jan 11 20:09:19 2020 10.250.7.77:38270 Connection reset, restarting [0]\nSat Jan 11 20:09:19 2020 100.64.1.1:3526 Connection reset, restarting [0]\nSat Jan 11 20:09:23 2020 TCP connection established with [AF_INET]10.250.7.77:22146\nSat Jan 11 20:09:23 2020 10.250.7.77:22146 TCP connection established with [AF_INET]100.64.1.1:63086\nSat Jan 11 20:09:23 2020 10.250.7.77:22146 Connection reset, restarting [0]\nSat Jan 11 20:09:23 2020 100.64.1.1:63086 Connection reset, restarting [0]\nSat Jan 11 20:09:29 2020 TCP connection established with [AF_INET]10.250.7.77:38284\nSat Jan 11 20:09:29 2020 10.250.7.77:38284 TCP connection established with [AF_INET]100.64.1.1:3540\nSat Jan 11 20:09:29 2020 10.250.7.77:38284 Connection reset, restarting [0]\nSat Jan 11 20:09:29 2020 100.64.1.1:3540 Connection reset, restarting [0]\nSat Jan 11 20:09:33 2020 TCP connection established with [AF_INET]10.250.7.77:22156\nSat Jan 11 20:09:33 2020 10.250.7.77:22156 TCP connection established with [AF_INET]100.64.1.1:63096\nSat Jan 11 20:09:33 2020 10.250.7.77:22156 Connection reset, restarting [0]\nSat Jan 11 20:09:33 2020 100.64.1.1:63096 Connection reset, restarting [0]\nSat Jan 11 20:09:39 2020 TCP connection established with [AF_INET]10.250.7.77:38288\nSat Jan 11 20:09:39 2020 10.250.7.77:38288 TCP connection established with [AF_INET]100.64.1.1:3544\nSat Jan 11 20:09:39 2020 10.250.7.77:38288 Connection reset, restarting [0]\nSat Jan 11 20:09:39 2020 100.64.1.1:3544 Connection reset, restarting [0]\nSat Jan 11 20:09:43 2020 TCP connection established with [AF_INET]10.250.7.77:22164\nSat Jan 11 20:09:43 2020 10.250.7.77:22164 TCP connection established with [AF_INET]100.64.1.1:63104\nSat Jan 11 20:09:43 2020 10.250.7.77:22164 Connection reset, restarting [0]\nSat Jan 11 20:09:43 2020 100.64.1.1:63104 Connection reset, restarting [0]\nSat Jan 11 20:09:49 2020 TCP connection established with [AF_INET]10.250.7.77:38298\nSat Jan 11 20:09:49 2020 10.250.7.77:38298 TCP connection established with [AF_INET]100.64.1.1:3554\nSat Jan 11 20:09:49 2020 10.250.7.77:38298 Connection reset, restarting [0]\nSat Jan 11 20:09:49 2020 100.64.1.1:3554 Connection reset, restarting [0]\nSat Jan 11 20:09:53 2020 TCP connection established with [AF_INET]10.250.7.77:22174\nSat Jan 11 20:09:53 2020 10.250.7.77:22174 TCP connection established with [AF_INET]100.64.1.1:63114\nSat Jan 11 20:09:53 2020 10.250.7.77:22174 Connection reset, restarting [0]\nSat Jan 11 20:09:53 2020 100.64.1.1:63114 Connection reset, 
restarting [0]\nSat Jan 11 20:09:59 2020 TCP connection established with [AF_INET]10.250.7.77:38310\nSat Jan 11 20:09:59 2020 10.250.7.77:38310 TCP connection established with [AF_INET]100.64.1.1:3566\nSat Jan 11 20:09:59 2020 10.250.7.77:38310 Connection reset, restarting [0]\nSat Jan 11 20:09:59 2020 100.64.1.1:3566 Connection reset, restarting [0]\nSat Jan 11 20:10:03 2020 TCP connection established with [AF_INET]10.250.7.77:22190\nSat Jan 11 20:10:03 2020 10.250.7.77:22190 TCP connection established with [AF_INET]100.64.1.1:63130\nSat Jan 11 20:10:03 2020 10.250.7.77:22190 Connection reset, restarting [0]\nSat Jan 11 20:10:03 2020 100.64.1.1:63130 Connection reset, restarting [0]\nSat Jan 11 20:10:09 2020 TCP connection established with [AF_INET]10.250.7.77:38326\nSat Jan 11 20:10:09 2020 10.250.7.77:38326 TCP connection established with [AF_INET]100.64.1.1:3582\nSat Jan 11 20:10:09 2020 10.250.7.77:38326 Connection reset, restarting [0]\nSat Jan 11 20:10:09 2020 100.64.1.1:3582 Connection reset, restarting [0]\nSat Jan 11 20:10:13 2020 TCP connection established with [AF_INET]10.250.7.77:22198\nSat Jan 11 20:10:13 2020 10.250.7.77:22198 TCP connection established with [AF_INET]100.64.1.1:63138\nSat Jan 11 20:10:13 2020 10.250.7.77:22198 Connection reset, restarting [0]\nSat Jan 11 20:10:13 2020 100.64.1.1:63138 Connection reset, restarting [0]\nSat Jan 11 20:10:19 2020 TCP connection established with [AF_INET]10.250.7.77:38334\nSat Jan 11 20:10:19 2020 10.250.7.77:38334 TCP connection established with [AF_INET]100.64.1.1:3590\nSat Jan 11 20:10:19 2020 10.250.7.77:38334 Connection reset, restarting [0]\nSat Jan 11 20:10:19 2020 100.64.1.1:3590 Connection reset, restarting [0]\nSat Jan 11 20:10:23 2020 TCP connection established with [AF_INET]10.250.7.77:22204\nSat Jan 11 20:10:23 2020 10.250.7.77:22204 TCP connection established with [AF_INET]100.64.1.1:63144\nSat Jan 11 20:10:23 2020 10.250.7.77:22204 Connection reset, restarting [0]\nSat Jan 11 20:10:23 2020 100.64.1.1:63144 Connection reset, restarting [0]\nSat Jan 11 20:10:29 2020 TCP connection established with [AF_INET]10.250.7.77:38342\nSat Jan 11 20:10:29 2020 10.250.7.77:38342 TCP connection established with [AF_INET]100.64.1.1:3598\nSat Jan 11 20:10:29 2020 10.250.7.77:38342 Connection reset, restarting [0]\nSat Jan 11 20:10:29 2020 100.64.1.1:3598 Connection reset, restarting [0]\nSat Jan 11 20:10:33 2020 TCP connection established with [AF_INET]10.250.7.77:22214\nSat Jan 11 20:10:33 2020 10.250.7.77:22214 TCP connection established with [AF_INET]100.64.1.1:63154\nSat Jan 11 20:10:33 2020 10.250.7.77:22214 Connection reset, restarting [0]\nSat Jan 11 20:10:33 2020 100.64.1.1:63154 Connection reset, restarting [0]\nSat Jan 11 20:10:39 2020 TCP connection established with [AF_INET]10.250.7.77:38346\nSat Jan 11 20:10:39 2020 10.250.7.77:38346 TCP connection established with [AF_INET]100.64.1.1:3602\nSat Jan 11 20:10:39 2020 10.250.7.77:38346 Connection reset, restarting [0]\nSat Jan 11 20:10:39 2020 100.64.1.1:3602 Connection reset, restarting [0]\nSat Jan 11 20:10:43 2020 TCP connection established with [AF_INET]10.250.7.77:22218\nSat Jan 11 20:10:43 2020 10.250.7.77:22218 TCP connection established with [AF_INET]100.64.1.1:63158\nSat Jan 11 20:10:43 2020 10.250.7.77:22218 Connection reset, restarting [0]\nSat Jan 11 20:10:43 2020 100.64.1.1:63158 Connection reset, restarting [0]\nSat Jan 11 20:10:49 2020 TCP connection established with [AF_INET]10.250.7.77:38356\nSat Jan 11 20:10:49 2020 10.250.7.77:38356 TCP connection 
established with [AF_INET]100.64.1.1:3612\nSat Jan 11 20:10:49 2020 10.250.7.77:38356 Connection reset, restarting [0]\nSat Jan 11 20:10:49 2020 100.64.1.1:3612 Connection reset, restarting [0]\nSat Jan 11 20:10:53 2020 TCP connection established with [AF_INET]10.250.7.77:22234\nSat Jan 11 20:10:53 2020 10.250.7.77:22234 TCP connection established with [AF_INET]100.64.1.1:63174\nSat Jan 11 20:10:53 2020 10.250.7.77:22234 Connection reset, restarting [0]\nSat Jan 11 20:10:53 2020 100.64.1.1:63174 Connection reset, restarting [0]\nSat Jan 11 20:10:59 2020 TCP connection established with [AF_INET]10.250.7.77:38364\nSat Jan 11 20:10:59 2020 10.250.7.77:38364 TCP connection established with [AF_INET]100.64.1.1:3620\nSat Jan 11 20:10:59 2020 10.250.7.77:38364 Connection reset, restarting [0]\nSat Jan 11 20:10:59 2020 100.64.1.1:3620 Connection reset, restarting [0]\nSat Jan 11 20:11:03 2020 TCP connection established with [AF_INET]10.250.7.77:22248\nSat Jan 11 20:11:03 2020 10.250.7.77:22248 TCP connection established with [AF_INET]100.64.1.1:63188\nSat Jan 11 20:11:03 2020 10.250.7.77:22248 Connection reset, restarting [0]\nSat Jan 11 20:11:03 2020 100.64.1.1:63188 Connection reset, restarting [0]\nSat Jan 11 20:11:09 2020 TCP connection established with [AF_INET]10.250.7.77:38380\nSat Jan 11 20:11:09 2020 10.250.7.77:38380 TCP connection established with [AF_INET]100.64.1.1:3636\nSat Jan 11 20:11:09 2020 10.250.7.77:38380 Connection reset, restarting [0]\nSat Jan 11 20:11:09 2020 100.64.1.1:3636 Connection reset, restarting [0]\nSat Jan 11 20:11:13 2020 TCP connection established with [AF_INET]10.250.7.77:22256\nSat Jan 11 20:11:13 2020 10.250.7.77:22256 Connection reset, restarting [0]\nSat Jan 11 20:11:13 2020 TCP connection established with [AF_INET]100.64.1.1:63196\nSat Jan 11 20:11:13 2020 100.64.1.1:63196 Connection reset, restarting [0]\nSat Jan 11 20:11:19 2020 TCP connection established with [AF_INET]10.250.7.77:38392\nSat Jan 11 20:11:19 2020 10.250.7.77:38392 TCP connection established with [AF_INET]100.64.1.1:3648\nSat Jan 11 20:11:19 2020 10.250.7.77:38392 Connection reset, restarting [0]\nSat Jan 11 20:11:19 2020 100.64.1.1:3648 Connection reset, restarting [0]\nSat Jan 11 20:11:23 2020 TCP connection established with [AF_INET]10.250.7.77:22262\nSat Jan 11 20:11:23 2020 10.250.7.77:22262 TCP connection established with [AF_INET]100.64.1.1:63202\nSat Jan 11 20:11:23 2020 10.250.7.77:22262 Connection reset, restarting [0]\nSat Jan 11 20:11:23 2020 100.64.1.1:63202 Connection reset, restarting [0]\nSat Jan 11 20:11:29 2020 TCP connection established with [AF_INET]10.250.7.77:38400\nSat Jan 11 20:11:29 2020 10.250.7.77:38400 TCP connection established with [AF_INET]100.64.1.1:3656\nSat Jan 11 20:11:29 2020 10.250.7.77:38400 Connection reset, restarting [0]\nSat Jan 11 20:11:29 2020 100.64.1.1:3656 Connection reset, restarting [0]\nSat Jan 11 20:11:33 2020 TCP connection established with [AF_INET]10.250.7.77:22282\nSat Jan 11 20:11:33 2020 10.250.7.77:22282 TCP connection established with [AF_INET]100.64.1.1:63222\nSat Jan 11 20:11:33 2020 10.250.7.77:22282 Connection reset, restarting [0]\nSat Jan 11 20:11:33 2020 100.64.1.1:63222 Connection reset, restarting [0]\nSat Jan 11 20:11:39 2020 TCP connection established with [AF_INET]10.250.7.77:38404\nSat Jan 11 20:11:39 2020 10.250.7.77:38404 TCP connection established with [AF_INET]100.64.1.1:3660\nSat Jan 11 20:11:39 2020 10.250.7.77:38404 Connection reset, restarting [0]\nSat Jan 11 20:11:39 2020 100.64.1.1:3660 Connection reset, 
restarting [0]\nSat Jan 11 20:11:43 2020 TCP connection established with [AF_INET]10.250.7.77:22286\nSat Jan 11 20:11:43 2020 10.250.7.77:22286 TCP connection established with [AF_INET]100.64.1.1:63226\nSat Jan 11 20:11:43 2020 10.250.7.77:22286 Connection reset, restarting [0]\nSat Jan 11 20:11:43 2020 100.64.1.1:63226 Connection reset, restarting [0]\nSat Jan 11 20:11:49 2020 TCP connection established with [AF_INET]10.250.7.77:38414\nSat Jan 11 20:11:49 2020 10.250.7.77:38414 TCP connection established with [AF_INET]100.64.1.1:3670\nSat Jan 11 20:11:49 2020 10.250.7.77:38414 Connection reset, restarting [0]\nSat Jan 11 20:11:49 2020 100.64.1.1:3670 Connection reset, restarting [0]\nSat Jan 11 20:11:53 2020 TCP connection established with [AF_INET]10.250.7.77:22298\nSat Jan 11 20:11:53 2020 10.250.7.77:22298 TCP connection established with [AF_INET]100.64.1.1:63238\nSat Jan 11 20:11:53 2020 10.250.7.77:22298 Connection reset, restarting [0]\nSat Jan 11 20:11:53 2020 100.64.1.1:63238 Connection reset, restarting [0]\nSat Jan 11 20:11:59 2020 TCP connection established with [AF_INET]10.250.7.77:38424\nSat Jan 11 20:11:59 2020 10.250.7.77:38424 TCP connection established with [AF_INET]100.64.1.1:3680\nSat Jan 11 20:11:59 2020 10.250.7.77:38424 Connection reset, restarting [0]\nSat Jan 11 20:11:59 2020 100.64.1.1:3680 Connection reset, restarting [0]\nSat Jan 11 20:12:03 2020 TCP connection established with [AF_INET]10.250.7.77:22312\nSat Jan 11 20:12:03 2020 10.250.7.77:22312 TCP connection established with [AF_INET]100.64.1.1:63252\nSat Jan 11 20:12:03 2020 10.250.7.77:22312 Connection reset, restarting [0]\nSat Jan 11 20:12:03 2020 100.64.1.1:63252 Connection reset, restarting [0]\nSat Jan 11 20:12:09 2020 TCP connection established with [AF_INET]10.250.7.77:38438\nSat Jan 11 20:12:09 2020 10.250.7.77:38438 TCP connection established with [AF_INET]100.64.1.1:3694\nSat Jan 11 20:12:09 2020 10.250.7.77:38438 Connection reset, restarting [0]\nSat Jan 11 20:12:09 2020 100.64.1.1:3694 Connection reset, restarting [0]\nSat Jan 11 20:12:13 2020 TCP connection established with [AF_INET]10.250.7.77:22324\nSat Jan 11 20:12:13 2020 10.250.7.77:22324 TCP connection established with [AF_INET]100.64.1.1:63264\nSat Jan 11 20:12:13 2020 10.250.7.77:22324 Connection reset, restarting [0]\nSat Jan 11 20:12:13 2020 100.64.1.1:63264 Connection reset, restarting [0]\nSat Jan 11 20:12:19 2020 TCP connection established with [AF_INET]10.250.7.77:38446\nSat Jan 11 20:12:19 2020 10.250.7.77:38446 TCP connection established with [AF_INET]100.64.1.1:3702\nSat Jan 11 20:12:19 2020 10.250.7.77:38446 Connection reset, restarting [0]\nSat Jan 11 20:12:19 2020 100.64.1.1:3702 Connection reset, restarting [0]\nSat Jan 11 20:12:23 2020 TCP connection established with [AF_INET]10.250.7.77:22330\nSat Jan 11 20:12:23 2020 10.250.7.77:22330 TCP connection established with [AF_INET]100.64.1.1:63270\nSat Jan 11 20:12:23 2020 10.250.7.77:22330 Connection reset, restarting [0]\nSat Jan 11 20:12:23 2020 100.64.1.1:63270 Connection reset, restarting [0]\nSat Jan 11 20:12:29 2020 TCP connection established with [AF_INET]10.250.7.77:38458\nSat Jan 11 20:12:29 2020 10.250.7.77:38458 TCP connection established with [AF_INET]100.64.1.1:3714\nSat Jan 11 20:12:29 2020 10.250.7.77:38458 Connection reset, restarting [0]\nSat Jan 11 20:12:29 2020 100.64.1.1:3714 Connection reset, restarting [0]\nSat Jan 11 20:12:33 2020 TCP connection established with [AF_INET]10.250.7.77:22340\nSat Jan 11 20:12:33 2020 10.250.7.77:22340 TCP connection 
established with [AF_INET]100.64.1.1:63280\nSat Jan 11 20:12:33 2020 10.250.7.77:22340 Connection reset, restarting [0]\nSat Jan 11 20:12:33 2020 100.64.1.1:63280 Connection reset, restarting [0]\nSat Jan 11 20:12:39 2020 TCP connection established with [AF_INET]10.250.7.77:38462\nSat Jan 11 20:12:39 2020 10.250.7.77:38462 TCP connection established with [AF_INET]100.64.1.1:3718\nSat Jan 11 20:12:39 2020 10.250.7.77:38462 Connection reset, restarting [0]\nSat Jan 11 20:12:39 2020 100.64.1.1:3718 Connection reset, restarting [0]\nSat Jan 11 20:12:43 2020 TCP connection established with [AF_INET]10.250.7.77:22344\nSat Jan 11 20:12:43 2020 10.250.7.77:22344 TCP connection established with [AF_INET]100.64.1.1:63284\nSat Jan 11 20:12:43 2020 10.250.7.77:22344 Connection reset, restarting [0]\nSat Jan 11 20:12:43 2020 100.64.1.1:63284 Connection reset, restarting [0]\nSat Jan 11 20:12:49 2020 TCP connection established with [AF_INET]10.250.7.77:38472\nSat Jan 11 20:12:49 2020 10.250.7.77:38472 TCP connection established with [AF_INET]100.64.1.1:3728\nSat Jan 11 20:12:49 2020 10.250.7.77:38472 Connection reset, restarting [0]\nSat Jan 11 20:12:49 2020 100.64.1.1:3728 Connection reset, restarting [0]\nSat Jan 11 20:12:53 2020 TCP connection established with [AF_INET]10.250.7.77:22356\nSat Jan 11 20:12:53 2020 10.250.7.77:22356 TCP connection established with [AF_INET]100.64.1.1:63296\nSat Jan 11 20:12:53 2020 10.250.7.77:22356 Connection reset, restarting [0]\nSat Jan 11 20:12:53 2020 100.64.1.1:63296 Connection reset, restarting [0]\nSat Jan 11 20:12:59 2020 TCP connection established with [AF_INET]10.250.7.77:38482\nSat Jan 11 20:12:59 2020 10.250.7.77:38482 TCP connection established with [AF_INET]100.64.1.1:3738\nSat Jan 11 20:12:59 2020 10.250.7.77:38482 Connection reset, restarting [0]\nSat Jan 11 20:12:59 2020 100.64.1.1:3738 Connection reset, restarting [0]\nSat Jan 11 20:13:03 2020 TCP connection established with [AF_INET]10.250.7.77:22370\nSat Jan 11 20:13:03 2020 10.250.7.77:22370 TCP connection established with [AF_INET]100.64.1.1:63310\nSat Jan 11 20:13:03 2020 10.250.7.77:22370 Connection reset, restarting [0]\nSat Jan 11 20:13:03 2020 100.64.1.1:63310 Connection reset, restarting [0]\nSat Jan 11 20:13:09 2020 TCP connection established with [AF_INET]10.250.7.77:38496\nSat Jan 11 20:13:09 2020 10.250.7.77:38496 TCP connection established with [AF_INET]100.64.1.1:3752\nSat Jan 11 20:13:09 2020 10.250.7.77:38496 Connection reset, restarting [0]\nSat Jan 11 20:13:09 2020 100.64.1.1:3752 Connection reset, restarting [0]\nSat Jan 11 20:13:13 2020 TCP connection established with [AF_INET]10.250.7.77:22384\nSat Jan 11 20:13:13 2020 10.250.7.77:22384 TCP connection established with [AF_INET]100.64.1.1:63324\nSat Jan 11 20:13:13 2020 10.250.7.77:22384 Connection reset, restarting [0]\nSat Jan 11 20:13:13 2020 100.64.1.1:63324 Connection reset, restarting [0]\nSat Jan 11 20:13:19 2020 TCP connection established with [AF_INET]10.250.7.77:38504\nSat Jan 11 20:13:19 2020 10.250.7.77:38504 TCP connection established with [AF_INET]100.64.1.1:3760\nSat Jan 11 20:13:19 2020 10.250.7.77:38504 Connection reset, restarting [0]\nSat Jan 11 20:13:19 2020 100.64.1.1:3760 Connection reset, restarting [0]\nSat Jan 11 20:13:23 2020 TCP connection established with [AF_INET]10.250.7.77:22394\nSat Jan 11 20:13:23 2020 10.250.7.77:22394 TCP connection established with [AF_INET]100.64.1.1:63334\nSat Jan 11 20:13:23 2020 10.250.7.77:22394 Connection reset, restarting [0]\nSat Jan 11 20:13:23 2020 100.64.1.1:63334 
Connection reset, restarting [0]\nSat Jan 11 20:13:29 2020 TCP connection established with [AF_INET]10.250.7.77:38512\nSat Jan 11 20:13:29 2020 10.250.7.77:38512 TCP connection established with [AF_INET]100.64.1.1:3768\nSat Jan 11 20:13:29 2020 10.250.7.77:38512 Connection reset, restarting [0]\nSat Jan 11 20:13:29 2020 100.64.1.1:3768 Connection reset, restarting [0]\nSat Jan 11 20:13:33 2020 TCP connection established with [AF_INET]10.250.7.77:22404\nSat Jan 11 20:13:33 2020 10.250.7.77:22404 TCP connection established with [AF_INET]100.64.1.1:63344\nSat Jan 11 20:13:33 2020 10.250.7.77:22404 Connection reset, restarting [0]\nSat Jan 11 20:13:33 2020 100.64.1.1:63344 Connection reset, restarting [0]\nSat Jan 11 20:13:39 2020 TCP connection established with [AF_INET]10.250.7.77:38516\nSat Jan 11 20:13:39 2020 10.250.7.77:38516 TCP connection established with [AF_INET]100.64.1.1:3772\nSat Jan 11 20:13:39 2020 10.250.7.77:38516 Connection reset, restarting [0]\nSat Jan 11 20:13:39 2020 100.64.1.1:3772 Connection reset, restarting [0]\nSat Jan 11 20:13:43 2020 TCP connection established with [AF_INET]10.250.7.77:22410\nSat Jan 11 20:13:43 2020 10.250.7.77:22410 TCP connection established with [AF_INET]100.64.1.1:63350\nSat Jan 11 20:13:43 2020 10.250.7.77:22410 Connection reset, restarting [0]\nSat Jan 11 20:13:43 2020 100.64.1.1:63350 Connection reset, restarting [0]\nSat Jan 11 20:13:49 2020 TCP connection established with [AF_INET]10.250.7.77:38532\nSat Jan 11 20:13:49 2020 10.250.7.77:38532 Connection reset, restarting [0]\nSat Jan 11 20:13:49 2020 TCP connection established with [AF_INET]100.64.1.1:3788\nSat Jan 11 20:13:49 2020 100.64.1.1:3788 Connection reset, restarting [0]\nSat Jan 11 20:13:53 2020 TCP connection established with [AF_INET]10.250.7.77:22420\nSat Jan 11 20:13:53 2020 10.250.7.77:22420 TCP connection established with [AF_INET]100.64.1.1:63360\nSat Jan 11 20:13:53 2020 10.250.7.77:22420 Connection reset, restarting [0]\nSat Jan 11 20:13:53 2020 100.64.1.1:63360 Connection reset, restarting [0]\nSat Jan 11 20:13:59 2020 TCP connection established with [AF_INET]10.250.7.77:38540\nSat Jan 11 20:13:59 2020 10.250.7.77:38540 TCP connection established with [AF_INET]100.64.1.1:3796\nSat Jan 11 20:13:59 2020 10.250.7.77:38540 Connection reset, restarting [0]\nSat Jan 11 20:13:59 2020 100.64.1.1:3796 Connection reset, restarting [0]\nSat Jan 11 20:14:03 2020 TCP connection established with [AF_INET]10.250.7.77:22440\nSat Jan 11 20:14:03 2020 10.250.7.77:22440 TCP connection established with [AF_INET]100.64.1.1:63380\nSat Jan 11 20:14:03 2020 10.250.7.77:22440 Connection reset, restarting [0]\nSat Jan 11 20:14:03 2020 100.64.1.1:63380 Connection reset, restarting [0]\nSat Jan 11 20:14:09 2020 TCP connection established with [AF_INET]10.250.7.77:38560\nSat Jan 11 20:14:09 2020 10.250.7.77:38560 TCP connection established with [AF_INET]100.64.1.1:3816\nSat Jan 11 20:14:09 2020 10.250.7.77:38560 Connection reset, restarting [0]\nSat Jan 11 20:14:09 2020 100.64.1.1:3816 Connection reset, restarting [0]\nSat Jan 11 20:14:13 2020 TCP connection established with [AF_INET]10.250.7.77:22448\nSat Jan 11 20:14:13 2020 10.250.7.77:22448 TCP connection established with [AF_INET]100.64.1.1:63388\nSat Jan 11 20:14:13 2020 10.250.7.77:22448 Connection reset, restarting [0]\nSat Jan 11 20:14:13 2020 100.64.1.1:63388 Connection reset, restarting [0]\nSat Jan 11 20:14:19 2020 TCP connection established with [AF_INET]10.250.7.77:38568\nSat Jan 11 20:14:19 2020 10.250.7.77:38568 TCP connection 
established with [AF_INET]100.64.1.1:3824\nSat Jan 11 20:14:19 2020 10.250.7.77:38568 Connection reset, restarting [0]\nSat Jan 11 20:14:19 2020 100.64.1.1:3824 Connection reset, restarting [0]\nSat Jan 11 20:14:23 2020 TCP connection established with [AF_INET]10.250.7.77:22454\nSat Jan 11 20:14:23 2020 10.250.7.77:22454 TCP connection established with [AF_INET]100.64.1.1:63394\nSat Jan 11 20:14:23 2020 10.250.7.77:22454 Connection reset, restarting [0]\nSat Jan 11 20:14:23 2020 100.64.1.1:63394 Connection reset, restarting [0]\nSat Jan 11 20:14:29 2020 TCP connection established with [AF_INET]10.250.7.77:38576\nSat Jan 11 20:14:29 2020 10.250.7.77:38576 TCP connection established with [AF_INET]100.64.1.1:3832\nSat Jan 11 20:14:29 2020 10.250.7.77:38576 Connection reset, restarting [0]\nSat Jan 11 20:14:29 2020 100.64.1.1:3832 Connection reset, restarting [0]\nSat Jan 11 20:14:33 2020 TCP connection established with [AF_INET]10.250.7.77:22464\nSat Jan 11 20:14:33 2020 10.250.7.77:22464 TCP connection established with [AF_INET]100.64.1.1:63404\nSat Jan 11 20:14:33 2020 10.250.7.77:22464 Connection reset, restarting [0]\nSat Jan 11 20:14:33 2020 100.64.1.1:63404 Connection reset, restarting [0]\nSat Jan 11 20:14:39 2020 TCP connection established with [AF_INET]100.64.1.1:3836\nSat Jan 11 20:14:39 2020 100.64.1.1:3836 TCP connection established with [AF_INET]10.250.7.77:38580\nSat Jan 11 20:14:39 2020 100.64.1.1:3836 Connection reset, restarting [0]\nSat Jan 11 20:14:39 2020 10.250.7.77:38580 Connection reset, restarting [0]\nSat Jan 11 20:14:43 2020 TCP connection established with [AF_INET]10.250.7.77:22474\nSat Jan 11 20:14:43 2020 10.250.7.77:22474 TCP connection established with [AF_INET]100.64.1.1:63414\nSat Jan 11 20:14:43 2020 10.250.7.77:22474 Connection reset, restarting [0]\nSat Jan 11 20:14:43 2020 100.64.1.1:63414 Connection reset, restarting [0]\nSat Jan 11 20:14:49 2020 TCP connection established with [AF_INET]10.250.7.77:38598\nSat Jan 11 20:14:49 2020 10.250.7.77:38598 TCP connection established with [AF_INET]100.64.1.1:3854\nSat Jan 11 20:14:49 2020 10.250.7.77:38598 Connection reset, restarting [0]\nSat Jan 11 20:14:49 2020 100.64.1.1:3854 Connection reset, restarting [0]\nSat Jan 11 20:14:53 2020 TCP connection established with [AF_INET]10.250.7.77:22484\nSat Jan 11 20:14:53 2020 10.250.7.77:22484 TCP connection established with [AF_INET]100.64.1.1:63424\nSat Jan 11 20:14:53 2020 10.250.7.77:22484 Connection reset, restarting [0]\nSat Jan 11 20:14:53 2020 100.64.1.1:63424 Connection reset, restarting [0]\nSat Jan 11 20:14:59 2020 TCP connection established with [AF_INET]10.250.7.77:38616\nSat Jan 11 20:14:59 2020 10.250.7.77:38616 TCP connection established with [AF_INET]100.64.1.1:3872\nSat Jan 11 20:14:59 2020 10.250.7.77:38616 Connection reset, restarting [0]\nSat Jan 11 20:14:59 2020 100.64.1.1:3872 Connection reset, restarting [0]\nSat Jan 11 20:15:03 2020 TCP connection established with [AF_INET]10.250.7.77:22498\nSat Jan 11 20:15:03 2020 10.250.7.77:22498 TCP connection established with [AF_INET]100.64.1.1:63438\nSat Jan 11 20:15:03 2020 10.250.7.77:22498 Connection reset, restarting [0]\nSat Jan 11 20:15:03 2020 100.64.1.1:63438 Connection reset, restarting [0]\nSat Jan 11 20:15:09 2020 TCP connection established with [AF_INET]10.250.7.77:38630\nSat Jan 11 20:15:09 2020 10.250.7.77:38630 TCP connection established with [AF_INET]100.64.1.1:3886\nSat Jan 11 20:15:09 2020 10.250.7.77:38630 Connection reset, restarting [0]\nSat Jan 11 20:15:09 2020 100.64.1.1:3886 
Connection reset, restarting [0]\nSat Jan 11 20:15:13 2020 TCP connection established with [AF_INET]10.250.7.77:22506\nSat Jan 11 20:15:13 2020 10.250.7.77:22506 TCP connection established with [AF_INET]100.64.1.1:63446\nSat Jan 11 20:15:13 2020 10.250.7.77:22506 Connection reset, restarting [0]\nSat Jan 11 20:15:13 2020 100.64.1.1:63446 Connection reset, restarting [0]\nSat Jan 11 20:15:19 2020 TCP connection established with [AF_INET]10.250.7.77:38638\nSat Jan 11 20:15:19 2020 10.250.7.77:38638 TCP connection established with [AF_INET]100.64.1.1:3894\nSat Jan 11 20:15:19 2020 10.250.7.77:38638 Connection reset, restarting [0]\nSat Jan 11 20:15:19 2020 100.64.1.1:3894 Connection reset, restarting [0]\nSat Jan 11 20:15:23 2020 TCP connection established with [AF_INET]10.250.7.77:22512\nSat Jan 11 20:15:23 2020 10.250.7.77:22512 TCP connection established with [AF_INET]100.64.1.1:63452\nSat Jan 11 20:15:23 2020 10.250.7.77:22512 Connection reset, restarting [0]\nSat Jan 11 20:15:23 2020 100.64.1.1:63452 Connection reset, restarting [0]\nSat Jan 11 20:15:29 2020 TCP connection established with [AF_INET]10.250.7.77:38646\nSat Jan 11 20:15:29 2020 10.250.7.77:38646 TCP connection established with [AF_INET]100.64.1.1:3902\nSat Jan 11 20:15:29 2020 10.250.7.77:38646 Connection reset, restarting [0]\nSat Jan 11 20:15:29 2020 100.64.1.1:3902 Connection reset, restarting [0]\nSat Jan 11 20:15:33 2020 TCP connection established with [AF_INET]10.250.7.77:22524\nSat Jan 11 20:15:33 2020 10.250.7.77:22524 TCP connection established with [AF_INET]100.64.1.1:63464\nSat Jan 11 20:15:33 2020 10.250.7.77:22524 Connection reset, restarting [0]\nSat Jan 11 20:15:33 2020 100.64.1.1:63464 Connection reset, restarting [0]\nSat Jan 11 20:15:39 2020 TCP connection established with [AF_INET]10.250.7.77:38650\nSat Jan 11 20:15:39 2020 10.250.7.77:38650 TCP connection established with [AF_INET]100.64.1.1:3906\nSat Jan 11 20:15:39 2020 10.250.7.77:38650 Connection reset, restarting [0]\nSat Jan 11 20:15:39 2020 100.64.1.1:3906 Connection reset, restarting [0]\nSat Jan 11 20:15:43 2020 TCP connection established with [AF_INET]10.250.7.77:22562\nSat Jan 11 20:15:43 2020 10.250.7.77:22562 TCP connection established with [AF_INET]100.64.1.1:63502\nSat Jan 11 20:15:43 2020 10.250.7.77:22562 Connection reset, restarting [0]\nSat Jan 11 20:15:43 2020 100.64.1.1:63502 Connection reset, restarting [0]\nSat Jan 11 20:15:49 2020 TCP connection established with [AF_INET]10.250.7.77:38662\nSat Jan 11 20:15:49 2020 10.250.7.77:38662 TCP connection established with [AF_INET]100.64.1.1:3918\nSat Jan 11 20:15:49 2020 10.250.7.77:38662 Connection reset, restarting [0]\nSat Jan 11 20:15:49 2020 100.64.1.1:3918 Connection reset, restarting [0]\nSat Jan 11 20:15:53 2020 TCP connection established with [AF_INET]10.250.7.77:22576\nSat Jan 11 20:15:53 2020 10.250.7.77:22576 TCP connection established with [AF_INET]100.64.1.1:63516\nSat Jan 11 20:15:53 2020 10.250.7.77:22576 Connection reset, restarting [0]\nSat Jan 11 20:15:53 2020 100.64.1.1:63516 Connection reset, restarting [0]\nSat Jan 11 20:15:59 2020 TCP connection established with [AF_INET]10.250.7.77:38670\nSat Jan 11 20:15:59 2020 10.250.7.77:38670 TCP connection established with [AF_INET]100.64.1.1:3926\nSat Jan 11 20:15:59 2020 10.250.7.77:38670 Connection reset, restarting [0]\nSat Jan 11 20:15:59 2020 100.64.1.1:3926 Connection reset, restarting [0]\nSat Jan 11 20:16:03 2020 TCP connection established with [AF_INET]10.250.7.77:22590\nSat Jan 11 20:16:03 2020 10.250.7.77:22590 
TCP connection established with [AF_INET]100.64.1.1:63530\nSat Jan 11 20:16:03 2020 10.250.7.77:22590 Connection reset, restarting [0]\nSat Jan 11 20:16:03 2020 100.64.1.1:63530 Connection reset, restarting [0]\nSat Jan 11 20:16:09 2020 TCP connection established with [AF_INET]100.64.1.1:3946\nSat Jan 11 20:16:09 2020 100.64.1.1:3946 TCP connection established with [AF_INET]10.250.7.77:38690\nSat Jan 11 20:16:09 2020 100.64.1.1:3946 Connection reset, restarting [0]\nSat Jan 11 20:16:09 2020 10.250.7.77:38690 Connection reset, restarting [0]\nSat Jan 11 20:16:13 2020 TCP connection established with [AF_INET]10.250.7.77:22600\nSat Jan 11 20:16:13 2020 10.250.7.77:22600 Connection reset, restarting [0]\nSat Jan 11 20:16:13 2020 TCP connection established with [AF_INET]100.64.1.1:63540\nSat Jan 11 20:16:13 2020 100.64.1.1:63540 Connection reset, restarting [0]\nSat Jan 11 20:16:19 2020 TCP connection established with [AF_INET]10.250.7.77:38702\nSat Jan 11 20:16:19 2020 10.250.7.77:38702 TCP connection established with [AF_INET]100.64.1.1:3958\nSat Jan 11 20:16:19 2020 10.250.7.77:38702 Connection reset, restarting [0]\nSat Jan 11 20:16:19 2020 100.64.1.1:3958 Connection reset, restarting [0]\nSat Jan 11 20:16:23 2020 TCP connection established with [AF_INET]10.250.7.77:22606\nSat Jan 11 20:16:23 2020 10.250.7.77:22606 TCP connection established with [AF_INET]100.64.1.1:63546\nSat Jan 11 20:16:23 2020 10.250.7.77:22606 Connection reset, restarting [0]\nSat Jan 11 20:16:23 2020 100.64.1.1:63546 Connection reset, restarting [0]\nSat Jan 11 20:16:29 2020 TCP connection established with [AF_INET]10.250.7.77:38710\nSat Jan 11 20:16:29 2020 10.250.7.77:38710 TCP connection established with [AF_INET]100.64.1.1:3966\nSat Jan 11 20:16:29 2020 10.250.7.77:38710 Connection reset, restarting [0]\nSat Jan 11 20:16:29 2020 100.64.1.1:3966 Connection reset, restarting [0]\nSat Jan 11 20:16:33 2020 TCP connection established with [AF_INET]10.250.7.77:22618\nSat Jan 11 20:16:33 2020 10.250.7.77:22618 Connection reset, restarting [0]\nSat Jan 11 20:16:33 2020 TCP connection established with [AF_INET]100.64.1.1:63558\nSat Jan 11 20:16:33 2020 100.64.1.1:63558 Connection reset, restarting [0]\nSat Jan 11 20:16:39 2020 TCP connection established with [AF_INET]10.250.7.77:38716\nSat Jan 11 20:16:39 2020 10.250.7.77:38716 TCP connection established with [AF_INET]100.64.1.1:3972\nSat Jan 11 20:16:39 2020 10.250.7.77:38716 Connection reset, restarting [0]\nSat Jan 11 20:16:39 2020 100.64.1.1:3972 Connection reset, restarting [0]\nSat Jan 11 20:16:43 2020 TCP connection established with [AF_INET]10.250.7.77:22622\nSat Jan 11 20:16:43 2020 10.250.7.77:22622 TCP connection established with [AF_INET]100.64.1.1:63562\nSat Jan 11 20:16:43 2020 10.250.7.77:22622 Connection reset, restarting [0]\nSat Jan 11 20:16:43 2020 100.64.1.1:63562 Connection reset, restarting [0]\nSat Jan 11 20:16:49 2020 TCP connection established with [AF_INET]10.250.7.77:38726\nSat Jan 11 20:16:49 2020 10.250.7.77:38726 TCP connection established with [AF_INET]100.64.1.1:3982\nSat Jan 11 20:16:49 2020 10.250.7.77:38726 Connection reset, restarting [0]\nSat Jan 11 20:16:49 2020 100.64.1.1:3982 Connection reset, restarting [0]\nSat Jan 11 20:16:53 2020 TCP connection established with [AF_INET]10.250.7.77:22632\nSat Jan 11 20:16:53 2020 10.250.7.77:22632 TCP connection established with [AF_INET]100.64.1.1:63572\nSat Jan 11 20:16:53 2020 10.250.7.77:22632 Connection reset, restarting [0]\nSat Jan 11 20:16:53 2020 100.64.1.1:63572 Connection reset, 
restarting [0]\nSat Jan 11 20:16:59 2020 TCP connection established with [AF_INET]10.250.7.77:38734\nSat Jan 11 20:16:59 2020 10.250.7.77:38734 TCP connection established with [AF_INET]100.64.1.1:3990\nSat Jan 11 20:16:59 2020 10.250.7.77:38734 Connection reset, restarting [0]\nSat Jan 11 20:16:59 2020 100.64.1.1:3990 Connection reset, restarting [0]\nSat Jan 11 20:17:03 2020 TCP connection established with [AF_INET]10.250.7.77:22646\nSat Jan 11 20:17:03 2020 10.250.7.77:22646 TCP connection established with [AF_INET]100.64.1.1:63586\nSat Jan 11 20:17:03 2020 10.250.7.77:22646 Connection reset, restarting [0]\nSat Jan 11 20:17:03 2020 100.64.1.1:63586 Connection reset, restarting [0]\nSat Jan 11 20:17:09 2020 TCP connection established with [AF_INET]10.250.7.77:38748\nSat Jan 11 20:17:09 2020 10.250.7.77:38748 TCP connection established with [AF_INET]100.64.1.1:4004\nSat Jan 11 20:17:09 2020 10.250.7.77:38748 Connection reset, restarting [0]\nSat Jan 11 20:17:09 2020 100.64.1.1:4004 Connection reset, restarting [0]\nSat Jan 11 20:17:13 2020 TCP connection established with [AF_INET]10.250.7.77:22658\nSat Jan 11 20:17:13 2020 10.250.7.77:22658 TCP connection established with [AF_INET]100.64.1.1:63598\nSat Jan 11 20:17:13 2020 10.250.7.77:22658 Connection reset, restarting [0]\nSat Jan 11 20:17:13 2020 100.64.1.1:63598 Connection reset, restarting [0]\nSat Jan 11 20:17:19 2020 TCP connection established with [AF_INET]10.250.7.77:38756\nSat Jan 11 20:17:19 2020 10.250.7.77:38756 TCP connection established with [AF_INET]100.64.1.1:4012\nSat Jan 11 20:17:19 2020 10.250.7.77:38756 Connection reset, restarting [0]\nSat Jan 11 20:17:19 2020 100.64.1.1:4012 Connection reset, restarting [0]\nSat Jan 11 20:17:23 2020 TCP connection established with [AF_INET]10.250.7.77:22664\nSat Jan 11 20:17:23 2020 10.250.7.77:22664 TCP connection established with [AF_INET]100.64.1.1:63604\nSat Jan 11 20:17:23 2020 10.250.7.77:22664 Connection reset, restarting [0]\nSat Jan 11 20:17:23 2020 100.64.1.1:63604 Connection reset, restarting [0]\nSat Jan 11 20:17:29 2020 TCP connection established with [AF_INET]10.250.7.77:38768\nSat Jan 11 20:17:29 2020 10.250.7.77:38768 TCP connection established with [AF_INET]100.64.1.1:4024\nSat Jan 11 20:17:29 2020 10.250.7.77:38768 Connection reset, restarting [0]\nSat Jan 11 20:17:29 2020 100.64.1.1:4024 Connection reset, restarting [0]\nSat Jan 11 20:17:33 2020 TCP connection established with [AF_INET]10.250.7.77:22676\nSat Jan 11 20:17:33 2020 10.250.7.77:22676 TCP connection established with [AF_INET]100.64.1.1:63616\nSat Jan 11 20:17:33 2020 10.250.7.77:22676 Connection reset, restarting [0]\nSat Jan 11 20:17:33 2020 100.64.1.1:63616 Connection reset, restarting [0]\nSat Jan 11 20:17:39 2020 TCP connection established with [AF_INET]10.250.7.77:38774\nSat Jan 11 20:17:39 2020 10.250.7.77:38774 TCP connection established with [AF_INET]100.64.1.1:4030\nSat Jan 11 20:17:39 2020 10.250.7.77:38774 Connection reset, restarting [0]\nSat Jan 11 20:17:39 2020 100.64.1.1:4030 Connection reset, restarting [0]\nSat Jan 11 20:17:43 2020 TCP connection established with [AF_INET]10.250.7.77:22680\nSat Jan 11 20:17:43 2020 10.250.7.77:22680 TCP connection established with [AF_INET]100.64.1.1:63620\nSat Jan 11 20:17:43 2020 10.250.7.77:22680 Connection reset, restarting [0]\nSat Jan 11 20:17:43 2020 100.64.1.1:63620 Connection reset, restarting [0]\nSat Jan 11 20:17:49 2020 TCP connection established with [AF_INET]10.250.7.77:38784\nSat Jan 11 20:17:49 2020 10.250.7.77:38784 TCP connection 
established with [AF_INET]100.64.1.1:4040\nSat Jan 11 20:17:49 2020 10.250.7.77:38784 Connection reset, restarting [0]\nSat Jan 11 20:17:49 2020 100.64.1.1:4040 Connection reset, restarting [0]\nSat Jan 11 20:17:53 2020 TCP connection established with [AF_INET]10.250.7.77:22690\nSat Jan 11 20:17:53 2020 10.250.7.77:22690 TCP connection established with [AF_INET]100.64.1.1:63630\nSat Jan 11 20:17:53 2020 10.250.7.77:22690 Connection reset, restarting [0]\nSat Jan 11 20:17:53 2020 100.64.1.1:63630 Connection reset, restarting [0]\nSat Jan 11 20:17:59 2020 TCP connection established with [AF_INET]10.250.7.77:38792\nSat Jan 11 20:17:59 2020 10.250.7.77:38792 TCP connection established with [AF_INET]100.64.1.1:4048\nSat Jan 11 20:17:59 2020 10.250.7.77:38792 Connection reset, restarting [0]\nSat Jan 11 20:17:59 2020 100.64.1.1:4048 Connection reset, restarting [0]\nSat Jan 11 20:18:03 2020 TCP connection established with [AF_INET]10.250.7.77:22704\nSat Jan 11 20:18:03 2020 10.250.7.77:22704 TCP connection established with [AF_INET]100.64.1.1:63644\nSat Jan 11 20:18:03 2020 10.250.7.77:22704 Connection reset, restarting [0]\nSat Jan 11 20:18:03 2020 100.64.1.1:63644 Connection reset, restarting [0]\nSat Jan 11 20:18:09 2020 TCP connection established with [AF_INET]10.250.7.77:38806\nSat Jan 11 20:18:09 2020 10.250.7.77:38806 Connection reset, restarting [0]\nSat Jan 11 20:18:09 2020 TCP connection established with [AF_INET]100.64.1.1:4062\nSat Jan 11 20:18:09 2020 100.64.1.1:4062 Connection reset, restarting [0]\nSat Jan 11 20:18:13 2020 TCP connection established with [AF_INET]10.250.7.77:22712\nSat Jan 11 20:18:13 2020 10.250.7.77:22712 TCP connection established with [AF_INET]100.64.1.1:63652\nSat Jan 11 20:18:13 2020 10.250.7.77:22712 Connection reset, restarting [0]\nSat Jan 11 20:18:13 2020 100.64.1.1:63652 Connection reset, restarting [0]\nSat Jan 11 20:18:19 2020 TCP connection established with [AF_INET]10.250.7.77:38814\nSat Jan 11 20:18:19 2020 10.250.7.77:38814 TCP connection established with [AF_INET]100.64.1.1:4070\nSat Jan 11 20:18:19 2020 10.250.7.77:38814 Connection reset, restarting [0]\nSat Jan 11 20:18:19 2020 100.64.1.1:4070 Connection reset, restarting [0]\nSat Jan 11 20:18:23 2020 TCP connection established with [AF_INET]10.250.7.77:22724\nSat Jan 11 20:18:23 2020 10.250.7.77:22724 TCP connection established with [AF_INET]100.64.1.1:63664\nSat Jan 11 20:18:23 2020 10.250.7.77:22724 Connection reset, restarting [0]\nSat Jan 11 20:18:23 2020 100.64.1.1:63664 Connection reset, restarting [0]\nSat Jan 11 20:18:29 2020 TCP connection established with [AF_INET]10.250.7.77:38824\nSat Jan 11 20:18:29 2020 10.250.7.77:38824 TCP connection established with [AF_INET]100.64.1.1:4080\nSat Jan 11 20:18:29 2020 10.250.7.77:38824 Connection reset, restarting [0]\nSat Jan 11 20:18:29 2020 100.64.1.1:4080 Connection reset, restarting [0]\nSat Jan 11 20:18:33 2020 TCP connection established with [AF_INET]10.250.7.77:22734\nSat Jan 11 20:18:33 2020 10.250.7.77:22734 TCP connection established with [AF_INET]100.64.1.1:63674\nSat Jan 11 20:18:33 2020 10.250.7.77:22734 Connection reset, restarting [0]\nSat Jan 11 20:18:33 2020 100.64.1.1:63674 Connection reset, restarting [0]\nSat Jan 11 20:18:39 2020 TCP connection established with [AF_INET]10.250.7.77:38862\nSat Jan 11 20:18:39 2020 10.250.7.77:38862 TCP connection established with [AF_INET]100.64.1.1:4118\nSat Jan 11 20:18:39 2020 10.250.7.77:38862 Connection reset, restarting [0]\nSat Jan 11 20:18:39 2020 100.64.1.1:4118 Connection reset, 
restarting [0]\nSat Jan 11 20:18:43 2020 TCP connection established with [AF_INET]10.250.7.77:22738\nSat Jan 11 20:18:43 2020 10.250.7.77:22738 TCP connection established with [AF_INET]100.64.1.1:63678\nSat Jan 11 20:18:43 2020 10.250.7.77:22738 Connection reset, restarting [0]\nSat Jan 11 20:18:43 2020 100.64.1.1:63678 Connection reset, restarting [0]\nSat Jan 11 20:18:49 2020 TCP connection established with [AF_INET]10.250.7.77:38884\nSat Jan 11 20:18:49 2020 10.250.7.77:38884 TCP connection established with [AF_INET]100.64.1.1:4140\nSat Jan 11 20:18:49 2020 10.250.7.77:38884 Connection reset, restarting [0]\nSat Jan 11 20:18:49 2020 100.64.1.1:4140 Connection reset, restarting [0]\nSat Jan 11 20:18:53 2020 TCP connection established with [AF_INET]10.250.7.77:22758\nSat Jan 11 20:18:53 2020 10.250.7.77:22758 TCP connection established with [AF_INET]100.64.1.1:63698\nSat Jan 11 20:18:53 2020 10.250.7.77:22758 Connection reset, restarting [0]\nSat Jan 11 20:18:53 2020 100.64.1.1:63698 Connection reset, restarting [0]\nSat Jan 11 20:18:59 2020 TCP connection established with [AF_INET]10.250.7.77:38892\nSat Jan 11 20:18:59 2020 10.250.7.77:38892 TCP connection established with [AF_INET]100.64.1.1:4148\nSat Jan 11 20:18:59 2020 10.250.7.77:38892 Connection reset, restarting [0]\nSat Jan 11 20:18:59 2020 100.64.1.1:4148 Connection reset, restarting [0]\nSat Jan 11 20:19:03 2020 TCP connection established with [AF_INET]10.250.7.77:22772\nSat Jan 11 20:19:03 2020 10.250.7.77:22772 TCP connection established with [AF_INET]100.64.1.1:63712\nSat Jan 11 20:19:03 2020 10.250.7.77:22772 Connection reset, restarting [0]\nSat Jan 11 20:19:03 2020 100.64.1.1:63712 Connection reset, restarting [0]\nSat Jan 11 20:19:09 2020 TCP connection established with [AF_INET]10.250.7.77:38908\nSat Jan 11 20:19:09 2020 10.250.7.77:38908 TCP connection established with [AF_INET]100.64.1.1:4164\nSat Jan 11 20:19:09 2020 10.250.7.77:38908 Connection reset, restarting [0]\nSat Jan 11 20:19:09 2020 100.64.1.1:4164 Connection reset, restarting [0]\nSat Jan 11 20:19:13 2020 TCP connection established with [AF_INET]10.250.7.77:22780\nSat Jan 11 20:19:13 2020 10.250.7.77:22780 TCP connection established with [AF_INET]100.64.1.1:63720\nSat Jan 11 20:19:13 2020 10.250.7.77:22780 Connection reset, restarting [0]\nSat Jan 11 20:19:13 2020 100.64.1.1:63720 Connection reset, restarting [0]\nSat Jan 11 20:19:19 2020 TCP connection established with [AF_INET]10.250.7.77:38916\nSat Jan 11 20:19:19 2020 10.250.7.77:38916 TCP connection established with [AF_INET]100.64.1.1:4172\nSat Jan 11 20:19:19 2020 10.250.7.77:38916 Connection reset, restarting [0]\nSat Jan 11 20:19:19 2020 100.64.1.1:4172 Connection reset, restarting [0]\nSat Jan 11 20:19:23 2020 TCP connection established with [AF_INET]10.250.7.77:22788\nSat Jan 11 20:19:23 2020 10.250.7.77:22788 TCP connection established with [AF_INET]100.64.1.1:63728\nSat Jan 11 20:19:23 2020 10.250.7.77:22788 Connection reset, restarting [0]\nSat Jan 11 20:19:23 2020 100.64.1.1:63728 Connection reset, restarting [0]\nSat Jan 11 20:19:29 2020 TCP connection established with [AF_INET]10.250.7.77:38926\nSat Jan 11 20:19:29 2020 10.250.7.77:38926 TCP connection established with [AF_INET]100.64.1.1:4182\nSat Jan 11 20:19:29 2020 10.250.7.77:38926 Connection reset, restarting [0]\nSat Jan 11 20:19:29 2020 100.64.1.1:4182 Connection reset, restarting [0]\nSat Jan 11 20:19:33 2020 TCP connection established with [AF_INET]10.250.7.77:22804\nSat Jan 11 20:19:33 2020 10.250.7.77:22804 TCP connection 
established with [AF_INET]100.64.1.1:63744\nSat Jan 11 20:19:33 2020 10.250.7.77:22804 Connection reset, restarting [0]\nSat Jan 11 20:19:33 2020 100.64.1.1:63744 Connection reset, restarting [0]\nSat Jan 11 20:19:39 2020 TCP connection established with [AF_INET]10.250.7.77:38930\nSat Jan 11 20:19:39 2020 10.250.7.77:38930 TCP connection established with [AF_INET]100.64.1.1:4186\nSat Jan 11 20:19:39 2020 10.250.7.77:38930 Connection reset, restarting [0]\nSat Jan 11 20:19:39 2020 100.64.1.1:4186 Connection reset, restarting [0]\nSat Jan 11 20:19:43 2020 TCP connection established with [AF_INET]10.250.7.77:22818\nSat Jan 11 20:19:43 2020 10.250.7.77:22818 TCP connection established with [AF_INET]100.64.1.1:63758\nSat Jan 11 20:19:43 2020 10.250.7.77:22818 Connection reset, restarting [0]\nSat Jan 11 20:19:43 2020 100.64.1.1:63758 Connection reset, restarting [0]\nSat Jan 11 20:19:49 2020 TCP connection established with [AF_INET]10.250.7.77:38940\nSat Jan 11 20:19:49 2020 10.250.7.77:38940 TCP connection established with [AF_INET]100.64.1.1:4196\nSat Jan 11 20:19:49 2020 10.250.7.77:38940 Connection reset, restarting [0]\nSat Jan 11 20:19:49 2020 100.64.1.1:4196 Connection reset, restarting [0]\nSat Jan 11 20:19:53 2020 TCP connection established with [AF_INET]10.250.7.77:22828\nSat Jan 11 20:19:53 2020 10.250.7.77:22828 TCP connection established with [AF_INET]100.64.1.1:63768\nSat Jan 11 20:19:53 2020 10.250.7.77:22828 Connection reset, restarting [0]\nSat Jan 11 20:19:53 2020 100.64.1.1:63768 Connection reset, restarting [0]\nSat Jan 11 20:19:59 2020 TCP connection established with [AF_INET]10.250.7.77:38952\nSat Jan 11 20:19:59 2020 10.250.7.77:38952 TCP connection established with [AF_INET]100.64.1.1:4208\nSat Jan 11 20:19:59 2020 10.250.7.77:38952 Connection reset, restarting [0]\nSat Jan 11 20:19:59 2020 100.64.1.1:4208 Connection reset, restarting [0]\nSat Jan 11 20:20:03 2020 TCP connection established with [AF_INET]10.250.7.77:22842\nSat Jan 11 20:20:03 2020 10.250.7.77:22842 TCP connection established with [AF_INET]100.64.1.1:63782\nSat Jan 11 20:20:03 2020 10.250.7.77:22842 Connection reset, restarting [0]\nSat Jan 11 20:20:03 2020 100.64.1.1:63782 Connection reset, restarting [0]\nSat Jan 11 20:20:09 2020 TCP connection established with [AF_INET]10.250.7.77:38966\nSat Jan 11 20:20:09 2020 10.250.7.77:38966 TCP connection established with [AF_INET]100.64.1.1:4222\nSat Jan 11 20:20:09 2020 10.250.7.77:38966 Connection reset, restarting [0]\nSat Jan 11 20:20:09 2020 100.64.1.1:4222 Connection reset, restarting [0]\nSat Jan 11 20:20:13 2020 TCP connection established with [AF_INET]10.250.7.77:22852\nSat Jan 11 20:20:13 2020 10.250.7.77:22852 TCP connection established with [AF_INET]100.64.1.1:63792\nSat Jan 11 20:20:13 2020 10.250.7.77:22852 Connection reset, restarting [0]\nSat Jan 11 20:20:13 2020 100.64.1.1:63792 Connection reset, restarting [0]\nSat Jan 11 20:20:19 2020 TCP connection established with [AF_INET]10.250.7.77:38974\nSat Jan 11 20:20:19 2020 10.250.7.77:38974 TCP connection established with [AF_INET]100.64.1.1:4230\nSat Jan 11 20:20:19 2020 10.250.7.77:38974 Connection reset, restarting [0]\nSat Jan 11 20:20:19 2020 100.64.1.1:4230 Connection reset, restarting [0]\nSat Jan 11 20:20:23 2020 TCP connection established with [AF_INET]10.250.7.77:22860\nSat Jan 11 20:20:23 2020 10.250.7.77:22860 TCP connection established with [AF_INET]100.64.1.1:63800\nSat Jan 11 20:20:23 2020 10.250.7.77:22860 Connection reset, restarting [0]\nSat Jan 11 20:20:23 2020 100.64.1.1:63800 
Connection reset, restarting [0]\nSat Jan 11 20:20:29 2020 TCP connection established with [AF_INET]10.250.7.77:38984\nSat Jan 11 20:20:29 2020 10.250.7.77:38984 TCP connection established with [AF_INET]100.64.1.1:4240\nSat Jan 11 20:20:29 2020 10.250.7.77:38984 Connection reset, restarting [0]\nSat Jan 11 20:20:29 2020 100.64.1.1:4240 Connection reset, restarting [0]\nSat Jan 11 20:20:33 2020 TCP connection established with [AF_INET]10.250.7.77:22878\nSat Jan 11 20:20:33 2020 10.250.7.77:22878 TCP connection established with [AF_INET]100.64.1.1:63818\nSat Jan 11 20:20:33 2020 10.250.7.77:22878 Connection reset, restarting [0]\nSat Jan 11 20:20:33 2020 100.64.1.1:63818 Connection reset, restarting [0]\nSat Jan 11 20:20:39 2020 TCP connection established with [AF_INET]10.250.7.77:38998\nSat Jan 11 20:20:39 2020 10.250.7.77:38998 TCP connection established with [AF_INET]100.64.1.1:4254\nSat Jan 11 20:20:39 2020 10.250.7.77:38998 Connection reset, restarting [0]\nSat Jan 11 20:20:39 2020 100.64.1.1:4254 Connection reset, restarting [0]\nSat Jan 11 20:20:43 2020 TCP connection established with [AF_INET]10.250.7.77:22882\nSat Jan 11 20:20:43 2020 10.250.7.77:22882 TCP connection established with [AF_INET]100.64.1.1:63822\nSat Jan 11 20:20:43 2020 10.250.7.77:22882 Connection reset, restarting [0]\nSat Jan 11 20:20:43 2020 100.64.1.1:63822 Connection reset, restarting [0]\nSat Jan 11 20:20:49 2020 TCP connection established with [AF_INET]10.250.7.77:39008\nSat Jan 11 20:20:49 2020 10.250.7.77:39008 TCP connection established with [AF_INET]100.64.1.1:4264\nSat Jan 11 20:20:49 2020 10.250.7.77:39008 Connection reset, restarting [0]\nSat Jan 11 20:20:49 2020 100.64.1.1:4264 Connection reset, restarting [0]\nSat Jan 11 20:20:53 2020 TCP connection established with [AF_INET]10.250.7.77:22896\nSat Jan 11 20:20:53 2020 10.250.7.77:22896 TCP connection established with [AF_INET]100.64.1.1:63836\nSat Jan 11 20:20:53 2020 10.250.7.77:22896 Connection reset, restarting [0]\nSat Jan 11 20:20:53 2020 100.64.1.1:63836 Connection reset, restarting [0]\nSat Jan 11 20:20:59 2020 TCP connection established with [AF_INET]10.250.7.77:39016\nSat Jan 11 20:20:59 2020 10.250.7.77:39016 TCP connection established with [AF_INET]100.64.1.1:4272\nSat Jan 11 20:20:59 2020 10.250.7.77:39016 Connection reset, restarting [0]\nSat Jan 11 20:20:59 2020 100.64.1.1:4272 Connection reset, restarting [0]\nSat Jan 11 20:21:03 2020 TCP connection established with [AF_INET]10.250.7.77:22910\nSat Jan 11 20:21:03 2020 10.250.7.77:22910 TCP connection established with [AF_INET]100.64.1.1:63850\nSat Jan 11 20:21:03 2020 10.250.7.77:22910 Connection reset, restarting [0]\nSat Jan 11 20:21:03 2020 100.64.1.1:63850 Connection reset, restarting [0]\nSat Jan 11 20:21:09 2020 TCP connection established with [AF_INET]10.250.7.77:39030\nSat Jan 11 20:21:09 2020 10.250.7.77:39030 TCP connection established with [AF_INET]100.64.1.1:4286\nSat Jan 11 20:21:09 2020 10.250.7.77:39030 Connection reset, restarting [0]\nSat Jan 11 20:21:09 2020 100.64.1.1:4286 Connection reset, restarting [0]\nSat Jan 11 20:21:13 2020 TCP connection established with [AF_INET]10.250.7.77:22920\nSat Jan 11 20:21:13 2020 10.250.7.77:22920 TCP connection established with [AF_INET]100.64.1.1:63860\nSat Jan 11 20:21:13 2020 10.250.7.77:22920 Connection reset, restarting [0]\nSat Jan 11 20:21:13 2020 100.64.1.1:63860 Connection reset, restarting [0]\nSat Jan 11 20:21:19 2020 TCP connection established with [AF_INET]10.250.7.77:39044\nSat Jan 11 20:21:19 2020 10.250.7.77:39044 
TCP connection established with [AF_INET]100.64.1.1:4300\nSat Jan 11 20:21:19 2020 10.250.7.77:39044 Connection reset, restarting [0]\nSat Jan 11 20:21:19 2020 100.64.1.1:4300 Connection reset, restarting [0]\nSat Jan 11 20:21:23 2020 TCP connection established with [AF_INET]10.250.7.77:22926\nSat Jan 11 20:21:23 2020 10.250.7.77:22926 TCP connection established with [AF_INET]100.64.1.1:63866\nSat Jan 11 20:21:23 2020 10.250.7.77:22926 Connection reset, restarting [0]\nSat Jan 11 20:21:23 2020 100.64.1.1:63866 Connection reset, restarting [0]\nSat Jan 11 20:21:29 2020 TCP connection established with [AF_INET]10.250.7.77:39052\nSat Jan 11 20:21:29 2020 10.250.7.77:39052 TCP connection established with [AF_INET]100.64.1.1:4308\nSat Jan 11 20:21:29 2020 10.250.7.77:39052 Connection reset, restarting [0]\nSat Jan 11 20:21:29 2020 100.64.1.1:4308 Connection reset, restarting [0]\nSat Jan 11 20:21:33 2020 TCP connection established with [AF_INET]10.250.7.77:22936\nSat Jan 11 20:21:33 2020 10.250.7.77:22936 TCP connection established with [AF_INET]100.64.1.1:63876\nSat Jan 11 20:21:33 2020 10.250.7.77:22936 Connection reset, restarting [0]\nSat Jan 11 20:21:33 2020 100.64.1.1:63876 Connection reset, restarting [0]\nSat Jan 11 20:21:39 2020 TCP connection established with [AF_INET]10.250.7.77:39056\nSat Jan 11 20:21:39 2020 10.250.7.77:39056 TCP connection established with [AF_INET]100.64.1.1:4312\nSat Jan 11 20:21:39 2020 10.250.7.77:39056 Connection reset, restarting [0]\nSat Jan 11 20:21:39 2020 100.64.1.1:4312 Connection reset, restarting [0]\nSat Jan 11 20:21:43 2020 TCP connection established with [AF_INET]10.250.7.77:22940\nSat Jan 11 20:21:43 2020 10.250.7.77:22940 TCP connection established with [AF_INET]100.64.1.1:63880\nSat Jan 11 20:21:43 2020 10.250.7.77:22940 Connection reset, restarting [0]\nSat Jan 11 20:21:43 2020 100.64.1.1:63880 Connection reset, restarting [0]\nSat Jan 11 20:21:49 2020 TCP connection established with [AF_INET]10.250.7.77:39078\nSat Jan 11 20:21:49 2020 10.250.7.77:39078 TCP connection established with [AF_INET]100.64.1.1:4334\nSat Jan 11 20:21:49 2020 10.250.7.77:39078 Connection reset, restarting [0]\nSat Jan 11 20:21:49 2020 100.64.1.1:4334 Connection reset, restarting [0]\nSat Jan 11 20:21:53 2020 TCP connection established with [AF_INET]10.250.7.77:22950\nSat Jan 11 20:21:53 2020 10.250.7.77:22950 TCP connection established with [AF_INET]100.64.1.1:63890\nSat Jan 11 20:21:53 2020 10.250.7.77:22950 Connection reset, restarting [0]\nSat Jan 11 20:21:53 2020 100.64.1.1:63890 Connection reset, restarting [0]\nSat Jan 11 20:21:59 2020 TCP connection established with [AF_INET]10.250.7.77:39086\nSat Jan 11 20:21:59 2020 10.250.7.77:39086 TCP connection established with [AF_INET]100.64.1.1:4342\nSat Jan 11 20:21:59 2020 10.250.7.77:39086 Connection reset, restarting [0]\nSat Jan 11 20:21:59 2020 100.64.1.1:4342 Connection reset, restarting [0]\nSat Jan 11 20:22:03 2020 TCP connection established with [AF_INET]10.250.7.77:22964\nSat Jan 11 20:22:03 2020 10.250.7.77:22964 TCP connection established with [AF_INET]100.64.1.1:63904\nSat Jan 11 20:22:03 2020 10.250.7.77:22964 Connection reset, restarting [0]\nSat Jan 11 20:22:03 2020 100.64.1.1:63904 Connection reset, restarting [0]\nSat Jan 11 20:22:09 2020 TCP connection established with [AF_INET]10.250.7.77:39100\nSat Jan 11 20:22:09 2020 10.250.7.77:39100 TCP connection established with [AF_INET]100.64.1.1:4356\nSat Jan 11 20:22:09 2020 10.250.7.77:39100 Connection reset, restarting [0]\nSat Jan 11 20:22:09 2020 
100.64.1.1:4356 Connection reset, restarting [0]\nSat Jan 11 20:22:13 2020 TCP connection established with [AF_INET]10.250.7.77:22978\nSat Jan 11 20:22:13 2020 10.250.7.77:22978 TCP connection established with [AF_INET]100.64.1.1:63918\nSat Jan 11 20:22:13 2020 10.250.7.77:22978 Connection reset, restarting [0]\nSat Jan 11 20:22:13 2020 100.64.1.1:63918 Connection reset, restarting [0]\nSat Jan 11 20:22:19 2020 TCP connection established with [AF_INET]10.250.7.77:39110\nSat Jan 11 20:22:19 2020 10.250.7.77:39110 TCP connection established with [AF_INET]100.64.1.1:4366\nSat Jan 11 20:22:19 2020 10.250.7.77:39110 Connection reset, restarting [0]\nSat Jan 11 20:22:19 2020 100.64.1.1:4366 Connection reset, restarting [0]\nSat Jan 11 20:22:23 2020 TCP connection established with [AF_INET]10.250.7.77:22984\nSat Jan 11 20:22:23 2020 10.250.7.77:22984 TCP connection established with [AF_INET]100.64.1.1:63924\nSat Jan 11 20:22:23 2020 10.250.7.77:22984 Connection reset, restarting [0]\nSat Jan 11 20:22:23 2020 100.64.1.1:63924 Connection reset, restarting [0]\nSat Jan 11 20:22:29 2020 TCP connection established with [AF_INET]10.250.7.77:39122\nSat Jan 11 20:22:29 2020 10.250.7.77:39122 TCP connection established with [AF_INET]100.64.1.1:4378\nSat Jan 11 20:22:29 2020 10.250.7.77:39122 Connection reset, restarting [0]\nSat Jan 11 20:22:29 2020 100.64.1.1:4378 Connection reset, restarting [0]\nSat Jan 11 20:22:33 2020 TCP connection established with [AF_INET]10.250.7.77:22994\nSat Jan 11 20:22:33 2020 10.250.7.77:22994 TCP connection established with [AF_INET]100.64.1.1:63934\nSat Jan 11 20:22:33 2020 10.250.7.77:22994 Connection reset, restarting [0]\nSat Jan 11 20:22:33 2020 100.64.1.1:63934 Connection reset, restarting [0]\nSat Jan 11 20:22:39 2020 TCP connection established with [AF_INET]10.250.7.77:39126\nSat Jan 11 20:22:39 2020 10.250.7.77:39126 TCP connection established with [AF_INET]100.64.1.1:4382\nSat Jan 11 20:22:39 2020 10.250.7.77:39126 Connection reset, restarting [0]\nSat Jan 11 20:22:39 2020 100.64.1.1:4382 Connection reset, restarting [0]\nSat Jan 11 20:22:43 2020 TCP connection established with [AF_INET]10.250.7.77:22998\nSat Jan 11 20:22:43 2020 10.250.7.77:22998 TCP connection established with [AF_INET]100.64.1.1:63938\nSat Jan 11 20:22:43 2020 10.250.7.77:22998 Connection reset, restarting [0]\nSat Jan 11 20:22:43 2020 100.64.1.1:63938 Connection reset, restarting [0]\nSat Jan 11 20:22:49 2020 TCP connection established with [AF_INET]10.250.7.77:39136\nSat Jan 11 20:22:49 2020 10.250.7.77:39136 TCP connection established with [AF_INET]100.64.1.1:4392\nSat Jan 11 20:22:49 2020 10.250.7.77:39136 Connection reset, restarting [0]\nSat Jan 11 20:22:49 2020 100.64.1.1:4392 Connection reset, restarting [0]\nSat Jan 11 20:22:53 2020 TCP connection established with [AF_INET]10.250.7.77:23008\nSat Jan 11 20:22:53 2020 10.250.7.77:23008 TCP connection established with [AF_INET]100.64.1.1:63948\nSat Jan 11 20:22:53 2020 10.250.7.77:23008 Connection reset, restarting [0]\nSat Jan 11 20:22:53 2020 100.64.1.1:63948 Connection reset, restarting [0]\nSat Jan 11 20:22:59 2020 TCP connection established with [AF_INET]10.250.7.77:39144\nSat Jan 11 20:22:59 2020 10.250.7.77:39144 TCP connection established with [AF_INET]100.64.1.1:4400\nSat Jan 11 20:22:59 2020 10.250.7.77:39144 Connection reset, restarting [0]\nSat Jan 11 20:22:59 2020 100.64.1.1:4400 Connection reset, restarting [0]\nSat Jan 11 20:23:03 2020 TCP connection established with [AF_INET]10.250.7.77:23024\nSat Jan 11 20:23:03 2020 
10.250.7.77:23024 TCP connection established with [AF_INET]100.64.1.1:63964\nSat Jan 11 20:23:03 2020 10.250.7.77:23024 Connection reset, restarting [0]\nSat Jan 11 20:23:03 2020 100.64.1.1:63964 Connection reset, restarting [0]\nSat Jan 11 20:23:09 2020 TCP connection established with [AF_INET]10.250.7.77:39160\nSat Jan 11 20:23:09 2020 10.250.7.77:39160 TCP connection established with [AF_INET]100.64.1.1:4416\nSat Jan 11 20:23:09 2020 10.250.7.77:39160 Connection reset, restarting [0]\nSat Jan 11 20:23:09 2020 100.64.1.1:4416 Connection reset, restarting [0]\nSat Jan 11 20:23:13 2020 TCP connection established with [AF_INET]10.250.7.77:23032\nSat Jan 11 20:23:13 2020 10.250.7.77:23032 TCP connection established with [AF_INET]100.64.1.1:63972\nSat Jan 11 20:23:13 2020 10.250.7.77:23032 Connection reset, restarting [0]\nSat Jan 11 20:23:13 2020 100.64.1.1:63972 Connection reset, restarting [0]\nSat Jan 11 20:23:19 2020 TCP connection established with [AF_INET]10.250.7.77:39168\nSat Jan 11 20:23:19 2020 10.250.7.77:39168 TCP connection established with [AF_INET]100.64.1.1:4424\nSat Jan 11 20:23:19 2020 10.250.7.77:39168 Connection reset, restarting [0]\nSat Jan 11 20:23:19 2020 100.64.1.1:4424 Connection reset, restarting [0]\nSat Jan 11 20:23:23 2020 TCP connection established with [AF_INET]10.250.7.77:23042\nSat Jan 11 20:23:23 2020 10.250.7.77:23042 TCP connection established with [AF_INET]100.64.1.1:63982\nSat Jan 11 20:23:23 2020 10.250.7.77:23042 Connection reset, restarting [0]\nSat Jan 11 20:23:23 2020 100.64.1.1:63982 Connection reset, restarting [0]\nSat Jan 11 20:23:29 2020 TCP connection established with [AF_INET]10.250.7.77:39176\nSat Jan 11 20:23:29 2020 10.250.7.77:39176 Connection reset, restarting [0]\nSat Jan 11 20:23:29 2020 TCP connection established with [AF_INET]100.64.1.1:4432\nSat Jan 11 20:23:29 2020 100.64.1.1:4432 Connection reset, restarting [0]\nSat Jan 11 20:23:33 2020 TCP connection established with [AF_INET]10.250.7.77:23052\nSat Jan 11 20:23:33 2020 10.250.7.77:23052 TCP connection established with [AF_INET]100.64.1.1:63992\nSat Jan 11 20:23:33 2020 10.250.7.77:23052 Connection reset, restarting [0]\nSat Jan 11 20:23:33 2020 100.64.1.1:63992 Connection reset, restarting [0]\nSat Jan 11 20:23:39 2020 TCP connection established with [AF_INET]10.250.7.77:39180\nSat Jan 11 20:23:39 2020 10.250.7.77:39180 TCP connection established with [AF_INET]100.64.1.1:4436\nSat Jan 11 20:23:39 2020 10.250.7.77:39180 Connection reset, restarting [0]\nSat Jan 11 20:23:39 2020 100.64.1.1:4436 Connection reset, restarting [0]\nSat Jan 11 20:23:43 2020 TCP connection established with [AF_INET]10.250.7.77:23056\nSat Jan 11 20:23:43 2020 10.250.7.77:23056 TCP connection established with [AF_INET]100.64.1.1:63996\nSat Jan 11 20:23:43 2020 10.250.7.77:23056 Connection reset, restarting [0]\nSat Jan 11 20:23:43 2020 100.64.1.1:63996 Connection reset, restarting [0]\nSat Jan 11 20:23:49 2020 TCP connection established with [AF_INET]10.250.7.77:39194\nSat Jan 11 20:23:49 2020 10.250.7.77:39194 TCP connection established with [AF_INET]100.64.1.1:4450\nSat Jan 11 20:23:49 2020 10.250.7.77:39194 Connection reset, restarting [0]\nSat Jan 11 20:23:49 2020 100.64.1.1:4450 Connection reset, restarting [0]\nSat Jan 11 20:23:53 2020 TCP connection established with [AF_INET]10.250.7.77:23066\nSat Jan 11 20:23:53 2020 10.250.7.77:23066 TCP connection established with [AF_INET]100.64.1.1:64006\nSat Jan 11 20:23:53 2020 10.250.7.77:23066 Connection reset, restarting [0]\nSat Jan 11 20:23:53 2020 
100.64.1.1:64006 Connection reset, restarting [0]\nSat Jan 11 20:23:59 2020 TCP connection established with [AF_INET]10.250.7.77:39202\nSat Jan 11 20:23:59 2020 10.250.7.77:39202 TCP connection established with [AF_INET]100.64.1.1:4458\nSat Jan 11 20:23:59 2020 10.250.7.77:39202 Connection reset, restarting [0]\nSat Jan 11 20:23:59 2020 100.64.1.1:4458 Connection reset, restarting [0]\nSat Jan 11 20:24:03 2020 TCP connection established with [AF_INET]10.250.7.77:23082\nSat Jan 11 20:24:03 2020 10.250.7.77:23082 TCP connection established with [AF_INET]100.64.1.1:64022\nSat Jan 11 20:24:03 2020 10.250.7.77:23082 Connection reset, restarting [0]\nSat Jan 11 20:24:03 2020 100.64.1.1:64022 Connection reset, restarting [0]\nSat Jan 11 20:24:09 2020 TCP connection established with [AF_INET]10.250.7.77:39218\nSat Jan 11 20:24:09 2020 10.250.7.77:39218 Connection reset, restarting [0]\nSat Jan 11 20:24:09 2020 TCP connection established with [AF_INET]100.64.1.1:4474\nSat Jan 11 20:24:09 2020 100.64.1.1:4474 Connection reset, restarting [0]\nSat Jan 11 20:24:13 2020 TCP connection established with [AF_INET]10.250.7.77:23090\nSat Jan 11 20:24:13 2020 10.250.7.77:23090 TCP connection established with [AF_INET]100.64.1.1:64030\nSat Jan 11 20:24:13 2020 10.250.7.77:23090 Connection reset, restarting [0]\nSat Jan 11 20:24:13 2020 100.64.1.1:64030 Connection reset, restarting [0]\nSat Jan 11 20:24:19 2020 TCP connection established with [AF_INET]10.250.7.77:39226\nSat Jan 11 20:24:19 2020 10.250.7.77:39226 TCP connection established with [AF_INET]100.64.1.1:4482\nSat Jan 11 20:24:19 2020 10.250.7.77:39226 Connection reset, restarting [0]\nSat Jan 11 20:24:19 2020 100.64.1.1:4482 Connection reset, restarting [0]\nSat Jan 11 20:24:23 2020 TCP connection established with [AF_INET]10.250.7.77:23096\nSat Jan 11 20:24:23 2020 10.250.7.77:23096 TCP connection established with [AF_INET]100.64.1.1:64036\nSat Jan 11 20:24:23 2020 10.250.7.77:23096 Connection reset, restarting [0]\nSat Jan 11 20:24:23 2020 100.64.1.1:64036 Connection reset, restarting [0]\nSat Jan 11 20:24:29 2020 TCP connection established with [AF_INET]10.250.7.77:39234\nSat Jan 11 20:24:29 2020 10.250.7.77:39234 TCP connection established with [AF_INET]100.64.1.1:4490\nSat Jan 11 20:24:29 2020 10.250.7.77:39234 Connection reset, restarting [0]\nSat Jan 11 20:24:29 2020 100.64.1.1:4490 Connection reset, restarting [0]\nSat Jan 11 20:24:33 2020 TCP connection established with [AF_INET]10.250.7.77:23106\nSat Jan 11 20:24:33 2020 10.250.7.77:23106 TCP connection established with [AF_INET]100.64.1.1:64046\nSat Jan 11 20:24:33 2020 10.250.7.77:23106 Connection reset, restarting [0]\nSat Jan 11 20:24:33 2020 100.64.1.1:64046 Connection reset, restarting [0]\nSat Jan 11 20:24:39 2020 TCP connection established with [AF_INET]10.250.7.77:39238\nSat Jan 11 20:24:39 2020 10.250.7.77:39238 TCP connection established with [AF_INET]100.64.1.1:4494\nSat Jan 11 20:24:39 2020 10.250.7.77:39238 Connection reset, restarting [0]\nSat Jan 11 20:24:39 2020 100.64.1.1:4494 Connection reset, restarting [0]\nSat Jan 11 20:24:43 2020 TCP connection established with [AF_INET]10.250.7.77:23114\nSat Jan 11 20:24:43 2020 10.250.7.77:23114 TCP connection established with [AF_INET]100.64.1.1:64054\nSat Jan 11 20:24:43 2020 10.250.7.77:23114 Connection reset, restarting [0]\nSat Jan 11 20:24:43 2020 100.64.1.1:64054 Connection reset, restarting [0]\nSat Jan 11 20:24:49 2020 TCP connection established with [AF_INET]10.250.7.77:39248\nSat Jan 11 20:24:49 2020 10.250.7.77:39248 TCP 
connection established with [AF_INET]100.64.1.1:4504\nSat Jan 11 20:24:49 2020 10.250.7.77:39248 Connection reset, restarting [0]\nSat Jan 11 20:24:49 2020 100.64.1.1:4504 Connection reset, restarting [0]\nSat Jan 11 20:24:53 2020 TCP connection established with [AF_INET]10.250.7.77:23126\nSat Jan 11 20:24:53 2020 10.250.7.77:23126 TCP connection established with [AF_INET]100.64.1.1:64066\nSat Jan 11 20:24:53 2020 10.250.7.77:23126 Connection reset, restarting [0]\nSat Jan 11 20:24:53 2020 100.64.1.1:64066 Connection reset, restarting [0]\nSat Jan 11 20:24:59 2020 TCP connection established with [AF_INET]10.250.7.77:39260\nSat Jan 11 20:24:59 2020 10.250.7.77:39260 TCP connection established with [AF_INET]100.64.1.1:4516\nSat Jan 11 20:24:59 2020 10.250.7.77:39260 Connection reset, restarting [0]\nSat Jan 11 20:24:59 2020 100.64.1.1:4516 Connection reset, restarting [0]\nSat Jan 11 20:25:03 2020 TCP connection established with [AF_INET]10.250.7.77:23140\nSat Jan 11 20:25:03 2020 10.250.7.77:23140 TCP connection established with [AF_INET]100.64.1.1:64080\nSat Jan 11 20:25:03 2020 10.250.7.77:23140 Connection reset, restarting [0]\nSat Jan 11 20:25:03 2020 100.64.1.1:64080 Connection reset, restarting [0]\nSat Jan 11 20:25:09 2020 TCP connection established with [AF_INET]10.250.7.77:39276\nSat Jan 11 20:25:09 2020 10.250.7.77:39276 TCP connection established with [AF_INET]100.64.1.1:4532\nSat Jan 11 20:25:09 2020 10.250.7.77:39276 Connection reset, restarting [0]\nSat Jan 11 20:25:09 2020 100.64.1.1:4532 Connection reset, restarting [0]\nSat Jan 11 20:25:13 2020 TCP connection established with [AF_INET]10.250.7.77:23148\nSat Jan 11 20:25:13 2020 10.250.7.77:23148 TCP connection established with [AF_INET]100.64.1.1:64088\nSat Jan 11 20:25:13 2020 10.250.7.77:23148 Connection reset, restarting [0]\nSat Jan 11 20:25:13 2020 100.64.1.1:64088 Connection reset, restarting [0]\nSat Jan 11 20:25:19 2020 TCP connection established with [AF_INET]10.250.7.77:39284\nSat Jan 11 20:25:19 2020 10.250.7.77:39284 TCP connection established with [AF_INET]100.64.1.1:4540\nSat Jan 11 20:25:19 2020 10.250.7.77:39284 Connection reset, restarting [0]\nSat Jan 11 20:25:19 2020 100.64.1.1:4540 Connection reset, restarting [0]\nSat Jan 11 20:25:23 2020 TCP connection established with [AF_INET]10.250.7.77:23154\nSat Jan 11 20:25:23 2020 10.250.7.77:23154 TCP connection established with [AF_INET]100.64.1.1:64094\nSat Jan 11 20:25:23 2020 10.250.7.77:23154 Connection reset, restarting [0]\nSat Jan 11 20:25:23 2020 100.64.1.1:64094 Connection reset, restarting [0]\nSat Jan 11 20:25:29 2020 TCP connection established with [AF_INET]10.250.7.77:39292\nSat Jan 11 20:25:29 2020 10.250.7.77:39292 TCP connection established with [AF_INET]100.64.1.1:4548\nSat Jan 11 20:25:29 2020 10.250.7.77:39292 Connection reset, restarting [0]\nSat Jan 11 20:25:29 2020 100.64.1.1:4548 Connection reset, restarting [0]\nSat Jan 11 20:25:33 2020 TCP connection established with [AF_INET]10.250.7.77:23170\nSat Jan 11 20:25:33 2020 10.250.7.77:23170 TCP connection established with [AF_INET]100.64.1.1:64110\nSat Jan 11 20:25:33 2020 10.250.7.77:23170 Connection reset, restarting [0]\nSat Jan 11 20:25:33 2020 100.64.1.1:64110 Connection reset, restarting [0]\nSat Jan 11 20:25:39 2020 TCP connection established with [AF_INET]10.250.7.77:39296\nSat Jan 11 20:25:39 2020 10.250.7.77:39296 TCP connection established with [AF_INET]100.64.1.1:4552\nSat Jan 11 20:25:39 2020 10.250.7.77:39296 Connection reset, restarting [0]\nSat Jan 11 20:25:39 2020 
100.64.1.1:4552 Connection reset, restarting [0]\nSat Jan 11 20:25:43 2020 TCP connection established with [AF_INET]10.250.7.77:23208\nSat Jan 11 20:25:43 2020 10.250.7.77:23208 TCP connection established with [AF_INET]100.64.1.1:64148\nSat Jan 11 20:25:43 2020 10.250.7.77:23208 Connection reset, restarting [0]\nSat Jan 11 20:25:43 2020 100.64.1.1:64148 Connection reset, restarting [0]\nSat Jan 11 20:25:49 2020 TCP connection established with [AF_INET]10.250.7.77:39306\nSat Jan 11 20:25:49 2020 10.250.7.77:39306 TCP connection established with [AF_INET]100.64.1.1:4562\nSat Jan 11 20:25:49 2020 10.250.7.77:39306 Connection reset, restarting [0]\nSat Jan 11 20:25:49 2020 100.64.1.1:4562 Connection reset, restarting [0]\nSat Jan 11 20:25:53 2020 TCP connection established with [AF_INET]10.250.7.77:23224\nSat Jan 11 20:25:53 2020 10.250.7.77:23224 TCP connection established with [AF_INET]100.64.1.1:64164\nSat Jan 11 20:25:53 2020 10.250.7.77:23224 Connection reset, restarting [0]\nSat Jan 11 20:25:53 2020 100.64.1.1:64164 Connection reset, restarting [0]\nSat Jan 11 20:25:59 2020 TCP connection established with [AF_INET]10.250.7.77:39316\nSat Jan 11 20:25:59 2020 10.250.7.77:39316 TCP connection established with [AF_INET]100.64.1.1:4572\nSat Jan 11 20:25:59 2020 10.250.7.77:39316 Connection reset, restarting [0]\nSat Jan 11 20:25:59 2020 100.64.1.1:4572 Connection reset, restarting [0]\nSat Jan 11 20:26:03 2020 TCP connection established with [AF_INET]10.250.7.77:23238\nSat Jan 11 20:26:03 2020 10.250.7.77:23238 TCP connection established with [AF_INET]100.64.1.1:64178\nSat Jan 11 20:26:03 2020 10.250.7.77:23238 Connection reset, restarting [0]\nSat Jan 11 20:26:03 2020 100.64.1.1:64178 Connection reset, restarting [0]\nSat Jan 11 20:26:09 2020 TCP connection established with [AF_INET]10.250.7.77:39330\nSat Jan 11 20:26:09 2020 10.250.7.77:39330 TCP connection established with [AF_INET]100.64.1.1:4586\nSat Jan 11 20:26:09 2020 10.250.7.77:39330 Connection reset, restarting [0]\nSat Jan 11 20:26:09 2020 100.64.1.1:4586 Connection reset, restarting [0]\nSat Jan 11 20:26:13 2020 TCP connection established with [AF_INET]10.250.7.77:23248\nSat Jan 11 20:26:13 2020 10.250.7.77:23248 TCP connection established with [AF_INET]100.64.1.1:64188\nSat Jan 11 20:26:13 2020 10.250.7.77:23248 Connection reset, restarting [0]\nSat Jan 11 20:26:13 2020 100.64.1.1:64188 Connection reset, restarting [0]\nSat Jan 11 20:26:19 2020 TCP connection established with [AF_INET]10.250.7.77:39342\nSat Jan 11 20:26:19 2020 10.250.7.77:39342 TCP connection established with [AF_INET]100.64.1.1:4598\nSat Jan 11 20:26:19 2020 10.250.7.77:39342 Connection reset, restarting [0]\nSat Jan 11 20:26:19 2020 100.64.1.1:4598 Connection reset, restarting [0]\nSat Jan 11 20:26:23 2020 TCP connection established with [AF_INET]10.250.7.77:23254\nSat Jan 11 20:26:23 2020 10.250.7.77:23254 TCP connection established with [AF_INET]100.64.1.1:64194\nSat Jan 11 20:26:23 2020 10.250.7.77:23254 Connection reset, restarting [0]\nSat Jan 11 20:26:23 2020 100.64.1.1:64194 Connection reset, restarting [0]\nSat Jan 11 20:26:29 2020 TCP connection established with [AF_INET]10.250.7.77:39350\nSat Jan 11 20:26:29 2020 10.250.7.77:39350 TCP connection established with [AF_INET]100.64.1.1:4606\nSat Jan 11 20:26:29 2020 10.250.7.77:39350 Connection reset, restarting [0]\nSat Jan 11 20:26:29 2020 100.64.1.1:4606 Connection reset, restarting [0]\nSat Jan 11 20:26:33 2020 TCP connection established with [AF_INET]10.250.7.77:23264\nSat Jan 11 20:26:33 2020 
10.250.7.77:23264 TCP connection established with [AF_INET]100.64.1.1:64204\nSat Jan 11 20:26:33 2020 10.250.7.77:23264 Connection reset, restarting [0]\nSat Jan 11 20:26:33 2020 100.64.1.1:64204 Connection reset, restarting [0]\nSat Jan 11 20:26:39 2020 TCP connection established with [AF_INET]10.250.7.77:39354\nSat Jan 11 20:26:39 2020 10.250.7.77:39354 TCP connection established with [AF_INET]100.64.1.1:4610\nSat Jan 11 20:26:39 2020 10.250.7.77:39354 Connection reset, restarting [0]\nSat Jan 11 20:26:39 2020 100.64.1.1:4610 Connection reset, restarting [0]\nSat Jan 11 20:26:43 2020 TCP connection established with [AF_INET]10.250.7.77:23268\nSat Jan 11 20:26:43 2020 10.250.7.77:23268 TCP connection established with [AF_INET]100.64.1.1:64208\nSat Jan 11 20:26:43 2020 10.250.7.77:23268 Connection reset, restarting [0]\nSat Jan 11 20:26:43 2020 100.64.1.1:64208 Connection reset, restarting [0]\nSat Jan 11 20:26:49 2020 TCP connection established with [AF_INET]10.250.7.77:39364\nSat Jan 11 20:26:49 2020 10.250.7.77:39364 TCP connection established with [AF_INET]100.64.1.1:4620\nSat Jan 11 20:26:49 2020 10.250.7.77:39364 Connection reset, restarting [0]\nSat Jan 11 20:26:49 2020 100.64.1.1:4620 Connection reset, restarting [0]\nSat Jan 11 20:26:53 2020 TCP connection established with [AF_INET]10.250.7.77:23280\nSat Jan 11 20:26:53 2020 10.250.7.77:23280 TCP connection established with [AF_INET]100.64.1.1:64220\nSat Jan 11 20:26:53 2020 10.250.7.77:23280 Connection reset, restarting [0]\nSat Jan 11 20:26:53 2020 100.64.1.1:64220 Connection reset, restarting [0]\nSat Jan 11 20:26:59 2020 TCP connection established with [AF_INET]10.250.7.77:39374\nSat Jan 11 20:26:59 2020 10.250.7.77:39374 Connection reset, restarting [0]\nSat Jan 11 20:26:59 2020 TCP connection established with [AF_INET]100.64.1.1:4630\nSat Jan 11 20:26:59 2020 100.64.1.1:4630 Connection reset, restarting [0]\nSat Jan 11 20:27:03 2020 TCP connection established with [AF_INET]10.250.7.77:23296\nSat Jan 11 20:27:03 2020 10.250.7.77:23296 TCP connection established with [AF_INET]100.64.1.1:64236\nSat Jan 11 20:27:03 2020 10.250.7.77:23296 Connection reset, restarting [0]\nSat Jan 11 20:27:03 2020 100.64.1.1:64236 Connection reset, restarting [0]\nSat Jan 11 20:27:09 2020 TCP connection established with [AF_INET]10.250.7.77:39390\nSat Jan 11 20:27:09 2020 10.250.7.77:39390 TCP connection established with [AF_INET]100.64.1.1:4646\nSat Jan 11 20:27:09 2020 10.250.7.77:39390 Connection reset, restarting [0]\nSat Jan 11 20:27:09 2020 100.64.1.1:4646 Connection reset, restarting [0]\nSat Jan 11 20:27:13 2020 TCP connection established with [AF_INET]10.250.7.77:23308\nSat Jan 11 20:27:13 2020 10.250.7.77:23308 TCP connection established with [AF_INET]100.64.1.1:64248\nSat Jan 11 20:27:13 2020 10.250.7.77:23308 Connection reset, restarting [0]\nSat Jan 11 20:27:13 2020 100.64.1.1:64248 Connection reset, restarting [0]\nSat Jan 11 20:27:19 2020 TCP connection established with [AF_INET]10.250.7.77:39398\nSat Jan 11 20:27:19 2020 10.250.7.77:39398 TCP connection established with [AF_INET]100.64.1.1:4654\nSat Jan 11 20:27:19 2020 10.250.7.77:39398 Connection reset, restarting [0]\nSat Jan 11 20:27:19 2020 100.64.1.1:4654 Connection reset, restarting [0]\nSat Jan 11 20:27:23 2020 TCP connection established with [AF_INET]10.250.7.77:23314\nSat Jan 11 20:27:23 2020 10.250.7.77:23314 TCP connection established with [AF_INET]100.64.1.1:64254\nSat Jan 11 20:27:23 2020 10.250.7.77:23314 Connection reset, restarting [0]\nSat Jan 11 20:27:23 2020 
100.64.1.1:64254 Connection reset, restarting [0]\nSat Jan 11 20:27:29 2020 TCP connection established with [AF_INET]10.250.7.77:39410\nSat Jan 11 20:27:29 2020 10.250.7.77:39410 TCP connection established with [AF_INET]100.64.1.1:4666\nSat Jan 11 20:27:29 2020 10.250.7.77:39410 Connection reset, restarting [0]\nSat Jan 11 20:27:29 2020 100.64.1.1:4666 Connection reset, restarting [0]\nSat Jan 11 20:27:33 2020 TCP connection established with [AF_INET]10.250.7.77:23324\nSat Jan 11 20:27:33 2020 10.250.7.77:23324 TCP connection established with [AF_INET]100.64.1.1:64264\nSat Jan 11 20:27:33 2020 10.250.7.77:23324 Connection reset, restarting [0]\nSat Jan 11 20:27:33 2020 100.64.1.1:64264 Connection reset, restarting [0]\nSat Jan 11 20:27:39 2020 TCP connection established with [AF_INET]10.250.7.77:39414\nSat Jan 11 20:27:39 2020 10.250.7.77:39414 TCP connection established with [AF_INET]100.64.1.1:4670\nSat Jan 11 20:27:39 2020 10.250.7.77:39414 Connection reset, restarting [0]\nSat Jan 11 20:27:39 2020 100.64.1.1:4670 Connection reset, restarting [0]\nSat Jan 11 20:27:43 2020 TCP connection established with [AF_INET]10.250.7.77:23330\nSat Jan 11 20:27:43 2020 10.250.7.77:23330 Connection reset, restarting [0]\nSat Jan 11 20:27:43 2020 TCP connection established with [AF_INET]100.64.1.1:64270\nSat Jan 11 20:27:43 2020 100.64.1.1:64270 Connection reset, restarting [0]\nSat Jan 11 20:27:49 2020 TCP connection established with [AF_INET]10.250.7.77:39426\nSat Jan 11 20:27:49 2020 10.250.7.77:39426 TCP connection established with [AF_INET]100.64.1.1:4682\nSat Jan 11 20:27:49 2020 10.250.7.77:39426 Connection reset, restarting [0]\nSat Jan 11 20:27:49 2020 100.64.1.1:4682 Connection reset, restarting [0]\nSat Jan 11 20:27:53 2020 TCP connection established with [AF_INET]10.250.7.77:23340\nSat Jan 11 20:27:53 2020 10.250.7.77:23340 TCP connection established with [AF_INET]100.64.1.1:64280\nSat Jan 11 20:27:53 2020 10.250.7.77:23340 Connection reset, restarting [0]\nSat Jan 11 20:27:53 2020 100.64.1.1:64280 Connection reset, restarting [0]\nSat Jan 11 20:27:59 2020 TCP connection established with [AF_INET]10.250.7.77:39434\nSat Jan 11 20:27:59 2020 10.250.7.77:39434 TCP connection established with [AF_INET]100.64.1.1:4690\nSat Jan 11 20:27:59 2020 10.250.7.77:39434 Connection reset, restarting [0]\nSat Jan 11 20:27:59 2020 100.64.1.1:4690 Connection reset, restarting [0]\nSat Jan 11 20:28:03 2020 TCP connection established with [AF_INET]10.250.7.77:23360\nSat Jan 11 20:28:03 2020 10.250.7.77:23360 TCP connection established with [AF_INET]100.64.1.1:64300\nSat Jan 11 20:28:03 2020 10.250.7.77:23360 Connection reset, restarting [0]\nSat Jan 11 20:28:03 2020 100.64.1.1:64300 Connection reset, restarting [0]\nSat Jan 11 20:28:09 2020 TCP connection established with [AF_INET]10.250.7.77:39454\nSat Jan 11 20:28:09 2020 10.250.7.77:39454 Connection reset, restarting [0]\nSat Jan 11 20:28:09 2020 TCP connection established with [AF_INET]100.64.1.1:4710\nSat Jan 11 20:28:09 2020 100.64.1.1:4710 Connection reset, restarting [0]\nSat Jan 11 20:28:13 2020 TCP connection established with [AF_INET]10.250.7.77:23368\nSat Jan 11 20:28:13 2020 10.250.7.77:23368 TCP connection established with [AF_INET]100.64.1.1:64308\nSat Jan 11 20:28:13 2020 10.250.7.77:23368 Connection reset, restarting [0]\nSat Jan 11 20:28:13 2020 100.64.1.1:64308 Connection reset, restarting [0]\nSat Jan 11 20:28:19 2020 TCP connection established with [AF_INET]10.250.7.77:39462\nSat Jan 11 20:28:19 2020 10.250.7.77:39462 Connection reset, 
restarting [0]\nSat Jan 11 20:28:19 2020 TCP connection established with [AF_INET]100.64.1.1:4718\nSat Jan 11 20:28:19 2020 100.64.1.1:4718 Connection reset, restarting [0]\nSat Jan 11 20:28:23 2020 TCP connection established with [AF_INET]10.250.7.77:23378\nSat Jan 11 20:28:23 2020 10.250.7.77:23378 TCP connection established with [AF_INET]100.64.1.1:64318\nSat Jan 11 20:28:23 2020 10.250.7.77:23378 Connection reset, restarting [0]\nSat Jan 11 20:28:23 2020 100.64.1.1:64318 Connection reset, restarting [0]\nSat Jan 11 20:28:29 2020 TCP connection established with [AF_INET]10.250.7.77:39476\nSat Jan 11 20:28:29 2020 10.250.7.77:39476 TCP connection established with [AF_INET]100.64.1.1:4732\nSat Jan 11 20:28:29 2020 10.250.7.77:39476 Connection reset, restarting [0]\nSat Jan 11 20:28:29 2020 100.64.1.1:4732 Connection reset, restarting [0]\nSat Jan 11 20:28:33 2020 TCP connection established with [AF_INET]10.250.7.77:23388\nSat Jan 11 20:28:33 2020 10.250.7.77:23388 TCP connection established with [AF_INET]100.64.1.1:64328\nSat Jan 11 20:28:33 2020 10.250.7.77:23388 Connection reset, restarting [0]\nSat Jan 11 20:28:33 2020 100.64.1.1:64328 Connection reset, restarting [0]\nSat Jan 11 20:28:39 2020 TCP connection established with [AF_INET]10.250.7.77:39514\nSat Jan 11 20:28:39 2020 10.250.7.77:39514 TCP connection established with [AF_INET]100.64.1.1:4770\nSat Jan 11 20:28:39 2020 10.250.7.77:39514 Connection reset, restarting [0]\nSat Jan 11 20:28:39 2020 100.64.1.1:4770 Connection reset, restarting [0]\nSat Jan 11 20:28:43 2020 TCP connection established with [AF_INET]10.250.7.77:23394\nSat Jan 11 20:28:43 2020 10.250.7.77:23394 TCP connection established with [AF_INET]100.64.1.1:64334\nSat Jan 11 20:28:43 2020 10.250.7.77:23394 Connection reset, restarting [0]\nSat Jan 11 20:28:43 2020 100.64.1.1:64334 Connection reset, restarting [0]\nSat Jan 11 20:28:49 2020 TCP connection established with [AF_INET]10.250.7.77:39530\nSat Jan 11 20:28:49 2020 10.250.7.77:39530 TCP connection established with [AF_INET]100.64.1.1:4786\nSat Jan 11 20:28:49 2020 10.250.7.77:39530 Connection reset, restarting [0]\nSat Jan 11 20:28:49 2020 100.64.1.1:4786 Connection reset, restarting [0]\nSat Jan 11 20:28:53 2020 TCP connection established with [AF_INET]10.250.7.77:23404\nSat Jan 11 20:28:53 2020 10.250.7.77:23404 TCP connection established with [AF_INET]100.64.1.1:64344\nSat Jan 11 20:28:53 2020 10.250.7.77:23404 Connection reset, restarting [0]\nSat Jan 11 20:28:53 2020 100.64.1.1:64344 Connection reset, restarting [0]\nSat Jan 11 20:28:59 2020 TCP connection established with [AF_INET]10.250.7.77:39538\nSat Jan 11 20:28:59 2020 10.250.7.77:39538 TCP connection established with [AF_INET]100.64.1.1:4794\nSat Jan 11 20:28:59 2020 10.250.7.77:39538 Connection reset, restarting [0]\nSat Jan 11 20:28:59 2020 100.64.1.1:4794 Connection reset, restarting [0]\nSat Jan 11 20:29:03 2020 TCP connection established with [AF_INET]10.250.7.77:23422\nSat Jan 11 20:29:03 2020 10.250.7.77:23422 TCP connection established with [AF_INET]100.64.1.1:64362\nSat Jan 11 20:29:03 2020 10.250.7.77:23422 Connection reset, restarting [0]\nSat Jan 11 20:29:03 2020 100.64.1.1:64362 Connection reset, restarting [0]\nSat Jan 11 20:29:09 2020 TCP connection established with [AF_INET]10.250.7.77:39558\nSat Jan 11 20:29:09 2020 10.250.7.77:39558 TCP connection established with [AF_INET]100.64.1.1:4814\nSat Jan 11 20:29:09 2020 10.250.7.77:39558 Connection reset, restarting [0]\nSat Jan 11 20:29:09 2020 100.64.1.1:4814 Connection reset, 
restarting [0]\nSat Jan 11 20:29:13 2020 TCP connection established with [AF_INET]10.250.7.77:23430\nSat Jan 11 20:29:13 2020 10.250.7.77:23430 TCP connection established with [AF_INET]100.64.1.1:64370\nSat Jan 11 20:29:13 2020 10.250.7.77:23430 Connection reset, restarting [0]\nSat Jan 11 20:29:13 2020 100.64.1.1:64370 Connection reset, restarting [0]\nSat Jan 11 20:29:19 2020 TCP connection established with [AF_INET]10.250.7.77:39566\nSat Jan 11 20:29:19 2020 10.250.7.77:39566 TCP connection established with [AF_INET]100.64.1.1:4822\nSat Jan 11 20:29:19 2020 10.250.7.77:39566 Connection reset, restarting [0]\nSat Jan 11 20:29:19 2020 100.64.1.1:4822 Connection reset, restarting [0]\nSat Jan 11 20:29:23 2020 TCP connection established with [AF_INET]10.250.7.77:23436\nSat Jan 11 20:29:23 2020 10.250.7.77:23436 TCP connection established with [AF_INET]100.64.1.1:64376\nSat Jan 11 20:29:23 2020 10.250.7.77:23436 Connection reset, restarting [0]\nSat Jan 11 20:29:23 2020 100.64.1.1:64376 Connection reset, restarting [0]\n==== END logs for container vpn-shoot of pod kube-system/vpn-shoot-5d76665b65-6rkww ====\n{\n \"kind\": \"EventList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/default/events\",\n \"resourceVersion\": \"25475\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"e2e-test-webhook.15e8ed24220bcd58\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/e2e-test-webhook.15e8ed24220bcd58\",\n \"uid\": \"7770be41-22fa-4faf-a7a5-ff8acf3ae266\",\n \"resourceVersion\": \"16943\",\n \"creationTimestamp\": \"2020-01-11T19:53:28Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Endpoints\",\n \"name\": \"e2e-test-webhook\",\n \"apiVersion\": \"v1\"\n },\n \"reason\": \"FailedToCreateEndpoint\",\n \"message\": \"Failed to create endpoint for service webhook-9767/e2e-test-webhook: endpoints \\\"e2e-test-webhook\\\" is forbidden: unable to create new content in namespace webhook-9767 because it is being terminated\",\n \"source\": {\n \"component\": \"endpoint-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T19:53:28Z\",\n \"lastTimestamp\": \"2020-01-11T19:53:29Z\",\n \"count\": 5,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"fooservice.15e8ed9d5f4c5ac2\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/fooservice.15e8ed9d5f4c5ac2\",\n \"uid\": \"2e73f18d-d6a0-45cc-8dd3-a87f3c7a8e36\",\n \"resourceVersion\": \"18697\",\n \"creationTimestamp\": \"2020-01-11T20:02:09Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Endpoints\",\n \"name\": \"fooservice\",\n \"apiVersion\": \"v1\"\n },\n \"reason\": \"FailedToCreateEndpoint\",\n \"message\": \"Failed to create endpoint for service pods-6711/fooservice: endpoints \\\"fooservice\\\" is forbidden: unable to create new content in namespace pods-6711 because it is being terminated\",\n \"source\": {\n \"component\": \"endpoint-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:02:09Z\",\n \"lastTimestamp\": \"2020-01-11T20:02:09Z\",\n \"count\": 4,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"lb-finalizer.15e8edc09de21bc1\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/lb-finalizer.15e8edc09de21bc1\",\n \"uid\": \"10e107c9-816a-4f08-a317-a66f757d1193\",\n \"resourceVersion\": \"19331\",\n 
\"creationTimestamp\": \"2020-01-11T20:04:41Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Endpoints\",\n \"name\": \"lb-finalizer\",\n \"apiVersion\": \"v1\"\n },\n \"reason\": \"FailedToCreateEndpoint\",\n \"message\": \"Failed to create endpoint for service services-6943/lb-finalizer: endpoints \\\"lb-finalizer\\\" already exists\",\n \"source\": {\n \"component\": \"endpoint-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:04:40Z\",\n \"lastTimestamp\": \"2020-01-11T20:04:40Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"nfs-ddr4b.15e8ede47c4f3350\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/nfs-ddr4b.15e8ede47c4f3350\",\n \"uid\": \"1f78fe17-f77e-42cf-983a-27115719c9ea\",\n \"resourceVersion\": \"20155\",\n \"creationTimestamp\": \"2020-01-11T20:07:14Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"nfs-ddr4b\",\n \"uid\": \"43903af6-3336-4a1e-a331-1d7deeb217c8\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69399\"\n },\n \"reason\": \"RecyclerPod\",\n \"message\": \"Recycler pod: Successfully assigned default/recycler-for-nfs-ddr4b to ip-10-250-27-25.ec2.internal\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:14Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:22Z\",\n \"count\": 3,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"nfs-ddr4b.15e8ede4b7ee586d\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/nfs-ddr4b.15e8ede4b7ee586d\",\n \"uid\": \"19851978-5025-4c72-a656-98e05fb2b292\",\n \"resourceVersion\": \"20152\",\n \"creationTimestamp\": \"2020-01-11T20:07:15Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"nfs-ddr4b\",\n \"uid\": \"43903af6-3336-4a1e-a331-1d7deeb217c8\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69399\"\n },\n \"reason\": \"RecyclerPod\",\n \"message\": \"Recycler pod: Pulling image \\\"busybox:1.27\\\"\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:15Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:22Z\",\n \"count\": 2,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"nfs-ddr4b.15e8ede4c9e789ed\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/nfs-ddr4b.15e8ede4c9e789ed\",\n \"uid\": \"9daedd90-cf26-4443-b33f-937a6d785f13\",\n \"resourceVersion\": \"20154\",\n \"creationTimestamp\": \"2020-01-11T20:07:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"nfs-ddr4b\",\n \"uid\": \"43903af6-3336-4a1e-a331-1d7deeb217c8\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69399\"\n },\n \"reason\": \"RecyclerPod\",\n \"message\": \"Recycler pod: Successfully pulled image \\\"busybox:1.27\\\"\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:16Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:22Z\",\n \"count\": 2,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"nfs-ddr4b.15e8ede4ce9c34e0\",\n \"namespace\": \"default\",\n \"selfLink\": 
\"/api/v1/namespaces/default/events/nfs-ddr4b.15e8ede4ce9c34e0\",\n \"uid\": \"418ade72-e29d-4a0b-ae61-72d2cb184a6b\",\n \"resourceVersion\": \"20160\",\n \"creationTimestamp\": \"2020-01-11T20:07:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"nfs-ddr4b\",\n \"uid\": \"43903af6-3336-4a1e-a331-1d7deeb217c8\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69399\"\n },\n \"reason\": \"RecyclerPod\",\n \"message\": \"Recycler pod: Created container pv-recycler\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:16Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:23Z\",\n \"count\": 3,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"nfs-ddr4b.15e8ede4d54ffa09\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/nfs-ddr4b.15e8ede4d54ffa09\",\n \"uid\": \"b07d6416-5549-4a6c-98e1-f8999022ee4d\",\n \"resourceVersion\": \"20162\",\n \"creationTimestamp\": \"2020-01-11T20:07:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"nfs-ddr4b\",\n \"uid\": \"43903af6-3336-4a1e-a331-1d7deeb217c8\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69399\"\n },\n \"reason\": \"RecyclerPod\",\n \"message\": \"Recycler pod: Started container pv-recycler\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:16Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:23Z\",\n \"count\": 3,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"nfs-ddr4b.15e8ede501dd9257\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/nfs-ddr4b.15e8ede501dd9257\",\n \"uid\": \"bbc55a12-cb94-4d50-93b9-8e29cdda2290\",\n \"resourceVersion\": \"20176\",\n \"creationTimestamp\": \"2020-01-11T20:07:17Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"nfs-ddr4b\",\n \"uid\": \"43903af6-3336-4a1e-a331-1d7deeb217c8\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69399\"\n },\n \"reason\": \"VolumeRecycled\",\n \"message\": \"Volume recycled\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:17Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:24Z\",\n \"count\": 2,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"nfs-ddr4b.15e8ede66de52688\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/nfs-ddr4b.15e8ede66de52688\",\n \"uid\": \"1f58a0c1-aa76-4557-9803-11052dc27ddd\",\n \"resourceVersion\": \"20158\",\n \"creationTimestamp\": \"2020-01-11T20:07:23Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"nfs-ddr4b\",\n \"uid\": \"43903af6-3336-4a1e-a331-1d7deeb217c8\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69479\"\n },\n \"reason\": \"RecyclerPod\",\n \"message\": \"Recycler pod: Container image \\\"busybox:1.27\\\" already present on machine\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:23Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:23Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n 
\"metadata\": {\n \"name\": \"pvc-13f9e99f-ec34-4234-912c-e91f2c45ca65.15e8ee902caef888\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/pvc-13f9e99f-ec34-4234-912c-e91f2c45ca65.15e8ee902caef888\",\n \"uid\": \"eece06c3-c3f0-4657-adba-90159c08ff5e\",\n \"resourceVersion\": \"23847\",\n \"creationTimestamp\": \"2020-01-11T20:19:32Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"pvc-13f9e99f-ec34-4234-912c-e91f2c45ca65\",\n \"uid\": \"65556e40-c2d8-4a1c-a964-e9b4616b04cd\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"77921\"\n },\n \"reason\": \"VolumeFailedDelete\",\n \"message\": \"Error deleting EBS volume \\\"vol-09b429fb053b5eb75\\\" since volume is currently attached to \\\"i-0551dba45aad7abfa\\\"\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:19:32Z\",\n \"lastTimestamp\": \"2020-01-11T20:19:32Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"pvc-4fda035b-38d7-4cbf-bae0-98d8dc84b28e.15e8ef09a73c31cd\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/pvc-4fda035b-38d7-4cbf-bae0-98d8dc84b28e.15e8ef09a73c31cd\",\n \"uid\": \"84c34e05-22ec-42c7-bcc9-15d4b2ef56d8\",\n \"resourceVersion\": \"25194\",\n \"creationTimestamp\": \"2020-01-11T20:28:14Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"pvc-4fda035b-38d7-4cbf-bae0-98d8dc84b28e\",\n \"uid\": \"df756554-d7df-421c-bd81-957c935ba612\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"82540\"\n },\n \"reason\": \"VolumeFailedDelete\",\n \"message\": \"Error deleting EBS volume \\\"vol-0bd2e5ab9a6c4c9c3\\\" since volume is currently attached to \\\"i-0a8c404292a3c92e9\\\"\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:28:14Z\",\n \"lastTimestamp\": \"2020-01-11T20:28:14Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"pvc-542d1ad0-af62-4173-b477-10746b3f242d.15e8ed46e6de67f4\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/pvc-542d1ad0-af62-4173-b477-10746b3f242d.15e8ed46e6de67f4\",\n \"uid\": \"fdeaf98d-4b82-47e3-b7b2-c3ba9fd77422\",\n \"resourceVersion\": \"17365\",\n \"creationTimestamp\": \"2020-01-11T19:55:58Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"pvc-542d1ad0-af62-4173-b477-10746b3f242d\",\n \"uid\": \"82395ec8-78a6-4177-9eb6-14c53c90d86b\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"62570\"\n },\n \"reason\": \"VolumeFailedDelete\",\n \"message\": \"Error deleting EBS volume \\\"vol-0eda39f54c136dac5\\\" since volume is currently attached to \\\"i-0551dba45aad7abfa\\\"\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T19:55:58Z\",\n \"lastTimestamp\": \"2020-01-11T19:55:58Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"pvc-7a273daf-0b68-49e5-b4fc-bb24164bc112.15e8ee9027347500\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/pvc-7a273daf-0b68-49e5-b4fc-bb24164bc112.15e8ee9027347500\",\n \"uid\": 
\"88e53ded-b97a-42bb-b4e8-cbffb89e282b\",\n \"resourceVersion\": \"23846\",\n \"creationTimestamp\": \"2020-01-11T20:19:32Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"pvc-7a273daf-0b68-49e5-b4fc-bb24164bc112\",\n \"uid\": \"abdc2dce-e2ee-42dc-b8ba-effa1e62c622\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"77920\"\n },\n \"reason\": \"VolumeFailedDelete\",\n \"message\": \"Error deleting EBS volume \\\"vol-017fc9057b555bf8f\\\" since volume is currently attached to \\\"i-0a8c404292a3c92e9\\\"\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:19:32Z\",\n \"lastTimestamp\": \"2020-01-11T20:19:32Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"pvc-88bad2b5-c157-4cf6-a481-9b4d88a11947.15e8ee32103eac20\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/pvc-88bad2b5-c157-4cf6-a481-9b4d88a11947.15e8ee32103eac20\",\n \"uid\": \"88b15b39-7d91-4e39-aa6d-f12c0bee1187\",\n \"resourceVersion\": \"21494\",\n \"creationTimestamp\": \"2020-01-11T20:12:48Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"pvc-88bad2b5-c157-4cf6-a481-9b4d88a11947\",\n \"uid\": \"dba4334c-26af-4291-b7ab-6d78082930e6\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"72709\"\n },\n \"reason\": \"VolumeFailedDelete\",\n \"message\": \"Error deleting EBS volume \\\"vol-0b3a38d6f8487e32f\\\" since volume is currently attached to \\\"i-0a8c404292a3c92e9\\\"\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T20:12:48Z\",\n \"lastTimestamp\": \"2020-01-11T20:12:48Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"pvc-c530afb1-159f-4b2f-b8ae-5bb4ee03981f.15e8ec568abb9831\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/pvc-c530afb1-159f-4b2f-b8ae-5bb4ee03981f.15e8ec568abb9831\",\n \"uid\": \"fb1026f0-fd34-4399-a264-4217a61bba72\",\n \"resourceVersion\": \"10682\",\n \"creationTimestamp\": \"2020-01-11T19:38:45Z\"\n },\n \"involvedObject\": {\n \"kind\": \"PersistentVolume\",\n \"name\": \"pvc-c530afb1-159f-4b2f-b8ae-5bb4ee03981f\",\n \"uid\": \"6e490600-5537-43d1-9d13-3b889f34fc0c\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"49396\"\n },\n \"reason\": \"VolumeFailedDelete\",\n \"message\": \"Error deleting EBS volume \\\"vol-0ce5f295774336786\\\" since volume is currently attached to \\\"i-0a8c404292a3c92e9\\\"\",\n \"source\": {\n \"component\": \"persistentvolume-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T19:38:45Z\",\n \"lastTimestamp\": \"2020-01-11T19:38:45Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede47c250c50\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede47c250c50\",\n \"uid\": \"7137b1c8-c12e-4bd4-89cf-718e9ac2db8f\",\n \"resourceVersion\": \"20120\",\n \"creationTimestamp\": \"2020-01-11T20:07:14Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"21396ff2-81af-48ce-abd1-2a1b8227d833\",\n \"apiVersion\": 
\"v1\",\n \"resourceVersion\": \"69400\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned default/recycler-for-nfs-ddr4b to ip-10-250-27-25.ec2.internal\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": null,\n \"lastTimestamp\": null,\n \"type\": \"Normal\",\n \"eventTime\": \"2020-01-11T20:07:14.984482Z\",\n \"action\": \"Binding\",\n \"reportingComponent\": \"default-scheduler\",\n \"reportingInstance\": \"default-scheduler-kube-scheduler-6f8f595df8-tfkxs\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede4b7e5e64a\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede4b7e5e64a\",\n \"uid\": \"681557b3-d1b6-4b72-bbfa-c5684f156ef2\",\n \"resourceVersion\": \"20127\",\n \"creationTimestamp\": \"2020-01-11T20:07:15Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"21396ff2-81af-48ce-abd1-2a1b8227d833\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69401\",\n \"fieldPath\": \"spec.containers{pv-recycler}\"\n },\n \"reason\": \"Pulling\",\n \"message\": \"Pulling image \\\"busybox:1.27\\\"\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:15Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:15Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede4c9db8bb1\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede4c9db8bb1\",\n \"uid\": \"327404b1-7751-4c0a-b49b-74870bfa58f1\",\n \"resourceVersion\": \"20129\",\n \"creationTimestamp\": \"2020-01-11T20:07:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"21396ff2-81af-48ce-abd1-2a1b8227d833\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69401\",\n \"fieldPath\": \"spec.containers{pv-recycler}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Successfully pulled image \\\"busybox:1.27\\\"\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:16Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede4ce90d61a\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede4ce90d61a\",\n \"uid\": \"b29227dc-a945-4971-b9d4-bf5b88758703\",\n \"resourceVersion\": \"20132\",\n \"creationTimestamp\": \"2020-01-11T20:07:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"21396ff2-81af-48ce-abd1-2a1b8227d833\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69401\",\n \"fieldPath\": \"spec.containers{pv-recycler}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container pv-recycler\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:16Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n 
\"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede4d51e782e\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede4d51e782e\",\n \"uid\": \"1212a141-207b-43ee-af0c-7b201b845c28\",\n \"resourceVersion\": \"20134\",\n \"creationTimestamp\": \"2020-01-11T20:07:16Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"21396ff2-81af-48ce-abd1-2a1b8227d833\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69401\",\n \"fieldPath\": \"spec.containers{pv-recycler}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container pv-recycler\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:16Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:16Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede621cf7061\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede621cf7061\",\n \"uid\": \"385de585-fd45-4633-9ac2-f43ebdfb9da2\",\n \"resourceVersion\": \"20150\",\n \"creationTimestamp\": \"2020-01-11T20:07:22Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"317deb46-b323-4b93-97cc-1b55e928b1f6\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69480\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned default/recycler-for-nfs-ddr4b to ip-10-250-27-25.ec2.internal\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": null,\n \"lastTimestamp\": null,\n \"type\": \"Normal\",\n \"eventTime\": \"2020-01-11T20:07:22.058858Z\",\n \"action\": \"Binding\",\n \"reportingComponent\": \"default-scheduler\",\n \"reportingInstance\": \"default-scheduler-kube-scheduler-6f8f595df8-tfkxs\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede66dd998ae\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede66dd998ae\",\n \"uid\": \"e32c43ba-6666-4a2f-b9b2-d114764a5ee9\",\n \"resourceVersion\": \"20157\",\n \"creationTimestamp\": \"2020-01-11T20:07:23Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"317deb46-b323-4b93-97cc-1b55e928b1f6\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69481\",\n \"fieldPath\": \"spec.containers{pv-recycler}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"busybox:1.27\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:23Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:23Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede678a5485e\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede678a5485e\",\n \"uid\": \"27aa71ee-911c-4140-97d5-4bd11e9e016d\",\n \"resourceVersion\": \"20159\",\n \"creationTimestamp\": 
\"2020-01-11T20:07:23Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"317deb46-b323-4b93-97cc-1b55e928b1f6\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69481\",\n \"fieldPath\": \"spec.containers{pv-recycler}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container pv-recycler\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:23Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:23Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede684a50352\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede684a50352\",\n \"uid\": \"3cbcc731-ff03-4d43-9add-4f30bc0695bf\",\n \"resourceVersion\": \"20161\",\n \"creationTimestamp\": \"2020-01-11T20:07:23Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"317deb46-b323-4b93-97cc-1b55e928b1f6\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69481\",\n \"fieldPath\": \"spec.containers{pv-recycler}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container pv-recycler\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:23Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:23Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"recycler-for-nfs-ddr4b.15e8ede6c93bf2e4\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/recycler-for-nfs-ddr4b.15e8ede6c93bf2e4\",\n \"uid\": \"f185ba63-1fac-4d21-9a64-b34d291a0937\",\n \"resourceVersion\": \"20177\",\n \"creationTimestamp\": \"2020-01-11T20:07:24Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"default\",\n \"name\": \"recycler-for-nfs-ddr4b\",\n \"uid\": \"317deb46-b323-4b93-97cc-1b55e928b1f6\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"69481\",\n \"fieldPath\": \"spec.containers{pv-recycler}\"\n },\n \"reason\": \"Killing\",\n \"message\": \"Stopping container pv-recycler\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"ip-10-250-27-25.ec2.internal\"\n },\n \"firstTimestamp\": \"2020-01-11T20:07:24Z\",\n \"lastTimestamp\": \"2020-01-11T20:07:24Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"session-affinity-service.15e8ed487b9bfd6d\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/events/session-affinity-service.15e8ed487b9bfd6d\",\n \"uid\": \"f2ebc2ae-cc52-49b5-9a4e-c44c0b909d42\",\n \"resourceVersion\": \"17435\",\n \"creationTimestamp\": \"2020-01-11T19:56:05Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Endpoints\",\n \"name\": \"session-affinity-service\",\n \"apiVersion\": \"v1\"\n },\n \"reason\": \"FailedToCreateEndpoint\",\n \"message\": \"Failed to create endpoint for service nettest-5543/session-affinity-service: endpoints \\\"session-affinity-service\\\" is forbidden: unable to create new content in namespace nettest-5543 because it is being terminated\",\n \"source\": {\n \"component\": 
\"endpoint-controller\"\n },\n \"firstTimestamp\": \"2020-01-11T19:56:04Z\",\n \"lastTimestamp\": \"2020-01-11T19:56:05Z\",\n \"count\": 7,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n }\n ]\n}\n{\n \"kind\": \"ReplicationControllerList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/default/replicationcontrollers\",\n \"resourceVersion\": \"83782\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/default/services\",\n \"resourceVersion\": \"83788\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kubernetes\",\n \"namespace\": \"default\",\n \"selfLink\": \"/api/v1/namespaces/default/services/kubernetes\",\n \"uid\": \"add45b8b-a893-4631-8ff6-0a581263c42a\",\n \"resourceVersion\": \"145\",\n \"creationTimestamp\": \"2020-01-11T15:53:38Z\",\n \"labels\": {\n \"component\": \"apiserver\",\n \"provider\": \"kubernetes\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"https\",\n \"protocol\": \"TCP\",\n \"port\": 443,\n \"targetPort\": 443\n }\n ],\n \"clusterIP\": \"100.104.0.1\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n }\n ]\n}\n{\n \"kind\": \"DaemonSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/default/daemonsets\",\n \"resourceVersion\": \"83794\"\n },\n \"items\": []\n}\n{\n \"kind\": \"DeploymentList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/default/deployments\",\n \"resourceVersion\": \"83801\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ReplicaSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/default/replicasets\",\n \"resourceVersion\": \"83807\"\n },\n \"items\": []\n}\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/default/pods\",\n \"resourceVersion\": \"83811\"\n },\n \"items\": []\n}\nCluster info dumped to standard output\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:25.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9583" for this suite. 
Jan 11 20:29:31.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:35.048: INFO: namespace kubectl-9583 deletion completed in 9.688294488s • [SLOW TEST:16.654 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl cluster-info dump /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:993 should check if cluster-info dump succeeds /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:994 ------------------------------ S ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:33.274: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5950 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should provide secure master service [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:34.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5950" for this suite. 
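The secure-master-service spec above concerns the default/kubernetes Service, the one shown in the ServiceList section of the dump (clusterIP 100.104.0.1, https port 443). A rough standalone sketch of one way to verify that service from outside the suite, again assuming v0.18+ client-go signatures; the conformance test's own assertions live in test/e2e/network/service.go and are not reproduced here:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Same kubeconfig as the run above; adjust as needed.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    // Look for the "https" port on 443/TCP, as listed in the dump's ServiceList.
    ok := false
    for _, p := range svc.Spec.Ports {
        if p.Name == "https" && p.Port == 443 && p.Protocol == "TCP" {
            ok = true
        }
    }
    fmt.Printf("default/kubernetes clusterIP=%s https 443/TCP present: %v\n", svc.Spec.ClusterIP, ok)
}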
Jan 11 20:29:41.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:44.640: INFO: namespace services-5950 deletion completed in 9.609011327s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:11.365 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:14.905: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename svc-latency STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-7431 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating replication controller svc-latency-rc in namespace svc-latency-7431 I0111 20:29:15.640305 8631 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7431, replica count: 1 I0111 20:29:16.740908 8631 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 20:29:16.935: INFO: Created: latency-svc-s6rdv Jan 11 20:29:16.938: INFO: Got endpoints: latency-svc-s6rdv [97.017788ms] Jan 11 20:29:17.034: INFO: Created: latency-svc-q7krl Jan 11 20:29:17.038: INFO: Created: latency-svc-s2b7g Jan 11 20:29:17.038: INFO: Got endpoints: latency-svc-q7krl [99.766385ms] Jan 11 20:29:17.041: INFO: Created: latency-svc-cqmxc Jan 11 20:29:17.041: INFO: Got endpoints: latency-svc-s2b7g [102.887653ms] Jan 11 20:29:17.043: INFO: Got endpoints: latency-svc-cqmxc [104.600524ms] Jan 11 20:29:17.118: INFO: Created: latency-svc-wblts Jan 11 20:29:17.121: INFO: Got endpoints: latency-svc-wblts [182.606955ms] Jan 11 20:29:17.122: INFO: Created: latency-svc-zlkgw Jan 11 20:29:17.123: INFO: Got endpoints: latency-svc-zlkgw [184.411358ms] Jan 11 20:29:17.126: INFO: Created: latency-svc-52zhd Jan 11 20:29:17.134: INFO: Got endpoints: latency-svc-52zhd [194.674639ms] Jan 11 20:29:17.134: INFO: Created: latency-svc-ncb5r Jan 11 20:29:17.138: INFO: Got endpoints: latency-svc-ncb5r [198.696279ms] Jan 11 20:29:17.138: INFO: Created: latency-svc-2jj8w Jan 11 20:29:17.139: INFO: Got endpoints: latency-svc-2jj8w [199.755719ms] Jan 11 20:29:17.142: INFO: Created: latency-svc-mxv6b Jan 11 20:29:17.146: INFO: Got endpoints: latency-svc-mxv6b [207.225843ms] Jan 11 20:29:17.147: INFO: Created: latency-svc-7487x Jan 11 20:29:17.148: INFO: Got endpoints: latency-svc-7487x [208.388372ms] Jan 11 20:29:17.151: INFO: Created: latency-svc-5ps2n Jan 11 20:29:17.154: INFO: Got endpoints: latency-svc-5ps2n [214.876906ms] Jan 11 20:29:17.155: INFO: 
Created: latency-svc-bz8mw Jan 11 20:29:17.156: INFO: Got endpoints: latency-svc-bz8mw [216.286289ms] Jan 11 20:29:17.159: INFO: Created: latency-svc-mthv6 Jan 11 20:29:17.160: INFO: Got endpoints: latency-svc-mthv6 [220.743557ms] Jan 11 20:29:17.163: INFO: Created: latency-svc-gl9mt Jan 11 20:29:17.165: INFO: Got endpoints: latency-svc-gl9mt [224.959313ms] Jan 11 20:29:17.167: INFO: Created: latency-svc-8wwgc Jan 11 20:29:17.169: INFO: Got endpoints: latency-svc-8wwgc [229.449299ms] Jan 11 20:29:17.173: INFO: Created: latency-svc-sz7fx Jan 11 20:29:17.174: INFO: Got endpoints: latency-svc-sz7fx [136.533728ms] Jan 11 20:29:17.177: INFO: Created: latency-svc-fjjc4 Jan 11 20:29:17.179: INFO: Got endpoints: latency-svc-fjjc4 [137.175079ms] Jan 11 20:29:17.181: INFO: Created: latency-svc-m2zvw Jan 11 20:29:17.182: INFO: Got endpoints: latency-svc-m2zvw [138.920315ms] Jan 11 20:29:17.216: INFO: Created: latency-svc-x7bxp Jan 11 20:29:17.218: INFO: Got endpoints: latency-svc-x7bxp [96.130213ms] Jan 11 20:29:17.222: INFO: Created: latency-svc-9xzhq Jan 11 20:29:17.224: INFO: Got endpoints: latency-svc-9xzhq [100.765383ms] Jan 11 20:29:17.230: INFO: Created: latency-svc-ts8zr Jan 11 20:29:17.232: INFO: Created: latency-svc-xb97n Jan 11 20:29:17.232: INFO: Got endpoints: latency-svc-ts8zr [98.178952ms] Jan 11 20:29:17.240: INFO: Got endpoints: latency-svc-xb97n [102.117884ms] Jan 11 20:29:17.240: INFO: Created: latency-svc-p5qn4 Jan 11 20:29:17.243: INFO: Got endpoints: latency-svc-p5qn4 [104.33634ms] Jan 11 20:29:17.244: INFO: Created: latency-svc-c7c64 Jan 11 20:29:17.247: INFO: Got endpoints: latency-svc-c7c64 [100.651707ms] Jan 11 20:29:17.248: INFO: Created: latency-svc-6z7qf Jan 11 20:29:17.249: INFO: Got endpoints: latency-svc-6z7qf [101.072006ms] Jan 11 20:29:17.252: INFO: Created: latency-svc-8dgcz Jan 11 20:29:17.252: INFO: Got endpoints: latency-svc-8dgcz [97.934005ms] Jan 11 20:29:17.256: INFO: Created: latency-svc-nqjfp Jan 11 20:29:17.257: INFO: Got endpoints: latency-svc-nqjfp [101.419442ms] Jan 11 20:29:17.261: INFO: Created: latency-svc-wbrp9 Jan 11 20:29:17.278: INFO: Got endpoints: latency-svc-wbrp9 [117.334052ms] Jan 11 20:29:17.304: INFO: Created: latency-svc-wc6hr Jan 11 20:29:17.307: INFO: Created: latency-svc-g5jvm Jan 11 20:29:17.307: INFO: Got endpoints: latency-svc-wc6hr [141.814048ms] Jan 11 20:29:17.307: INFO: Got endpoints: latency-svc-g5jvm [137.786322ms] Jan 11 20:29:17.310: INFO: Created: latency-svc-bq2lc Jan 11 20:29:17.314: INFO: Created: latency-svc-xdhk5 Jan 11 20:29:17.314: INFO: Got endpoints: latency-svc-bq2lc [135.105869ms] Jan 11 20:29:17.317: INFO: Got endpoints: latency-svc-xdhk5 [135.169361ms] Jan 11 20:29:17.318: INFO: Created: latency-svc-h9dnx Jan 11 20:29:17.327: INFO: Created: latency-svc-sqjqx Jan 11 20:29:17.327: INFO: Created: latency-svc-wgjmq Jan 11 20:29:17.327: INFO: Got endpoints: latency-svc-wgjmq [109.333852ms] Jan 11 20:29:17.327: INFO: Got endpoints: latency-svc-h9dnx [152.900989ms] Jan 11 20:29:17.332: INFO: Got endpoints: latency-svc-sqjqx [107.397ms] Jan 11 20:29:17.332: INFO: Created: latency-svc-w859m Jan 11 20:29:17.340: INFO: Created: latency-svc-992f9 Jan 11 20:29:17.340: INFO: Created: latency-svc-58ct8 Jan 11 20:29:17.340: INFO: Got endpoints: latency-svc-w859m [107.946294ms] Jan 11 20:29:17.347: INFO: Created: latency-svc-9kzqd Jan 11 20:29:17.351: INFO: Created: latency-svc-lw287 Jan 11 20:29:17.355: INFO: Created: latency-svc-jnplv Jan 11 20:29:17.358: INFO: Created: latency-svc-c7khz Jan 11 20:29:17.372: INFO: Created: 
latency-svc-jkck5 Jan 11 20:29:17.386: INFO: Got endpoints: latency-svc-58ct8 [145.923484ms] Jan 11 20:29:17.400: INFO: Created: latency-svc-kxh8b Jan 11 20:29:17.404: INFO: Created: latency-svc-cbkqt Jan 11 20:29:17.421: INFO: Created: latency-svc-94qtf Jan 11 20:29:17.424: INFO: Created: latency-svc-rs69c Jan 11 20:29:17.428: INFO: Created: latency-svc-7zmw2 Jan 11 20:29:17.432: INFO: Created: latency-svc-fbg4b Jan 11 20:29:17.436: INFO: Created: latency-svc-pg55q Jan 11 20:29:17.436: INFO: Got endpoints: latency-svc-992f9 [193.081921ms] Jan 11 20:29:17.451: INFO: Created: latency-svc-p4257 Jan 11 20:29:17.479: INFO: Created: latency-svc-9ljvn Jan 11 20:29:17.486: INFO: Got endpoints: latency-svc-9kzqd [238.413029ms] Jan 11 20:29:17.530: INFO: Created: latency-svc-npx27 Jan 11 20:29:17.536: INFO: Got endpoints: latency-svc-lw287 [286.691966ms] Jan 11 20:29:17.579: INFO: Created: latency-svc-hk25w Jan 11 20:29:17.586: INFO: Got endpoints: latency-svc-jnplv [333.296775ms] Jan 11 20:29:17.629: INFO: Created: latency-svc-pczsx Jan 11 20:29:17.636: INFO: Got endpoints: latency-svc-c7khz [378.471378ms] Jan 11 20:29:17.680: INFO: Created: latency-svc-fc7z5 Jan 11 20:29:17.686: INFO: Got endpoints: latency-svc-jkck5 [407.734843ms] Jan 11 20:29:17.731: INFO: Created: latency-svc-6dhzz Jan 11 20:29:17.735: INFO: Got endpoints: latency-svc-kxh8b [428.732358ms] Jan 11 20:29:17.780: INFO: Created: latency-svc-vlpcx Jan 11 20:29:17.786: INFO: Got endpoints: latency-svc-cbkqt [478.869256ms] Jan 11 20:29:17.829: INFO: Created: latency-svc-jbr4n Jan 11 20:29:17.836: INFO: Got endpoints: latency-svc-94qtf [522.295507ms] Jan 11 20:29:17.881: INFO: Created: latency-svc-r4d9z Jan 11 20:29:17.886: INFO: Got endpoints: latency-svc-rs69c [568.222969ms] Jan 11 20:29:17.930: INFO: Created: latency-svc-vwvc6 Jan 11 20:29:17.936: INFO: Got endpoints: latency-svc-7zmw2 [609.134351ms] Jan 11 20:29:17.979: INFO: Created: latency-svc-4s2lt Jan 11 20:29:17.986: INFO: Got endpoints: latency-svc-fbg4b [658.574813ms] Jan 11 20:29:18.030: INFO: Created: latency-svc-7dg5w Jan 11 20:29:18.036: INFO: Got endpoints: latency-svc-pg55q [704.015279ms] Jan 11 20:29:18.079: INFO: Created: latency-svc-fdplr Jan 11 20:29:18.086: INFO: Got endpoints: latency-svc-p4257 [745.845016ms] Jan 11 20:29:18.129: INFO: Created: latency-svc-m2r4k Jan 11 20:29:18.136: INFO: Got endpoints: latency-svc-9ljvn [750.042369ms] Jan 11 20:29:18.180: INFO: Created: latency-svc-fhrxk Jan 11 20:29:18.186: INFO: Got endpoints: latency-svc-npx27 [749.24219ms] Jan 11 20:29:18.229: INFO: Created: latency-svc-kvtbx Jan 11 20:29:18.236: INFO: Got endpoints: latency-svc-hk25w [749.947258ms] Jan 11 20:29:18.279: INFO: Created: latency-svc-6kl97 Jan 11 20:29:18.287: INFO: Got endpoints: latency-svc-pczsx [750.956121ms] Jan 11 20:29:18.329: INFO: Created: latency-svc-kndd7 Jan 11 20:29:18.336: INFO: Got endpoints: latency-svc-fc7z5 [749.78361ms] Jan 11 20:29:18.380: INFO: Created: latency-svc-k529k Jan 11 20:29:18.386: INFO: Got endpoints: latency-svc-6dhzz [749.676472ms] Jan 11 20:29:18.429: INFO: Created: latency-svc-6xztd Jan 11 20:29:18.436: INFO: Got endpoints: latency-svc-vlpcx [749.850031ms] Jan 11 20:29:18.481: INFO: Created: latency-svc-hx4c9 Jan 11 20:29:18.486: INFO: Got endpoints: latency-svc-jbr4n [750.258396ms] Jan 11 20:29:18.530: INFO: Created: latency-svc-xp7z4 Jan 11 20:29:18.536: INFO: Got endpoints: latency-svc-r4d9z [749.54548ms] Jan 11 20:29:18.581: INFO: Created: latency-svc-pvn95 Jan 11 20:29:18.591: INFO: Got endpoints: latency-svc-vwvc6 
[754.441002ms] Jan 11 20:29:18.629: INFO: Created: latency-svc-xjpbr Jan 11 20:29:18.636: INFO: Got endpoints: latency-svc-4s2lt [750.176104ms] Jan 11 20:29:18.687: INFO: Got endpoints: latency-svc-7dg5w [750.782048ms] Jan 11 20:29:18.687: INFO: Created: latency-svc-kbd7z Jan 11 20:29:18.730: INFO: Created: latency-svc-5qnq6 Jan 11 20:29:18.736: INFO: Got endpoints: latency-svc-fdplr [750.539523ms] Jan 11 20:29:18.780: INFO: Created: latency-svc-jwks9 Jan 11 20:29:18.786: INFO: Got endpoints: latency-svc-m2r4k [750.217904ms] Jan 11 20:29:18.830: INFO: Created: latency-svc-5r6k6 Jan 11 20:29:18.837: INFO: Got endpoints: latency-svc-fhrxk [750.607555ms] Jan 11 20:29:18.879: INFO: Created: latency-svc-sz22t Jan 11 20:29:18.886: INFO: Got endpoints: latency-svc-kvtbx [749.545149ms] Jan 11 20:29:18.930: INFO: Created: latency-svc-hzpx5 Jan 11 20:29:18.935: INFO: Got endpoints: latency-svc-6kl97 [749.544041ms] Jan 11 20:29:18.979: INFO: Created: latency-svc-txx5w Jan 11 20:29:18.986: INFO: Got endpoints: latency-svc-kndd7 [750.14011ms] Jan 11 20:29:19.031: INFO: Created: latency-svc-rlg5v Jan 11 20:29:19.036: INFO: Got endpoints: latency-svc-k529k [749.370712ms] Jan 11 20:29:19.080: INFO: Created: latency-svc-989p5 Jan 11 20:29:19.087: INFO: Got endpoints: latency-svc-6xztd [751.839646ms] Jan 11 20:29:19.131: INFO: Created: latency-svc-5qfmz Jan 11 20:29:19.137: INFO: Got endpoints: latency-svc-hx4c9 [750.802869ms] Jan 11 20:29:19.182: INFO: Created: latency-svc-6wt25 Jan 11 20:29:19.188: INFO: Got endpoints: latency-svc-xp7z4 [751.854647ms] Jan 11 20:29:19.231: INFO: Created: latency-svc-r9vdn Jan 11 20:29:19.235: INFO: Got endpoints: latency-svc-pvn95 [749.599933ms] Jan 11 20:29:19.284: INFO: Created: latency-svc-7565s Jan 11 20:29:19.286: INFO: Got endpoints: latency-svc-xjpbr [750.014563ms] Jan 11 20:29:19.332: INFO: Created: latency-svc-prmvm Jan 11 20:29:19.336: INFO: Got endpoints: latency-svc-kbd7z [745.131897ms] Jan 11 20:29:19.379: INFO: Created: latency-svc-dz8hd Jan 11 20:29:19.387: INFO: Got endpoints: latency-svc-5qnq6 [750.431438ms] Jan 11 20:29:19.429: INFO: Created: latency-svc-bzdlz Jan 11 20:29:19.436: INFO: Got endpoints: latency-svc-jwks9 [748.516822ms] Jan 11 20:29:19.480: INFO: Created: latency-svc-hkxhd Jan 11 20:29:19.486: INFO: Got endpoints: latency-svc-5r6k6 [749.012833ms] Jan 11 20:29:19.529: INFO: Created: latency-svc-fxd6l Jan 11 20:29:19.536: INFO: Got endpoints: latency-svc-sz22t [749.578836ms] Jan 11 20:29:19.579: INFO: Created: latency-svc-s65vs Jan 11 20:29:19.586: INFO: Got endpoints: latency-svc-hzpx5 [749.303465ms] Jan 11 20:29:19.629: INFO: Created: latency-svc-6mc9h Jan 11 20:29:19.636: INFO: Got endpoints: latency-svc-txx5w [750.03959ms] Jan 11 20:29:19.679: INFO: Created: latency-svc-b6448 Jan 11 20:29:19.692: INFO: Got endpoints: latency-svc-rlg5v [756.079061ms] Jan 11 20:29:19.730: INFO: Created: latency-svc-55gtf Jan 11 20:29:19.736: INFO: Got endpoints: latency-svc-989p5 [749.702332ms] Jan 11 20:29:19.785: INFO: Created: latency-svc-wrknt Jan 11 20:29:19.786: INFO: Got endpoints: latency-svc-5qfmz [749.901494ms] Jan 11 20:29:19.829: INFO: Created: latency-svc-8md9n Jan 11 20:29:19.837: INFO: Got endpoints: latency-svc-6wt25 [749.396994ms] Jan 11 20:29:19.879: INFO: Created: latency-svc-6tkm4 Jan 11 20:29:19.886: INFO: Got endpoints: latency-svc-r9vdn [748.981296ms] Jan 11 20:29:19.933: INFO: Created: latency-svc-5pmk8 Jan 11 20:29:19.936: INFO: Got endpoints: latency-svc-7565s [748.078811ms] Jan 11 20:29:19.979: INFO: Created: latency-svc-sxsrj Jan 
11 20:29:19.992: INFO: Got endpoints: latency-svc-prmvm [756.068343ms] Jan 11 20:29:20.032: INFO: Created: latency-svc-q2xhr Jan 11 20:29:20.038: INFO: Got endpoints: latency-svc-dz8hd [751.793251ms] Jan 11 20:29:20.086: INFO: Got endpoints: latency-svc-bzdlz [750.681376ms] Jan 11 20:29:20.087: INFO: Created: latency-svc-8hxbr Jan 11 20:29:20.131: INFO: Created: latency-svc-tqpks Jan 11 20:29:20.137: INFO: Got endpoints: latency-svc-hkxhd [750.02143ms] Jan 11 20:29:20.180: INFO: Created: latency-svc-hksc7 Jan 11 20:29:20.186: INFO: Got endpoints: latency-svc-fxd6l [750.049866ms] Jan 11 20:29:20.230: INFO: Created: latency-svc-wvx2k Jan 11 20:29:20.237: INFO: Got endpoints: latency-svc-s65vs [750.89094ms] Jan 11 20:29:20.279: INFO: Created: latency-svc-rq5x4 Jan 11 20:29:20.286: INFO: Got endpoints: latency-svc-6mc9h [750.232896ms] Jan 11 20:29:20.330: INFO: Created: latency-svc-l2w82 Jan 11 20:29:20.336: INFO: Got endpoints: latency-svc-b6448 [749.513915ms] Jan 11 20:29:20.380: INFO: Created: latency-svc-vxsbd Jan 11 20:29:20.386: INFO: Got endpoints: latency-svc-55gtf [749.922395ms] Jan 11 20:29:20.430: INFO: Created: latency-svc-bw56h Jan 11 20:29:20.437: INFO: Got endpoints: latency-svc-wrknt [744.736728ms] Jan 11 20:29:20.479: INFO: Created: latency-svc-cqvqq Jan 11 20:29:20.486: INFO: Got endpoints: latency-svc-8md9n [750.667319ms] Jan 11 20:29:20.533: INFO: Created: latency-svc-t45wj Jan 11 20:29:20.580: INFO: Got endpoints: latency-svc-6tkm4 [793.670127ms] Jan 11 20:29:20.580: INFO: Created: latency-svc-zq9n9 Jan 11 20:29:20.585: INFO: Got endpoints: latency-svc-5pmk8 [748.506847ms] Jan 11 20:29:20.674: INFO: Created: latency-svc-6f2l9 Jan 11 20:29:20.678: INFO: Got endpoints: latency-svc-sxsrj [792.789328ms] Jan 11 20:29:20.679: INFO: Created: latency-svc-mgcrt Jan 11 20:29:20.686: INFO: Got endpoints: latency-svc-q2xhr [749.808499ms] Jan 11 20:29:20.736: INFO: Got endpoints: latency-svc-8hxbr [744.46008ms] Jan 11 20:29:20.772: INFO: Created: latency-svc-7fm9r Jan 11 20:29:20.779: INFO: Created: latency-svc-jp7dq Jan 11 20:29:20.786: INFO: Got endpoints: latency-svc-tqpks [748.324966ms] Jan 11 20:29:20.833: INFO: Created: latency-svc-nzg9v Jan 11 20:29:20.879: INFO: Got endpoints: latency-svc-hksc7 [792.433018ms] Jan 11 20:29:20.879: INFO: Created: latency-svc-c6t4p Jan 11 20:29:20.886: INFO: Got endpoints: latency-svc-wvx2k [748.920914ms] Jan 11 20:29:20.938: INFO: Got endpoints: latency-svc-rq5x4 [751.949557ms] Jan 11 20:29:20.973: INFO: Created: latency-svc-wkxr8 Jan 11 20:29:20.981: INFO: Created: latency-svc-ptj6z Jan 11 20:29:20.986: INFO: Got endpoints: latency-svc-l2w82 [749.047683ms] Jan 11 20:29:21.032: INFO: Created: latency-svc-8llmx Jan 11 20:29:21.035: INFO: Got endpoints: latency-svc-vxsbd [749.555589ms] Jan 11 20:29:21.080: INFO: Created: latency-svc-dhr5n Jan 11 20:29:21.086: INFO: Got endpoints: latency-svc-bw56h [750.533659ms] Jan 11 20:29:21.131: INFO: Created: latency-svc-c7vtv Jan 11 20:29:21.136: INFO: Got endpoints: latency-svc-cqvqq [750.093309ms] Jan 11 20:29:21.180: INFO: Created: latency-svc-shrdl Jan 11 20:29:21.188: INFO: Got endpoints: latency-svc-t45wj [751.000738ms] Jan 11 20:29:21.232: INFO: Created: latency-svc-rzpqh Jan 11 20:29:21.236: INFO: Got endpoints: latency-svc-zq9n9 [749.831591ms] Jan 11 20:29:21.282: INFO: Created: latency-svc-dqmd8 Jan 11 20:29:21.287: INFO: Got endpoints: latency-svc-6f2l9 [706.736112ms] Jan 11 20:29:21.334: INFO: Created: latency-svc-f72wt Jan 11 20:29:21.339: INFO: Got endpoints: latency-svc-mgcrt [752.974273ms] Jan 
11 20:29:21.387: INFO: Got endpoints: latency-svc-7fm9r [708.625186ms] Jan 11 20:29:21.408: INFO: Created: latency-svc-j99bh Jan 11 20:29:21.432: INFO: Created: latency-svc-pssrp Jan 11 20:29:21.435: INFO: Got endpoints: latency-svc-jp7dq [749.700732ms] Jan 11 20:29:21.481: INFO: Created: latency-svc-z5cpl Jan 11 20:29:21.486: INFO: Got endpoints: latency-svc-nzg9v [749.350007ms] Jan 11 20:29:21.529: INFO: Created: latency-svc-9zslb Jan 11 20:29:21.536: INFO: Got endpoints: latency-svc-c6t4p [749.615025ms] Jan 11 20:29:21.579: INFO: Created: latency-svc-2rnrs Jan 11 20:29:21.587: INFO: Got endpoints: latency-svc-wkxr8 [707.927503ms] Jan 11 20:29:21.629: INFO: Created: latency-svc-9m945 Jan 11 20:29:21.636: INFO: Got endpoints: latency-svc-ptj6z [750.049913ms] Jan 11 20:29:21.681: INFO: Created: latency-svc-sr8ck Jan 11 20:29:21.686: INFO: Got endpoints: latency-svc-8llmx [747.845073ms] Jan 11 20:29:21.730: INFO: Created: latency-svc-vvf98 Jan 11 20:29:21.736: INFO: Got endpoints: latency-svc-dhr5n [749.895926ms] Jan 11 20:29:21.780: INFO: Created: latency-svc-2rxnn Jan 11 20:29:21.785: INFO: Got endpoints: latency-svc-c7vtv [749.912278ms] Jan 11 20:29:21.835: INFO: Created: latency-svc-5q62h Jan 11 20:29:21.836: INFO: Got endpoints: latency-svc-shrdl [749.34226ms] Jan 11 20:29:21.879: INFO: Created: latency-svc-c9vww Jan 11 20:29:21.886: INFO: Got endpoints: latency-svc-rzpqh [749.610561ms] Jan 11 20:29:21.929: INFO: Created: latency-svc-6rzg7 Jan 11 20:29:21.936: INFO: Got endpoints: latency-svc-dqmd8 [748.085724ms] Jan 11 20:29:21.979: INFO: Created: latency-svc-bnbzz Jan 11 20:29:21.986: INFO: Got endpoints: latency-svc-f72wt [745.350826ms] Jan 11 20:29:22.029: INFO: Created: latency-svc-xk5p9 Jan 11 20:29:22.036: INFO: Got endpoints: latency-svc-j99bh [722.062289ms] Jan 11 20:29:22.081: INFO: Created: latency-svc-xc6xg Jan 11 20:29:22.086: INFO: Got endpoints: latency-svc-pssrp [747.422172ms] Jan 11 20:29:22.130: INFO: Created: latency-svc-mqmb7 Jan 11 20:29:22.136: INFO: Got endpoints: latency-svc-z5cpl [748.510612ms] Jan 11 20:29:22.180: INFO: Created: latency-svc-42ntw Jan 11 20:29:22.186: INFO: Got endpoints: latency-svc-9zslb [750.209158ms] Jan 11 20:29:22.229: INFO: Created: latency-svc-45vjl Jan 11 20:29:22.236: INFO: Got endpoints: latency-svc-2rnrs [749.926483ms] Jan 11 20:29:22.279: INFO: Created: latency-svc-9vf74 Jan 11 20:29:22.286: INFO: Got endpoints: latency-svc-9m945 [750.203473ms] Jan 11 20:29:22.329: INFO: Created: latency-svc-r6bp4 Jan 11 20:29:22.336: INFO: Got endpoints: latency-svc-sr8ck [748.688261ms] Jan 11 20:29:22.379: INFO: Created: latency-svc-47l2j Jan 11 20:29:22.386: INFO: Got endpoints: latency-svc-vvf98 [750.061459ms] Jan 11 20:29:22.429: INFO: Created: latency-svc-jz2fm Jan 11 20:29:22.436: INFO: Got endpoints: latency-svc-2rxnn [750.752999ms] Jan 11 20:29:22.480: INFO: Created: latency-svc-5vdf6 Jan 11 20:29:22.486: INFO: Got endpoints: latency-svc-5q62h [750.235571ms] Jan 11 20:29:22.530: INFO: Created: latency-svc-wf46g Jan 11 20:29:22.536: INFO: Got endpoints: latency-svc-c9vww [750.447876ms] Jan 11 20:29:22.580: INFO: Created: latency-svc-h7vk6 Jan 11 20:29:22.586: INFO: Got endpoints: latency-svc-6rzg7 [750.30505ms] Jan 11 20:29:22.630: INFO: Created: latency-svc-vd6nh Jan 11 20:29:22.636: INFO: Got endpoints: latency-svc-bnbzz [750.436334ms] Jan 11 20:29:22.680: INFO: Created: latency-svc-wfmp8 Jan 11 20:29:22.685: INFO: Got endpoints: latency-svc-xk5p9 [749.729719ms] Jan 11 20:29:22.730: INFO: Created: latency-svc-cljxl Jan 11 20:29:22.736: 
INFO: Got endpoints: latency-svc-xc6xg [749.684401ms] Jan 11 20:29:22.782: INFO: Created: latency-svc-5h4t8 Jan 11 20:29:22.788: INFO: Got endpoints: latency-svc-mqmb7 [751.770039ms] Jan 11 20:29:22.829: INFO: Created: latency-svc-svsxt Jan 11 20:29:22.836: INFO: Got endpoints: latency-svc-42ntw [749.772486ms] Jan 11 20:29:22.891: INFO: Got endpoints: latency-svc-45vjl [754.876249ms] Jan 11 20:29:22.891: INFO: Created: latency-svc-vchzg Jan 11 20:29:22.930: INFO: Created: latency-svc-5j26h Jan 11 20:29:22.936: INFO: Got endpoints: latency-svc-9vf74 [750.204292ms] Jan 11 20:29:22.987: INFO: Got endpoints: latency-svc-r6bp4 [751.301226ms] Jan 11 20:29:22.987: INFO: Created: latency-svc-46dd8 Jan 11 20:29:23.029: INFO: Created: latency-svc-sn7wn Jan 11 20:29:23.036: INFO: Got endpoints: latency-svc-47l2j [749.593421ms] Jan 11 20:29:23.081: INFO: Created: latency-svc-p28cm Jan 11 20:29:23.087: INFO: Got endpoints: latency-svc-jz2fm [750.911897ms] Jan 11 20:29:23.129: INFO: Created: latency-svc-qzhnz Jan 11 20:29:23.136: INFO: Got endpoints: latency-svc-5vdf6 [749.770232ms] Jan 11 20:29:23.179: INFO: Created: latency-svc-vmnbv Jan 11 20:29:23.186: INFO: Got endpoints: latency-svc-wf46g [749.363661ms] Jan 11 20:29:23.230: INFO: Created: latency-svc-pthfc Jan 11 20:29:23.236: INFO: Got endpoints: latency-svc-h7vk6 [749.655329ms] Jan 11 20:29:23.280: INFO: Created: latency-svc-cwlzw Jan 11 20:29:23.289: INFO: Got endpoints: latency-svc-vd6nh [752.557174ms] Jan 11 20:29:23.329: INFO: Created: latency-svc-k87mr Jan 11 20:29:23.336: INFO: Got endpoints: latency-svc-wfmp8 [749.554254ms] Jan 11 20:29:23.382: INFO: Created: latency-svc-rq77j Jan 11 20:29:23.386: INFO: Got endpoints: latency-svc-cljxl [750.211903ms] Jan 11 20:29:23.430: INFO: Created: latency-svc-fj95p Jan 11 20:29:23.436: INFO: Got endpoints: latency-svc-5h4t8 [750.085344ms] Jan 11 20:29:23.480: INFO: Created: latency-svc-zbqxg Jan 11 20:29:23.486: INFO: Got endpoints: latency-svc-svsxt [750.130581ms] Jan 11 20:29:23.529: INFO: Created: latency-svc-628tg Jan 11 20:29:23.536: INFO: Got endpoints: latency-svc-vchzg [747.926554ms] Jan 11 20:29:23.579: INFO: Created: latency-svc-5mqm5 Jan 11 20:29:23.586: INFO: Got endpoints: latency-svc-5j26h [749.581585ms] Jan 11 20:29:23.629: INFO: Created: latency-svc-ggq4n Jan 11 20:29:23.639: INFO: Got endpoints: latency-svc-46dd8 [747.920522ms] Jan 11 20:29:23.679: INFO: Created: latency-svc-mhfpz Jan 11 20:29:23.686: INFO: Got endpoints: latency-svc-sn7wn [749.94615ms] Jan 11 20:29:23.732: INFO: Created: latency-svc-lbgt4 Jan 11 20:29:23.736: INFO: Got endpoints: latency-svc-p28cm [748.822775ms] Jan 11 20:29:23.779: INFO: Created: latency-svc-rj65q Jan 11 20:29:23.787: INFO: Got endpoints: latency-svc-qzhnz [750.950016ms] Jan 11 20:29:23.829: INFO: Created: latency-svc-7b5sd Jan 11 20:29:23.836: INFO: Got endpoints: latency-svc-vmnbv [748.953165ms] Jan 11 20:29:23.880: INFO: Created: latency-svc-bxr9z Jan 11 20:29:23.886: INFO: Got endpoints: latency-svc-pthfc [749.900664ms] Jan 11 20:29:23.931: INFO: Created: latency-svc-f7n7c Jan 11 20:29:23.936: INFO: Got endpoints: latency-svc-cwlzw [749.658856ms] Jan 11 20:29:23.979: INFO: Created: latency-svc-25pd4 Jan 11 20:29:23.988: INFO: Got endpoints: latency-svc-k87mr [751.920923ms] Jan 11 20:29:24.030: INFO: Created: latency-svc-svfzx Jan 11 20:29:24.036: INFO: Got endpoints: latency-svc-rq77j [747.116983ms] Jan 11 20:29:24.081: INFO: Created: latency-svc-jhbcc Jan 11 20:29:24.086: INFO: Got endpoints: latency-svc-fj95p [750.0535ms] Jan 11 20:29:24.129: 
INFO: Created: latency-svc-wpd5h Jan 11 20:29:24.136: INFO: Got endpoints: latency-svc-zbqxg [749.449142ms] Jan 11 20:29:24.179: INFO: Created: latency-svc-fdq48 Jan 11 20:29:24.187: INFO: Got endpoints: latency-svc-628tg [751.725095ms] Jan 11 20:29:24.229: INFO: Created: latency-svc-lktff Jan 11 20:29:24.236: INFO: Got endpoints: latency-svc-5mqm5 [749.846856ms] Jan 11 20:29:24.281: INFO: Created: latency-svc-92ck4 Jan 11 20:29:24.286: INFO: Got endpoints: latency-svc-ggq4n [750.107757ms] Jan 11 20:29:24.329: INFO: Created: latency-svc-4ztn4 Jan 11 20:29:24.336: INFO: Got endpoints: latency-svc-mhfpz [750.318827ms] Jan 11 20:29:24.380: INFO: Created: latency-svc-l7w46 Jan 11 20:29:24.393: INFO: Got endpoints: latency-svc-lbgt4 [754.018188ms] Jan 11 20:29:24.430: INFO: Created: latency-svc-6kwx9 Jan 11 20:29:24.436: INFO: Got endpoints: latency-svc-rj65q [749.574841ms] Jan 11 20:29:24.486: INFO: Got endpoints: latency-svc-7b5sd [750.176964ms] Jan 11 20:29:24.486: INFO: Created: latency-svc-znlw8 Jan 11 20:29:24.529: INFO: Created: latency-svc-vgz7h Jan 11 20:29:24.536: INFO: Got endpoints: latency-svc-bxr9z [749.185089ms] Jan 11 20:29:24.579: INFO: Created: latency-svc-kw2cm Jan 11 20:29:24.587: INFO: Got endpoints: latency-svc-f7n7c [751.563438ms] Jan 11 20:29:24.632: INFO: Created: latency-svc-cnksv Jan 11 20:29:24.636: INFO: Got endpoints: latency-svc-25pd4 [750.093844ms] Jan 11 20:29:24.681: INFO: Created: latency-svc-pc7qd Jan 11 20:29:24.686: INFO: Got endpoints: latency-svc-svfzx [750.008774ms] Jan 11 20:29:24.731: INFO: Created: latency-svc-jrnf4 Jan 11 20:29:24.736: INFO: Got endpoints: latency-svc-jhbcc [748.028722ms] Jan 11 20:29:24.779: INFO: Created: latency-svc-ctzvh Jan 11 20:29:24.788: INFO: Got endpoints: latency-svc-wpd5h [751.80181ms] Jan 11 20:29:24.830: INFO: Created: latency-svc-kxxft Jan 11 20:29:24.839: INFO: Got endpoints: latency-svc-fdq48 [752.927491ms] Jan 11 20:29:24.881: INFO: Created: latency-svc-4dl4v Jan 11 20:29:24.886: INFO: Got endpoints: latency-svc-lktff [750.18344ms] Jan 11 20:29:24.936: INFO: Got endpoints: latency-svc-92ck4 [748.715538ms] Jan 11 20:29:24.988: INFO: Got endpoints: latency-svc-4ztn4 [751.81556ms] Jan 11 20:29:25.036: INFO: Got endpoints: latency-svc-l7w46 [750.316617ms] Jan 11 20:29:25.086: INFO: Got endpoints: latency-svc-6kwx9 [749.752934ms] Jan 11 20:29:25.136: INFO: Got endpoints: latency-svc-znlw8 [743.75203ms] Jan 11 20:29:25.187: INFO: Got endpoints: latency-svc-vgz7h [751.343666ms] Jan 11 20:29:25.240: INFO: Got endpoints: latency-svc-kw2cm [753.520488ms] Jan 11 20:29:25.288: INFO: Got endpoints: latency-svc-cnksv [751.753519ms] Jan 11 20:29:25.338: INFO: Got endpoints: latency-svc-pc7qd [750.242805ms] Jan 11 20:29:25.386: INFO: Got endpoints: latency-svc-jrnf4 [749.951437ms] Jan 11 20:29:25.437: INFO: Got endpoints: latency-svc-ctzvh [750.857836ms] Jan 11 20:29:25.489: INFO: Got endpoints: latency-svc-kxxft [752.847986ms] Jan 11 20:29:25.536: INFO: Got endpoints: latency-svc-4dl4v [748.476934ms] Jan 11 20:29:25.536: INFO: Latencies: [96.130213ms 97.934005ms 98.178952ms 99.766385ms 100.651707ms 100.765383ms 101.072006ms 101.419442ms 102.117884ms 102.887653ms 104.33634ms 104.600524ms 107.397ms 107.946294ms 109.333852ms 117.334052ms 135.105869ms 135.169361ms 136.533728ms 137.175079ms 137.786322ms 138.920315ms 141.814048ms 145.923484ms 152.900989ms 182.606955ms 184.411358ms 193.081921ms 194.674639ms 198.696279ms 199.755719ms 207.225843ms 208.388372ms 214.876906ms 216.286289ms 220.743557ms 224.959313ms 229.449299ms 238.413029ms 
286.691966ms 333.296775ms 378.471378ms 407.734843ms 428.732358ms 478.869256ms 522.295507ms 568.222969ms 609.134351ms 658.574813ms 704.015279ms 706.736112ms 707.927503ms 708.625186ms 722.062289ms 743.75203ms 744.46008ms 744.736728ms 745.131897ms 745.350826ms 745.845016ms 747.116983ms 747.422172ms 747.845073ms 747.920522ms 747.926554ms 748.028722ms 748.078811ms 748.085724ms 748.324966ms 748.476934ms 748.506847ms 748.510612ms 748.516822ms 748.688261ms 748.715538ms 748.822775ms 748.920914ms 748.953165ms 748.981296ms 749.012833ms 749.047683ms 749.185089ms 749.24219ms 749.303465ms 749.34226ms 749.350007ms 749.363661ms 749.370712ms 749.396994ms 749.449142ms 749.513915ms 749.544041ms 749.545149ms 749.54548ms 749.554254ms 749.555589ms 749.574841ms 749.578836ms 749.581585ms 749.593421ms 749.599933ms 749.610561ms 749.615025ms 749.655329ms 749.658856ms 749.676472ms 749.684401ms 749.700732ms 749.702332ms 749.729719ms 749.752934ms 749.770232ms 749.772486ms 749.78361ms 749.808499ms 749.831591ms 749.846856ms 749.850031ms 749.895926ms 749.900664ms 749.901494ms 749.912278ms 749.922395ms 749.926483ms 749.94615ms 749.947258ms 749.951437ms 750.008774ms 750.014563ms 750.02143ms 750.03959ms 750.042369ms 750.049866ms 750.049913ms 750.0535ms 750.061459ms 750.085344ms 750.093309ms 750.093844ms 750.107757ms 750.130581ms 750.14011ms 750.176104ms 750.176964ms 750.18344ms 750.203473ms 750.204292ms 750.209158ms 750.211903ms 750.217904ms 750.232896ms 750.235571ms 750.242805ms 750.258396ms 750.30505ms 750.316617ms 750.318827ms 750.431438ms 750.436334ms 750.447876ms 750.533659ms 750.539523ms 750.607555ms 750.667319ms 750.681376ms 750.752999ms 750.782048ms 750.802869ms 750.857836ms 750.89094ms 750.911897ms 750.950016ms 750.956121ms 751.000738ms 751.301226ms 751.343666ms 751.563438ms 751.725095ms 751.753519ms 751.770039ms 751.793251ms 751.80181ms 751.81556ms 751.839646ms 751.854647ms 751.920923ms 751.949557ms 752.557174ms 752.847986ms 752.927491ms 752.974273ms 753.520488ms 754.018188ms 754.441002ms 754.876249ms 756.068343ms 756.079061ms 792.433018ms 792.789328ms 793.670127ms] Jan 11 20:29:25.537: INFO: 50 %ile: 749.599933ms Jan 11 20:29:25.537: INFO: 90 %ile: 751.793251ms Jan 11 20:29:25.537: INFO: 99 %ile: 792.789328ms Jan 11 20:29:25.537: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:25.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7431" for this suite. 
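For reference, the 50/90/99 %ile figures the svc-latency test prints above are just order statistics over the 200 collected endpoint-propagation samples. The snippet below is a minimal sketch (not the e2e framework's own code) of that summary step; the sample slice is hypothetical and a simple nearest-rank lookup stands in for whatever estimator the framework actually uses.

```go
// Sketch: summarize endpoint-propagation latencies into percentiles,
// mirroring the "50 %ile / 90 %ile / 99 %ile" lines in the log above.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns a nearest-rank estimate for fraction p (0 < p <= 1)
// over an already-sorted slice of durations.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(p*float64(len(sorted))) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Hypothetical sample; the real run above collected 200 measurements.
	latencies := []time.Duration{
		96 * time.Millisecond,
		750 * time.Millisecond,
		751 * time.Millisecond,
		793 * time.Millisecond,
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	fmt.Printf("50 %%ile: %v\n", percentile(latencies, 0.50))
	fmt.Printf("90 %%ile: %v\n", percentile(latencies, 0.90))
	fmt.Printf("99 %%ile: %v\n", percentile(latencies, 0.99))
}
```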
Jan 11 20:29:45.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:29:49.219: INFO: namespace svc-latency-7431 deletion completed in 23.591292745s • [SLOW TEST:34.315 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:28:58.840: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1480 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should have session affinity work for NodePort service /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1813 STEP: creating service in namespace services-1480 STEP: creating service affinity-nodeport in namespace services-1480 STEP: creating replication controller affinity-nodeport in namespace services-1480 I0111 20:28:59.739890 8611 runners.go:184] Created replication controller with name: affinity-nodeport, namespace: services-1480, replica count: 3 I0111 20:29:02.840341 8611 runners.go:184] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 20:29:03.111: INFO: Creating new exec pod Jan 11 20:29:06.474: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jan 11 20:29:07.833: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Jan 11 20:29:07.833: INFO: stdout: "" Jan 11 20:29:07.834: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c nc -zv -t -w 2 100.108.78.161 80' Jan 11 20:29:09.145: INFO: stderr: "+ nc -zv -t -w 2 100.108.78.161 80\nConnection to 100.108.78.161 80 port [tcp/http] succeeded!\n" Jan 11 20:29:09.145: INFO: stdout: "" Jan 11 20:29:09.146: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c nc -zv -t -w 2 10.250.27.25 31995' Jan 11 20:29:10.572: INFO: stderr: "+ nc -zv -t -w 2 10.250.27.25 
31995\nConnection to 10.250.27.25 31995 port [tcp/31995] succeeded!\n" Jan 11 20:29:10.572: INFO: stdout: "" Jan 11 20:29:10.572: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c nc -zv -t -w 2 10.250.7.77 31995' Jan 11 20:29:11.877: INFO: stderr: "+ nc -zv -t -w 2 10.250.7.77 31995\nConnection to 10.250.7.77 31995 port [tcp/31995] succeeded!\n" Jan 11 20:29:11.877: INFO: stdout: "" Jan 11 20:29:11.877: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:13.177: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:13.178: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:13.178: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:15.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:16.654: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:16.654: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:16.654: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:17.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:18.629: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:18.629: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:18.629: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:19.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:20.587: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:20.588: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:20.588: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:21.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:22.642: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:22.642: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:22.642: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:23.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:24.602: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:24.602: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:24.602: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:25.179: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:26.721: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:26.721: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:26.721: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:27.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:28.553: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:28.553: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:28.553: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:29.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:30.636: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:30.636: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:30.636: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:31.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:32.688: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:32.689: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:32.689: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:33.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:34.668: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:34.668: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:34.668: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:35.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com 
--kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:36.562: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:36.562: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:36.562: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:37.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:38.573: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:38.573: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:38.573: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:39.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:40.619: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:40.619: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:40.619: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:41.178: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=services-1480 execpod-affinitytrtbx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.250.27.25:31995/' Jan 11 20:29:42.576: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.250.27.25:31995/\n" Jan 11 20:29:42.576: INFO: stdout: "affinity-nodeport-zkzx2" Jan 11 20:29:42.576: INFO: Received response from host: affinity-nodeport-zkzx2 Jan 11 20:29:42.576: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-1480, will wait for the garbage collector to delete the pods Jan 11 20:29:42.949: INFO: Deleting ReplicationController affinity-nodeport took: 90.916087ms Jan 11 20:29:43.050: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.283807ms [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:54.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1480" for this suite. 
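The repeated `curl` calls above all return the same backend pod name (`affinity-nodeport-zkzx2`), which is the property the NodePort session-affinity test is checking. The sketch below is a standalone illustration of that check, assuming direct HTTP access to the node IP and NodePort seen in the log (the real test issues the requests via `kubectl exec` from an in-cluster exec pod, as shown above).

```go
// Sketch: hit a NodePort repeatedly and require that every response names
// the same backend pod, i.e. that client-based session affinity held.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkAffinity(url string, tries int) error {
	client := &http.Client{Timeout: 2 * time.Second}
	seen := map[string]bool{}
	for i := 0; i < tries; i++ {
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return err
		}
		// The backend echoes its pod name, so distinct bodies mean
		// requests landed on distinct pods.
		seen[string(body)] = true
		time.Sleep(500 * time.Millisecond)
	}
	if len(seen) != 1 {
		return fmt.Errorf("expected one backend, saw %d: %v", len(seen), seen)
	}
	return nil
}

func main() {
	// Node IP and NodePort taken from the run above; adjust for your cluster.
	if err := checkAffinity("http://10.250.27.25:31995/", 16); err != nil {
		fmt.Println("affinity check failed:", err)
		return
	}
	fmt.Println("all responses came from the same backend pod")
}
```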
Jan 11 20:30:00.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:03.770: INFO: namespace services-1480 deletion completed in 9.629034795s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:64.931 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1813 ------------------------------ SSS ------------------------------ [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:44.642: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-5947 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath directory is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:223 Jan 11 20:29:45.350: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/empty-dir Jan 11 20:29:45.350: INFO: Creating resource for inline volume STEP: Creating pod pod-subpath-test-emptydir-f7kl STEP: Checking for subpath error in container status Jan 11 20:29:49.625: INFO: Deleting pod "pod-subpath-test-emptydir-f7kl" in namespace "provisioning-5947" Jan 11 20:29:49.716: INFO: Wait up to 5m0s for pod "pod-subpath-test-emptydir-f7kl" to be fully deleted STEP: Deleting pod Jan 11 20:29:53.896: INFO: Deleting pod "pod-subpath-test-emptydir-f7kl" in namespace "provisioning-5947" Jan 11 20:29:53.986: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:53.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-5947" for this suite. 
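The subPath test above ("should fail if subpath directory is outside the volume") creates a pod whose container mounts an emptyDir through a subPath and then waits for a subpath error in the container status. As a rough illustration only, the sketch below shows the shape of such a pod built with the Kubernetes API types; how the e2e test makes the subPath resolve outside the volume is defined in testsuites/subpath.go and is not reproduced here, and the subPath value shown is purely hypothetical.

```go
// Sketch, assuming k8s.io/api and k8s.io/apimachinery are available:
// a pod mounting an emptyDir via a subPath. The e2e test arranges for the
// subPath to escape the volume and expects kubelet to refuse to start the
// container, surfacing the failure in the container's status.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func subPathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-emptydir"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					// Hypothetical subPath; the real test constructs one
					// that resolves outside the emptyDir.
					SubPath: "outside-the-volume",
				}},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", subPathPod())
}
```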
Jan 11 20:30:00.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:03.925: INFO: namespace provisioning-5947 deletion completed in 9.847117757s • [SLOW TEST:19.283 seconds] [sig-storage] In-tree Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: emptydir] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:69 [Testpattern: Inline-volume (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if subpath directory is outside the volume [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:223 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:35.444: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-3786 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 20:29:38.700: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3786 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b18c400f-52b8-48f1-acc7-ead125f10d43 && mount --bind /tmp/local-volume-test-b18c400f-52b8-48f1-acc7-ead125f10d43 /tmp/local-volume-test-b18c400f-52b8-48f1-acc7-ead125f10d43' Jan 11 20:29:40.218: INFO: stderr: "" Jan 11 20:29:40.218: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:29:40.218: INFO: Creating a PV followed by a PVC Jan 11 20:29:40.396: INFO: Waiting for PV local-pvpg7t2 to bind to PVC pvc-bm9wt Jan 11 20:29:40.396: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-bm9wt] to have phase Bound Jan 11 20:29:40.486: INFO: PersistentVolumeClaim pvc-bm9wt found and phase=Bound (89.210589ms) Jan 11 20:29:40.486: INFO: Waiting up to 3m0s for PersistentVolume local-pvpg7t2 to have phase Bound Jan 11 20:29:40.575: INFO: PersistentVolume local-pvpg7t2 found and phase=Bound (89.129757ms) [It] should be able to write from pod1 and read from pod2 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jan 11 20:29:43.201: INFO: pod "security-context-7813ddbc-05a9-4a2a-8433-b04f02fdfa14" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 20:29:43.202: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3786 security-context-7813ddbc-05a9-4a2a-8433-b04f02fdfa14 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 20:29:44.628: INFO: stderr: "" Jan 11 20:29:44.628: INFO: stdout: "" Jan 11 20:29:44.628: INFO: podRWCmdExec out: "" err: Jan 11 20:29:44.628: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3786 security-context-7813ddbc-05a9-4a2a-8433-b04f02fdfa14 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:29:46.065: INFO: stderr: "" Jan 11 20:29:46.065: INFO: stdout: "test-file-content\n" Jan 11 20:29:46.065: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jan 11 20:29:48.513: INFO: pod "security-context-e958aba2-481c-41cc-bc33-b5a64d5bc0f2" created on Node "ip-10-250-27-25.ec2.internal" Jan 11 20:29:48.513: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3786 security-context-e958aba2-481c-41cc-bc33-b5a64d5bc0f2 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:29:50.033: INFO: stderr: "" Jan 11 20:29:50.033: INFO: stdout: "test-file-content\n" Jan 11 20:29:50.033: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Writing in pod2 Jan 11 20:29:50.033: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3786 security-context-e958aba2-481c-41cc-bc33-b5a64d5bc0f2 -- /bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-b18c400f-52b8-48f1-acc7-ead125f10d43 > /mnt/volume1/test-file' Jan 11 20:29:51.499: INFO: stderr: "" Jan 11 20:29:51.500: INFO: stdout: "" Jan 11 20:29:51.500: INFO: podRWCmdExec out: "" err: STEP: Reading in pod1 Jan 11 20:29:51.500: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3786 security-context-7813ddbc-05a9-4a2a-8433-b04f02fdfa14 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:29:52.910: INFO: stderr: "" Jan 11 20:29:52.910: INFO: stdout: "/tmp/local-volume-test-b18c400f-52b8-48f1-acc7-ead125f10d43\n" Jan 11 20:29:52.910: INFO: podRWCmdExec out: "/tmp/local-volume-test-b18c400f-52b8-48f1-acc7-ead125f10d43\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-7813ddbc-05a9-4a2a-8433-b04f02fdfa14 in namespace persistent-local-volumes-test-3786 STEP: Deleting pod2 STEP: Deleting pod 
security-context-e958aba2-481c-41cc-bc33-b5a64d5bc0f2 in namespace persistent-local-volumes-test-3786 [AfterEach] [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:29:53.091: INFO: Deleting PersistentVolumeClaim "pvc-bm9wt" Jan 11 20:29:53.181: INFO: Deleting PersistentVolume "local-pvpg7t2" STEP: Removing the test directory Jan 11 20:29:53.271: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-3786 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-b18c400f-52b8-48f1-acc7-ead125f10d43 && rm -r /tmp/local-volume-test-b18c400f-52b8-48f1-acc7-ead125f10d43' Jan 11 20:29:54.764: INFO: stderr: "" Jan 11 20:29:54.764: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:54.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3786" for this suite. Jan 11 20:30:01.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:04.735: INFO: namespace persistent-local-volumes-test-3786 deletion completed in 9.790523115s • [SLOW TEST:29.292 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:24.928: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-4733 STEP: Waiting for a default service account to be provisioned in namespace [It] contain ephemeral=true when using inline volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347 STEP: deploying csi mock driver Jan 11 20:29:25.766: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4733/csi-attacher Jan 11 20:29:25.856: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4733 Jan 11 20:29:25.856: 
INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4733 Jan 11 20:29:25.947: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4733 Jan 11 20:29:26.036: INFO: creating *v1.Role: csi-mock-volumes-4733/external-attacher-cfg-csi-mock-volumes-4733 Jan 11 20:29:26.126: INFO: creating *v1.RoleBinding: csi-mock-volumes-4733/csi-attacher-role-cfg Jan 11 20:29:26.215: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4733/csi-provisioner Jan 11 20:29:26.305: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4733 Jan 11 20:29:26.305: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4733 Jan 11 20:29:26.395: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4733 Jan 11 20:29:26.485: INFO: creating *v1.Role: csi-mock-volumes-4733/external-provisioner-cfg-csi-mock-volumes-4733 Jan 11 20:29:26.575: INFO: creating *v1.RoleBinding: csi-mock-volumes-4733/csi-provisioner-role-cfg Jan 11 20:29:26.664: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4733/csi-resizer Jan 11 20:29:26.754: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4733 Jan 11 20:29:26.754: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4733 Jan 11 20:29:26.844: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4733 Jan 11 20:29:26.933: INFO: creating *v1.Role: csi-mock-volumes-4733/external-resizer-cfg-csi-mock-volumes-4733 Jan 11 20:29:27.023: INFO: creating *v1.RoleBinding: csi-mock-volumes-4733/csi-resizer-role-cfg Jan 11 20:29:27.112: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4733/csi-mock Jan 11 20:29:27.203: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4733 Jan 11 20:29:27.293: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4733 Jan 11 20:29:27.382: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4733 Jan 11 20:29:27.472: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4733 Jan 11 20:29:27.561: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4733 Jan 11 20:29:27.651: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4733 Jan 11 20:29:27.740: INFO: creating *v1.StatefulSet: csi-mock-volumes-4733/csi-mockplugin Jan 11 20:29:27.830: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-4733 Jan 11 20:29:27.919: INFO: creating *v1.StatefulSet: csi-mock-volumes-4733/csi-mockplugin-attacher Jan 11 20:29:28.010: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4733" STEP: Creating pod STEP: checking for CSIInlineVolumes feature Jan 11 20:29:34.550: INFO: Error getting logs for pod csi-inline-volume-dvq9t: the server rejected our request for an unknown reason (get pods csi-inline-volume-dvq9t) STEP: Deleting pod csi-inline-volume-dvq9t in namespace csi-mock-volumes-4733 STEP: Deleting the previously created pod Jan 11 20:29:44.828: INFO: Deleting pod "pvc-volume-tester-zdmcm" in namespace "csi-mock-volumes-4733" Jan 11 20:29:44.919: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zdmcm" to be fully deleted STEP: Checking CSI driver logs Jan 11 20:29:47.195: INFO: CSI driver logs: mock driver started gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""} gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4733","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4733","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4733","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4733","max_volumes_per_node":2},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"csi-1e098f02384c661c389558e207bd881a7ee5199f9da3f87265444e31156608b1","target_path":"/var/lib/kubelet/pods/ff5f6d0c-5747-46c1-aaac-b5c439116c86/volumes/kubernetes.io~csi/my-volume/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"pvc-volume-tester-zdmcm","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-4733","csi.storage.k8s.io/pod.uid":"ff5f6d0c-5747-46c1-aaac-b5c439116c86","csi.storage.k8s.io/serviceAccount.name":"default"}},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-1e098f02384c661c389558e207bd881a7ee5199f9da3f87265444e31156608b1","target_path":"/var/lib/kubelet/pods/ff5f6d0c-5747-46c1-aaac-b5c439116c86/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":""} Jan 11 20:29:47.195: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Jan 11 20:29:47.195: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-zdmcm Jan 11 20:29:47.195: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-4733 Jan 11 20:29:47.195: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: ff5f6d0c-5747-46c1-aaac-b5c439116c86 Jan 11 20:29:47.195: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true Jan 11 20:29:47.195: INFO: Found 
NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}} STEP: Deleting pod pvc-volume-tester-zdmcm Jan 11 20:29:47.195: INFO: Deleting pod "pvc-volume-tester-zdmcm" in namespace "csi-mock-volumes-4733" STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 20:29:47.286: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4733/csi-attacher Jan 11 20:29:47.376: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4733 Jan 11 20:29:47.467: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4733 Jan 11 20:29:47.557: INFO: deleting *v1.Role: csi-mock-volumes-4733/external-attacher-cfg-csi-mock-volumes-4733 Jan 11 20:29:47.647: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4733/csi-attacher-role-cfg Jan 11 20:29:47.738: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4733/csi-provisioner Jan 11 20:29:47.828: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4733 Jan 11 20:29:47.919: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4733 Jan 11 20:29:48.013: INFO: deleting *v1.Role: csi-mock-volumes-4733/external-provisioner-cfg-csi-mock-volumes-4733 Jan 11 20:29:48.103: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4733/csi-provisioner-role-cfg Jan 11 20:29:48.194: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4733/csi-resizer Jan 11 20:29:48.285: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4733 Jan 11 20:29:48.376: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4733 Jan 11 20:29:48.466: INFO: deleting *v1.Role: csi-mock-volumes-4733/external-resizer-cfg-csi-mock-volumes-4733 Jan 11 20:29:48.559: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4733/csi-resizer-role-cfg Jan 11 20:29:48.650: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4733/csi-mock Jan 11 20:29:48.740: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4733 Jan 11 20:29:48.831: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4733 Jan 11 20:29:48.922: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4733 Jan 11 20:29:49.013: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4733 Jan 11 20:29:49.103: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4733 Jan 11 20:29:49.194: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4733 Jan 11 20:29:49.286: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4733/csi-mockplugin Jan 11 20:29:49.377: INFO: deleting *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-4733 Jan 11 20:29:49.470: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4733/csi-mockplugin-attacher [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:49.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-4733" for this suite. 
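The gRPC trace above shows the kubelet calling NodePublishVolume with csi.storage.k8s.io/ephemeral=true plus the pod's name, namespace, UID and service account in volume_context; that is how a CSI driver recognizes an inline (ephemeral) volume declared directly in the pod spec rather than through a PVC. A minimal sketch of such a pod, assuming a hypothetical driver csi.example.com that supports ephemeral inline volumes (the test used the mock driver csi-mock-csi-mock-volumes-4733):

# Pod with a CSI volume declared inline; driver name and attributes are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: inline-volume-demo
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: my-volume
      mountPath: /data
  volumes:
  - name: my-volume
    csi:
      driver: csi.example.com          # hypothetical driver name
      volumeAttributes:
        foo: bar                       # passed through to the driver in volume_context
EOF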
Jan 11 20:30:02.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:05.266: INFO: namespace csi-mock-volumes-4733 deletion completed in 15.523966144s • [SLOW TEST:40.338 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:297 contain ephemeral=true when using inline volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:19.443: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename prestop STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-2654 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:173 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating server pod server in namespace prestop-2654 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2654 STEP: Deleting pre-stop pod Jan 11 20:29:29.992: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:29:30.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2654" for this suite. 
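In the PreStop spec above, deleting the tester pod triggers its preStop lifecycle hook, which notifies the server pod before the container is stopped; the server's state dump then reports "prestop": 1. A minimal sketch of a pod with a preStop hook (names and the hook command are illustrative, not the exact manifests used by the test):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container when deletion starts, before SIGTERM is sent;
          # the e2e tester pod uses this window to contact the server pod.
          command: ["sh", "-c", "echo goodbye > /tmp/prestop"]
EOF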
Jan 11 20:30:14.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:17.772: INFO: namespace prestop-2654 deletion completed in 47.59641418s • [SLOW TEST:58.329 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should call prestop when killing a pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:04.737: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-8667 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jan 11 20:30:07.828: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8667 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-c04397e5-58e9-4701-8c46-f4f72517a36a-backend && ln -s /tmp/local-volume-test-c04397e5-58e9-4701-8c46-f4f72517a36a-backend /tmp/local-volume-test-c04397e5-58e9-4701-8c46-f4f72517a36a' Jan 11 20:30:09.293: INFO: stderr: "" Jan 11 20:30:09.293: INFO: stdout: "" STEP: Creating local PVCs and PVs Jan 11 20:30:09.293: INFO: Creating a PV followed by a PVC Jan 11 20:30:09.472: INFO: Waiting for PV local-pvgll7v to bind to PVC pvc-zdrww Jan 11 20:30:09.472: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-zdrww] to have phase Bound Jan 11 20:30:09.562: INFO: PersistentVolumeClaim pvc-zdrww found and phase=Bound (89.546957ms) Jan 11 20:30:09.562: INFO: Waiting up to 3m0s for PersistentVolume local-pvgll7v to have phase Bound Jan 11 20:30:09.652: INFO: PersistentVolume local-pvgll7v found and phase=Bound (89.681591ms) [BeforeEach] Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jan 11 20:30:12.189: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec security-context-b11ab410-5c48-4b23-a625-323ab5afc468 --namespace=persistent-local-volumes-test-8667 -- stat -c %g /mnt/volume1' Jan 11 20:30:13.618: INFO: stderr: "" Jan 11 20:30:13.618: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod security-context-b11ab410-5c48-4b23-a625-323ab5afc468 in namespace persistent-local-volumes-test-8667 [AfterEach] [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:30:13.709: INFO: Deleting PersistentVolumeClaim "pvc-zdrww" Jan 11 20:30:13.800: INFO: Deleting PersistentVolume "local-pvgll7v" STEP: Removing the test directory Jan 11 20:30:13.895: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-8667 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c04397e5-58e9-4701-8c46-f4f72517a36a && rm -r /tmp/local-volume-test-c04397e5-58e9-4701-8c46-f4f72517a36a-backend' Jan 11 20:30:15.261: INFO: stderr: "" Jan 11 20:30:15.261: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:30:15.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8667" for this suite. 
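The dir-link spec above pre-binds a local PersistentVolume to a claim, mounts it into a pod whose securityContext sets fsGroup: 1234, and verifies with stat -c %g that the volume's group ownership was changed to 1234. A minimal sketch of the three objects involved, with illustrative names and host path (the node name is the one used in this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-volume-demo       # must already exist on the selected node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["ip-10-250-27-25.ec2.internal"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 1234                      # applied to the volume at mount time
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume1
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: local-pvc-demo
EOF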
Jan 11 20:30:23.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:27.010: INFO: namespace persistent-local-volumes-test-8667 deletion completed in 11.567745488s • [SLOW TEST:22.273 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:05.288: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1217 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:371 STEP: creating the pod from apiVersion: v1 kind: Pod metadata: name: httpd labels: name: httpd spec: containers: - name: httpd image: docker.io/library/httpd:2.4.38-alpine ports: - containerPort: 80 readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 5 timeoutSeconds: 5 Jan 11 20:30:05.949: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-1217' Jan 11 20:30:07.073: INFO: stderr: "" Jan 11 20:30:07.073: INFO: stdout: "pod/httpd created\n" Jan 11 20:30:07.073: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Jan 11 20:30:07.073: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-1217" to be "running and ready" Jan 11 20:30:07.162: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.618776ms Jan 11 20:30:09.252: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.178967396s Jan 11 20:30:11.341: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.268492355s Jan 11 20:30:13.431: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.358422731s Jan 11 20:30:15.520: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 8.44749856s Jan 11 20:30:15.520: INFO: Pod "httpd" satisfied condition "running and ready" Jan 11 20:30:15.520: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [httpd] [It] should support exec using resource/name /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:421 STEP: executing a command in the container Jan 11 20:30:15.520: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=kubectl-1217 pod/httpd echo running in container' Jan 11 20:30:16.910: INFO: stderr: "" Jan 11 20:30:16.910: INFO: stdout: "running in container\n" [AfterEach] Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:377 STEP: using delete to clean up resources Jan 11 20:30:16.910: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete --grace-period=0 --force -f - --namespace=kubectl-1217' Jan 11 20:30:17.528: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 20:30:17.528: INFO: stdout: "pod \"httpd\" force deleted\n" Jan 11 20:30:17.528: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get rc,svc -l name=httpd --no-headers --namespace=kubectl-1217' Jan 11 20:30:18.125: INFO: stderr: "No resources found in kubectl-1217 namespace.\n" Jan 11 20:30:18.125: INFO: stdout: "" Jan 11 20:30:18.125: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -l name=httpd --namespace=kubectl-1217 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 20:30:18.619: INFO: stderr: "" Jan 11 20:30:18.619: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:30:18.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1217" for this suite. 
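The exec call above uses the resource/name form (pod/httpd) instead of a bare pod name, so kubectl resolves the target from a typed reference before opening the exec session. Equivalent one-liners against the same pod (server and kubeconfig flags omitted); the trailing -- separates kubectl's own flags from the command run in the container:

kubectl exec --namespace=kubectl-1217 pod/httpd -- echo running in container
kubectl exec --namespace=kubectl-1217 httpd -- echo running in container   # bare-name form, same operation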
Jan 11 20:30:24.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:28.203: INFO: namespace kubectl-1217 deletion completed in 9.492923945s • [SLOW TEST:22.915 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369 should support exec using resource/name /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:421 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:03.776: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7914 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating service multi-endpoint-test in namespace services-7914 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7914 to expose endpoints map[] Jan 11 20:30:04.603: INFO: successfully validated that service multi-endpoint-test in namespace services-7914 exposes endpoints map[] (90.671023ms elapsed) STEP: Creating pod pod1 in namespace services-7914 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7914 to expose endpoints map[pod1:[100]] Jan 11 20:30:07.241: INFO: successfully validated that service multi-endpoint-test in namespace services-7914 exposes endpoints map[pod1:[100]] (2.544330843s elapsed) STEP: Creating pod pod2 in namespace services-7914 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7914 to expose endpoints map[pod1:[100] pod2:[101]] Jan 11 20:30:08.870: INFO: successfully validated that service multi-endpoint-test in namespace services-7914 exposes endpoints map[pod1:[100] pod2:[101]] (1.538600495s elapsed) STEP: Deleting pod pod1 in namespace services-7914 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7914 to expose endpoints map[pod2:[101]] Jan 11 20:30:10.323: INFO: successfully validated that service multi-endpoint-test in namespace services-7914 exposes endpoints map[pod2:[101]] (1.362253965s elapsed) STEP: Deleting pod pod2 in namespace services-7914 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7914 to expose endpoints map[] Jan 11 20:30:10.504: INFO: successfully validated that service multi-endpoint-test in namespace services-7914 exposes endpoints map[] (89.444772ms elapsed) [AfterEach] [sig-network] Services 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:30:10.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7914" for this suite. Jan 11 20:30:24.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:28.284: INFO: namespace services-7914 deletion completed in 17.592391413s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:24.508 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:29:49.239: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pv STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pv-4827 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:50 [It] should allow deletion of pod with invalid volume : projected /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:56 Jan 11 20:30:19.969: INFO: Deleting pod "pv-4827"/"pod-ephm-test-projected-fvf7" Jan 11 20:30:19.969: INFO: Deleting pod "pod-ephm-test-projected-fvf7" in namespace "pv-4827" Jan 11 20:30:20.060: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-fvf7" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:30:24.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4827" for this suite. 
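The multiport Services spec above checks that the Endpoints object tracks pods per named port: pod1 appears only under the port that resolves to containerPort 100 and pod2 only under the one resolving to 101, and entries disappear as the pods are deleted. A minimal sketch of a two-port service with named targetPorts (names, ports and image are illustrative, not the exact manifests used by the test):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-demo
spec:
  selector:
    app: multi-endpoint-demo
  ports:
  - name: portname1
    port: 80
    targetPort: portname1              # named port, matched per pod
  - name: portname2
    port: 81
    targetPort: portname2
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: multi-endpoint-demo
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    ports:
    - name: portname1
      containerPort: 100
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    app: multi-endpoint-demo
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    ports:
    - name: portname2
      containerPort: 101
EOF
# Only pods that define the named port are listed under it:
kubectl get endpoints multi-endpoint-demo -o yaml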
Jan 11 20:30:30.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:34.047: INFO: namespace pv-4827 deletion completed in 9.71751179s • [SLOW TEST:44.808 seconds] [sig-storage] Ephemeralstorage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:54 should allow deletion of pod with invalid volume : projected /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:56 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:28.215: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4135 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating secret with name secret-test-map-45bd8ad2-b15f-4359-8d43-daa3e0b63fe5 STEP: Creating a pod to test consume secrets Jan 11 20:30:29.034: INFO: Waiting up to 5m0s for pod "pod-secrets-c8abdae4-7309-4ea0-acfb-d093996af7af" in namespace "secrets-4135" to be "success or failure" Jan 11 20:30:29.123: INFO: Pod "pod-secrets-c8abdae4-7309-4ea0-acfb-d093996af7af": Phase="Pending", Reason="", readiness=false. Elapsed: 89.042446ms Jan 11 20:30:31.213: INFO: Pod "pod-secrets-c8abdae4-7309-4ea0-acfb-d093996af7af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.17873183s STEP: Saw pod success Jan 11 20:30:31.213: INFO: Pod "pod-secrets-c8abdae4-7309-4ea0-acfb-d093996af7af" satisfied condition "success or failure" Jan 11 20:30:31.302: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-secrets-c8abdae4-7309-4ea0-acfb-d093996af7af container secret-volume-test: STEP: delete the pod Jan 11 20:30:31.496: INFO: Waiting for pod pod-secrets-c8abdae4-7309-4ea0-acfb-d093996af7af to disappear Jan 11 20:30:31.585: INFO: Pod pod-secrets-c8abdae4-7309-4ea0-acfb-d093996af7af no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:30:31.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4135" for this suite. 
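The Secrets spec above mounts a secret as a volume and remaps a key to a custom file name via items, so the container reads the value from the mapped path rather than from a file named after the key. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      items:
      - key: data-1                    # key in the Secret
        path: new-path-data-1          # file name under the mount path
EOF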
Jan 11 20:30:39.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:43.163: INFO: namespace secrets-4135 deletion completed in 11.487737468s • [SLOW TEST:14.948 seconds] [sig-storage] Secrets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:27.020: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename init-container STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-4139 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: creating the pod Jan 11 20:30:27.657: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:30:31.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4139" for this suite. 
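The InitContainer spec above verifies that init containers run to completion, in order, before the regular container of a restartPolicy: Never pod starts. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["true"]
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo main container ran"]
EOF
# Both init containers must exit 0 before run1 starts; the pod then completes as Succeeded.
kubectl get pod init-demo -o jsonpath='{.status.phase}{"\n"}'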
Jan 11 20:30:40.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:43.316: INFO: namespace init-container-4139 deletion completed in 11.569158582s • [SLOW TEST:16.296 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:43.318: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5998 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [It] should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: executing a command with run --rm and attach with stdin Jan 11 20:30:43.954: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config --namespace=kubectl-5998 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 11 20:30:47.259: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Jan 11 20:30:47.260: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:30:49.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5998" for this suite. 
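The kubectl run --rm spec above relies on the deprecated --generator=job/v1 path: it creates a Job, attaches to the pod's stdin, and deletes the Job once the attached session ends, which is why the output contains both the piped stdin and the job.batch deletion message. A rough equivalent without the deprecated generator, creating a bare pod instead of a Job (the pod name is illustrative):

kubectl run rm-demo --image=docker.io/library/busybox:1.29 --rm -i --restart=Never -- sh -c 'cat && echo "stdin closed"'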
Jan 11 20:30:55.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:30:59.093: INFO: namespace kubectl-5998 deletion completed in 9.564073083s • [SLOW TEST:15.776 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1751 should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:43.170: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2805 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Kubectl logs /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1274 STEP: creating an pod Jan 11 20:30:43.807: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.6 --namespace=kubectl-2805 -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 11 20:30:44.258: INFO: stderr: "" Jan 11 20:30:44.259: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Waiting for log generator to start. Jan 11 20:30:44.259: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 11 20:30:44.259: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2805" to be "running and ready, or succeeded" Jan 11 20:30:44.348: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 89.211192ms Jan 11 20:30:46.439: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.180027527s Jan 11 20:30:46.439: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 11 20:30:46.439: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Jan 11 20:30:46.439: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs logs-generator logs-generator --namespace=kubectl-2805' Jan 11 20:30:47.114: INFO: stderr: "" Jan 11 20:30:47.114: INFO: stdout: "I0111 20:30:45.003739 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/jhn4 223\nI0111 20:30:45.204131 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/tpr 370\nI0111 20:30:45.404107 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/dd8 461\nI0111 20:30:45.603849 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/dpnk 339\nI0111 20:30:45.803911 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/x8j 514\nI0111 20:30:46.003881 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/xf65 313\nI0111 20:30:46.203948 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/rrc7 458\nI0111 20:30:46.404116 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/6pv 522\nI0111 20:30:46.604094 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/xl9 583\nI0111 20:30:46.804698 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/n2w6 394\nI0111 20:30:47.003924 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/tss 558\n" STEP: limiting log lines Jan 11 20:30:47.114: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs logs-generator logs-generator --namespace=kubectl-2805 --tail=1' Jan 11 20:30:47.642: INFO: stderr: "" Jan 11 20:30:47.642: INFO: stdout: "I0111 20:30:47.403898 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/r4x 346\n" STEP: limiting log bytes Jan 11 20:30:47.642: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs logs-generator logs-generator --namespace=kubectl-2805 --limit-bytes=1' Jan 11 20:30:48.185: INFO: stderr: "" Jan 11 20:30:48.185: INFO: stdout: "I" STEP: exposing timestamps Jan 11 20:30:48.185: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs logs-generator logs-generator --namespace=kubectl-2805 --tail=1 --timestamps' Jan 11 20:30:48.743: INFO: stderr: "" Jan 11 20:30:48.743: INFO: stdout: "2020-01-11T20:30:48.603998381Z I0111 20:30:48.603868 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/9wn 260\n" STEP: restricting to a time range Jan 11 20:30:51.243: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs logs-generator logs-generator --namespace=kubectl-2805 --since=1s' Jan 11 20:30:51.814: INFO: stderr: "" Jan 11 20:30:51.814: INFO: stdout: "I0111 20:30:50.803875 1 logs_generator.go:76] 29 POST /api/v1/namespaces/kube-system/pods/prr 574\nI0111 20:30:51.003858 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/q8t 262\nI0111 20:30:51.203909 1 logs_generator.go:76] 31 POST /api/v1/namespaces/kube-system/pods/kv5 444\nI0111 20:30:51.403911 1 logs_generator.go:76] 32 PUT 
/api/v1/namespaces/ns/pods/5r8k 579\nI0111 20:30:51.603894 1 logs_generator.go:76] 33 PUT /api/v1/namespaces/default/pods/lv6w 497\n" Jan 11 20:30:51.814: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config logs logs-generator logs-generator --namespace=kubectl-2805 --since=24h' Jan 11 20:30:52.409: INFO: stderr: "" Jan 11 20:30:52.409: INFO: stdout: "I0111 20:30:45.003739 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/jhn4 223\nI0111 20:30:45.204131 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/tpr 370\nI0111 20:30:45.404107 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/dd8 461\nI0111 20:30:45.603849 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/dpnk 339\nI0111 20:30:45.803911 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/x8j 514\nI0111 20:30:46.003881 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/xf65 313\nI0111 20:30:46.203948 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/rrc7 458\nI0111 20:30:46.404116 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/6pv 522\nI0111 20:30:46.604094 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/xl9 583\nI0111 20:30:46.804698 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/n2w6 394\nI0111 20:30:47.003924 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/tss 558\nI0111 20:30:47.203860 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/cdg 528\nI0111 20:30:47.403898 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/r4x 346\nI0111 20:30:47.603924 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/dsfd 348\nI0111 20:30:47.803855 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/kgt 433\nI0111 20:30:48.003899 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/prd2 279\nI0111 20:30:48.203917 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/wrf 436\nI0111 20:30:48.403898 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/tv4 360\nI0111 20:30:48.603868 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/9wn 260\nI0111 20:30:48.803856 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/gjtw 247\nI0111 20:30:49.003860 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/qfx7 497\nI0111 20:30:49.203867 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/q9jw 420\nI0111 20:30:49.403868 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/fx2 503\nI0111 20:30:49.603861 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/4tj 529\nI0111 20:30:49.803871 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/brs 243\nI0111 20:30:50.004928 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/z77 586\nI0111 20:30:50.203838 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/st8 312\nI0111 20:30:50.403889 1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/v7k 545\nI0111 20:30:50.603853 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/qlss 420\nI0111 20:30:50.803875 1 logs_generator.go:76] 29 POST /api/v1/namespaces/kube-system/pods/prr 574\nI0111 20:30:51.003858 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/q8t 262\nI0111 20:30:51.203909 1 logs_generator.go:76] 31 POST /api/v1/namespaces/kube-system/pods/kv5 444\nI0111 20:30:51.403911 1 logs_generator.go:76] 32 PUT 
/api/v1/namespaces/ns/pods/5r8k 579\nI0111 20:30:51.603894 1 logs_generator.go:76] 33 PUT /api/v1/namespaces/default/pods/lv6w 497\nI0111 20:30:51.803894 1 logs_generator.go:76] 34 POST /api/v1/namespaces/default/pods/kgr 310\nI0111 20:30:52.003932 1 logs_generator.go:76] 35 PUT /api/v1/namespaces/default/pods/fpvm 249\nI0111 20:30:52.203851 1 logs_generator.go:76] 36 POST /api/v1/namespaces/kube-system/pods/rrcx 525\n" [AfterEach] Kubectl logs /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1280 Jan 11 20:30:52.409: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config delete pod logs-generator --namespace=kubectl-2805' Jan 11 20:30:58.049: INFO: stderr: "" Jan 11 20:30:58.049: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:30:58.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2805" for this suite. Jan 11 20:31:04.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:31:07.642: INFO: namespace kubectl-2805 deletion completed in 9.501656709s • [SLOW TEST:24.472 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1270 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:59.100: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4292 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 20:30:59.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c65406e1-6045-4749-8929-3f9db8a293e7" in namespace "downward-api-4292" to be "success or failure" Jan 11 20:30:59.933: INFO: Pod "downwardapi-volume-c65406e1-6045-4749-8929-3f9db8a293e7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 89.880548ms Jan 11 20:31:02.023: INFO: Pod "downwardapi-volume-c65406e1-6045-4749-8929-3f9db8a293e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179670223s STEP: Saw pod success Jan 11 20:31:02.023: INFO: Pod "downwardapi-volume-c65406e1-6045-4749-8929-3f9db8a293e7" satisfied condition "success or failure" Jan 11 20:31:02.112: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod downwardapi-volume-c65406e1-6045-4749-8929-3f9db8a293e7 container client-container: STEP: delete the pod Jan 11 20:31:02.302: INFO: Waiting for pod downwardapi-volume-c65406e1-6045-4749-8929-3f9db8a293e7 to disappear Jan 11 20:31:02.391: INFO: Pod downwardapi-volume-c65406e1-6045-4749-8929-3f9db8a293e7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:31:02.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4292" for this suite. Jan 11 20:31:08.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:31:12.046: INFO: namespace downward-api-4292 deletion completed in 9.564136815s • [SLOW TEST:12.946 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:03.935: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1509 STEP: Waiting for a default service account to be provisioned in namespace [It] should delete jobs and pods created by cronjob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1074 STEP: Create the cronjob STEP: Wait for the CronJob to create new Job STEP: Delete the cronjob STEP: Verify if cronjob does not leave jobs nor pods behind STEP: expected 0 jobs, got 1 jobs STEP: expected 0 pods, got 1 pods STEP: expected 0 jobs, got 1 jobs STEP: Gathering metrics W0111 20:31:04.667067 8610 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
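The [sig-storage] Downward API volume spec above projects the container's own memory limit into a file through a resourceFieldRef, and the test passes once the container prints the expected value. A minimal sketch with illustrative names (the projected file contains the limit expressed in bytes):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF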
Jan 11 20:31:04.667: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:31:04.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1509" for this suite. Jan 11 20:31:11.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:31:14.364: INFO: namespace gc-1509 deletion completed in 9.596149005s • [SLOW TEST:70.430 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete jobs and pods created by cronjob /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1074 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:17.806: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-181 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath file is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239 STEP: deploying csi-hostpath driver Jan 11 20:30:18.644: INFO: creating *v1.ServiceAccount: provisioning-181/csi-attacher Jan 11 20:30:18.735: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-181 Jan 11 20:30:18.735: INFO: Define cluster role external-attacher-runner-provisioning-181 Jan 11 20:30:18.825: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-181 Jan 11 20:30:18.915: INFO: creating *v1.Role: provisioning-181/external-attacher-cfg-provisioning-181 Jan 11 20:30:19.005: INFO: creating *v1.RoleBinding: 
provisioning-181/csi-attacher-role-cfg Jan 11 20:30:19.096: INFO: creating *v1.ServiceAccount: provisioning-181/csi-provisioner Jan 11 20:30:19.186: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-181 Jan 11 20:30:19.186: INFO: Define cluster role external-provisioner-runner-provisioning-181 Jan 11 20:30:19.276: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-181 Jan 11 20:30:19.366: INFO: creating *v1.Role: provisioning-181/external-provisioner-cfg-provisioning-181 Jan 11 20:30:19.456: INFO: creating *v1.RoleBinding: provisioning-181/csi-provisioner-role-cfg Jan 11 20:30:19.546: INFO: creating *v1.ServiceAccount: provisioning-181/csi-snapshotter Jan 11 20:30:19.636: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-181 Jan 11 20:30:19.636: INFO: Define cluster role external-snapshotter-runner-provisioning-181 Jan 11 20:30:19.726: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-181 Jan 11 20:30:19.816: INFO: creating *v1.Role: provisioning-181/external-snapshotter-leaderelection-provisioning-181 Jan 11 20:30:19.906: INFO: creating *v1.RoleBinding: provisioning-181/external-snapshotter-leaderelection Jan 11 20:30:19.996: INFO: creating *v1.ServiceAccount: provisioning-181/csi-resizer Jan 11 20:30:20.086: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-181 Jan 11 20:30:20.086: INFO: Define cluster role external-resizer-runner-provisioning-181 Jan 11 20:30:20.176: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-181 Jan 11 20:30:20.266: INFO: creating *v1.Role: provisioning-181/external-resizer-cfg-provisioning-181 Jan 11 20:30:20.356: INFO: creating *v1.RoleBinding: provisioning-181/csi-resizer-role-cfg Jan 11 20:30:20.445: INFO: creating *v1.Service: provisioning-181/csi-hostpath-attacher Jan 11 20:30:20.540: INFO: creating *v1.StatefulSet: provisioning-181/csi-hostpath-attacher Jan 11 20:30:20.630: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-181 Jan 11 20:30:20.720: INFO: creating *v1.Service: provisioning-181/csi-hostpathplugin Jan 11 20:30:20.814: INFO: creating *v1.StatefulSet: provisioning-181/csi-hostpathplugin Jan 11 20:30:20.905: INFO: creating *v1.Service: provisioning-181/csi-hostpath-provisioner Jan 11 20:30:20.998: INFO: creating *v1.StatefulSet: provisioning-181/csi-hostpath-provisioner Jan 11 20:30:21.089: INFO: creating *v1.Service: provisioning-181/csi-hostpath-resizer Jan 11 20:30:21.189: INFO: creating *v1.StatefulSet: provisioning-181/csi-hostpath-resizer Jan 11 20:30:21.279: INFO: creating *v1.Service: provisioning-181/csi-snapshotter Jan 11 20:30:21.373: INFO: creating *v1.StatefulSet: provisioning-181/csi-snapshotter Jan 11 20:30:21.463: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-181 Jan 11 20:30:21.553: INFO: Test running for native CSI Driver, not checking metrics Jan 11 20:30:21.553: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-181-csi-hostpath-provisioning-181-scnhkkm STEP: creating a claim Jan 11 20:30:21.642: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:30:21.734: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathhpz56] to have phase Bound Jan 11 20:30:21.825: INFO: PersistentVolumeClaim csi-hostpathhpz56 found but phase is Pending instead of Bound. Jan 11 20:30:23.915: INFO: PersistentVolumeClaim csi-hostpathhpz56 found but phase is Pending instead of Bound. 
Jan 11 20:30:26.005: INFO: PersistentVolumeClaim csi-hostpathhpz56 found but phase is Pending instead of Bound. Jan 11 20:30:28.095: INFO: PersistentVolumeClaim csi-hostpathhpz56 found but phase is Pending instead of Bound. Jan 11 20:30:30.185: INFO: PersistentVolumeClaim csi-hostpathhpz56 found but phase is Pending instead of Bound. Jan 11 20:30:32.276: INFO: PersistentVolumeClaim csi-hostpathhpz56 found but phase is Pending instead of Bound. Jan 11 20:30:34.366: INFO: PersistentVolumeClaim csi-hostpathhpz56 found but phase is Pending instead of Bound. Jan 11 20:30:36.456: INFO: PersistentVolumeClaim csi-hostpathhpz56 found but phase is Pending instead of Bound. Jan 11 20:30:38.546: INFO: PersistentVolumeClaim csi-hostpathhpz56 found and phase=Bound (16.811568953s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-5c64 STEP: Checking for subpath error in container status Jan 11 20:30:57.002: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-5c64" in namespace "provisioning-181" Jan 11 20:30:57.093: INFO: Wait up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-5c64" to be fully deleted STEP: Deleting pod Jan 11 20:31:05.273: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-5c64" in namespace "provisioning-181" STEP: Deleting pvc Jan 11 20:31:05.363: INFO: Deleting PersistentVolumeClaim "csi-hostpathhpz56" Jan 11 20:31:05.454: INFO: Waiting up to 5m0s for PersistentVolume pvc-0bb48d4b-df83-4973-9598-d9b8ec7639b7 to get deleted Jan 11 20:31:05.544: INFO: PersistentVolume pvc-0bb48d4b-df83-4973-9598-d9b8ec7639b7 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 20:31:05.635: INFO: deleting *v1.ServiceAccount: provisioning-181/csi-attacher Jan 11 20:31:05.726: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-181 Jan 11 20:31:05.817: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-181 Jan 11 20:31:05.909: INFO: deleting *v1.Role: provisioning-181/external-attacher-cfg-provisioning-181 Jan 11 20:31:06.000: INFO: deleting *v1.RoleBinding: provisioning-181/csi-attacher-role-cfg Jan 11 20:31:06.092: INFO: deleting *v1.ServiceAccount: provisioning-181/csi-provisioner Jan 11 20:31:06.184: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-181 Jan 11 20:31:06.275: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-181 Jan 11 20:31:06.367: INFO: deleting *v1.Role: provisioning-181/external-provisioner-cfg-provisioning-181 Jan 11 20:31:06.459: INFO: deleting *v1.RoleBinding: provisioning-181/csi-provisioner-role-cfg Jan 11 20:31:06.550: INFO: deleting *v1.ServiceAccount: provisioning-181/csi-snapshotter Jan 11 20:31:06.642: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-181 Jan 11 20:31:06.733: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-181 Jan 11 20:31:06.824: INFO: deleting *v1.Role: provisioning-181/external-snapshotter-leaderelection-provisioning-181 Jan 11 20:31:06.916: INFO: deleting *v1.RoleBinding: provisioning-181/external-snapshotter-leaderelection Jan 11 20:31:07.007: INFO: deleting *v1.ServiceAccount: provisioning-181/csi-resizer Jan 11 20:31:07.099: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-181 Jan 11 20:31:07.191: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-181 Jan 11 20:31:07.283: INFO: deleting *v1.Role: provisioning-181/external-resizer-cfg-provisioning-181 Jan 11 20:31:07.374: INFO: deleting *v1.RoleBinding: 
provisioning-181/csi-resizer-role-cfg Jan 11 20:31:07.466: INFO: deleting *v1.Service: provisioning-181/csi-hostpath-attacher Jan 11 20:31:07.562: INFO: deleting *v1.StatefulSet: provisioning-181/csi-hostpath-attacher Jan 11 20:31:07.655: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-181 Jan 11 20:31:07.746: INFO: deleting *v1.Service: provisioning-181/csi-hostpathplugin Jan 11 20:31:07.843: INFO: deleting *v1.StatefulSet: provisioning-181/csi-hostpathplugin Jan 11 20:31:07.935: INFO: deleting *v1.Service: provisioning-181/csi-hostpath-provisioner Jan 11 20:31:08.031: INFO: deleting *v1.StatefulSet: provisioning-181/csi-hostpath-provisioner Jan 11 20:31:08.123: INFO: deleting *v1.Service: provisioning-181/csi-hostpath-resizer Jan 11 20:31:08.218: INFO: deleting *v1.StatefulSet: provisioning-181/csi-hostpath-resizer Jan 11 20:31:08.309: INFO: deleting *v1.Service: provisioning-181/csi-snapshotter Jan 11 20:31:08.405: INFO: deleting *v1.StatefulSet: provisioning-181/csi-snapshotter Jan 11 20:31:08.497: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-181 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:31:08.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled WARNING: pod log: csi-hostpathplugin-0/hostpath: context canceled WARNING: pod log: csi-hostpathplugin-0/liveness-probe: context canceled STEP: Destroying namespace "provisioning-181" for this suite. WARNING: pod log: csi-hostpath-attacher-0/csi-attacher: context canceled WARNING: pod log: csi-hostpathplugin-0/hostpath: context canceled WARNING: pod log: csi-hostpathplugin-0/liveness-probe: context canceled Jan 11 20:31:16.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:31:20.265: INFO: namespace provisioning-181 deletion completed in 11.585186371s • [SLOW TEST:62.459 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should fail if subpath file is outside the volume [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:239 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:31:07.659: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3489 STEP: Waiting for a default service account to be provisioned in namespace [It] should verify 
ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:31:25.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3489" for this suite. Jan 11 20:31:32.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:31:35.396: INFO: namespace resourcequota-3489 deletion completed in 9.485661413s • [SLOW TEST:27.737 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:31:20.276: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename deployment STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-6964 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 [It] test Deployment ReplicaSet orphaning and adoption regarding controllerRef /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:110 Jan 11 20:31:20.915: INFO: Creating Deployment "test-orphan-deployment" Jan 11 20:31:21.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371480, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371480, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371480, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371480, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-595b5b9587\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 20:31:23.185: INFO: Verifying Deployment "test-orphan-deployment" has only one ReplicaSet Jan 11 20:31:23.275: INFO: Obtaining the ReplicaSet's UID Jan 11 20:31:23.275: INFO: Checking the ReplicaSet has the right controllerRef Jan 11 20:31:23.366: INFO: Deleting Deployment "test-orphan-deployment" and orphaning its ReplicaSet STEP: Wait for the ReplicaSet to be orphaned Jan 11 20:31:25.547: INFO: Creating Deployment "test-adopt-deployment" to adopt the ReplicaSet Jan 11 20:31:25.728: INFO: Waiting for the ReplicaSet to have the right controllerRef Jan 11 20:31:25.818: INFO: Verifying no extra ReplicaSet is created (Deployment "test-adopt-deployment" still has only one ReplicaSet after adoption) Jan 11 20:31:25.908: INFO: Verifying the ReplicaSet has the same UID as the orphaned ReplicaSet [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62 Jan 11 20:31:25.998: INFO: Deployment "test-adopt-deployment": &Deployment{ObjectMeta:{test-adopt-deployment deployment-6964 /apis/apps/v1/namespaces/deployment-6964/deployments/test-adopt-deployment 4ecdc9ca-36d5-44a4-a246-47d4c7096e16 85627 1 2020-01-11 20:31:25 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0017c8438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-11 20:31:25 +0000 UTC,LastTransitionTime:2020-01-11 20:31:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-orphan-deployment-595b5b9587" has successfully progressed.,LastUpdateTime:2020-01-11 20:31:25 +0000 UTC,LastTransitionTime:2020-01-11 20:31:25 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 11 
20:31:26.088: INFO: New ReplicaSet "test-orphan-deployment-595b5b9587" of Deployment "test-adopt-deployment": &ReplicaSet{ObjectMeta:{test-orphan-deployment-595b5b9587 deployment-6964 /apis/apps/v1/namespaces/deployment-6964/replicasets/test-orphan-deployment-595b5b9587 ba2a75b6-d50e-49d3-830e-222427d51ce5 85625 1 2020-01-11 20:31:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-adopt-deployment 4ecdc9ca-36d5-44a4-a246-47d4c7096e16 0xc0017c8d97 0xc0017c8d98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0017c8e08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 20:31:26.179: INFO: Pod "test-orphan-deployment-595b5b9587-dzpst" is available: &Pod{ObjectMeta:{test-orphan-deployment-595b5b9587-dzpst test-orphan-deployment-595b5b9587- deployment-6964 /api/v1/namespaces/deployment-6964/pods/test-orphan-deployment-595b5b9587-dzpst 2c515fbf-c321-4d96-bb26-b9057c167187 85605 0 2020-01-11 20:31:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:100.64.1.140/32 kubernetes.io/psp:e2e-test-privileged-psp] [{apps/v1 ReplicaSet test-orphan-deployment-595b5b9587 ba2a75b6-d50e-49d3-830e-222427d51ce5 0xc004176387 0xc004176388}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fb8qh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fb8qh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fb8qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-10-250-27-25.ec2.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:31:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:31:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:31:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-11 20:31:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.250.27.25,PodIP:100.64.1.140,StartTime:2020-01-11 20:31:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-11 20:31:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1390cc473c92aabb2fed566f470626fb27d03519589823df2917e26bb384f265,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.64.1.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:31:26.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6964" for this suite. Jan 11 20:31:32.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:31:35.854: INFO: namespace deployment-6964 deletion completed in 9.583596123s • [SLOW TEST:15.579 seconds] [sig-apps] Deployment /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 test Deployment ReplicaSet orphaning and adoption regarding controllerRef /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:110 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:34.062: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-watch STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-watch-4428 STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:30:34.699: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Creating first CR Jan 11 20:30:35.620: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-11T20:30:35Z generation:1 name:name1 resourceVersion:85115 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c579d3bd-c0fe-4bac-a123-3a8ca6a81b07] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 11 20:30:45.711: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-11T20:30:45Z generation:1 name:name2 resourceVersion:85183 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d50a73ef-03e6-4556-b2e4-cf91dfef5b85] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 11 20:30:55.801: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-01-11T20:30:35Z generation:2 name:name1 resourceVersion:85225 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c579d3bd-c0fe-4bac-a123-3a8ca6a81b07] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 11 20:31:05.892: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-11T20:30:45Z generation:2 name:name2 resourceVersion:85296 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d50a73ef-03e6-4556-b2e4-cf91dfef5b85] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 11 20:31:15.983: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-11T20:30:35Z generation:2 name:name1 resourceVersion:85457 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c579d3bd-c0fe-4bac-a123-3a8ca6a81b07] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 11 20:31:26.078: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-11T20:30:45Z generation:2 name:name2 resourceVersion:85630 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d50a73ef-03e6-4556-b2e4-cf91dfef5b85] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:31:36.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4428" for this suite. 
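For readers reconstructing what this spec exercises: it watches instances of the cluster-scoped WishIHadChosenNoxu custom resource (group mygroup.example.com, version v1beta1, resource "noxus" per the selfLinks above) and asserts the ADDED/MODIFIED/DELETED events just logged. The sketch below opens the same watch with client-go's dynamic client; it is illustrative only and assumes a recent, context-aware client-go rather than the 1.16-era e2e framework code.

// Illustrative sketch: watch the cluster-scoped "noxus" custom resource
// (mygroup.example.com/v1beta1), mirroring the ADDED/MODIFIED/DELETED events
// the spec above logs. Assumes a recent client-go (context-aware calls).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus", // resource name read off the selfLinks in the log
	}
	w, err := client.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// The spec expects ADDED, MODIFIED and DELETED events for name1 and name2.
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}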
Jan 11 20:31:44.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:31:47.916: INFO: namespace crd-watch-4428 deletion completed in 11.567677894s • [SLOW TEST:73.854 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:31:12.058: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4959 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 20:31:13.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371473, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371473, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371473, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371473, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 20:31:16.979: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 11 20:31:19.675: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl 
--server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config attach --namespace=webhook-4959 to-be-attached-pod -i -c=container1' Jan 11 20:31:21.321: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:31:21.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4959" for this suite. Jan 11 20:31:33.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:31:37.070: INFO: namespace webhook-4959 deletion completed in 15.567520321s STEP: Destroying namespace "webhook-4959-markers" for this suite. Jan 11 20:31:45.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:31:48.629: INFO: namespace webhook-4959-markers deletion completed in 11.559456229s [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 • [SLOW TEST:36.930 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for API chunking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:25:08.901: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename chunking STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in chunking-6875 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for API chunking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:50 STEP: creating a large number of resources [It] should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:125 STEP: retrieving the first page Jan 11 20:25:27.232: INFO: Retrieved 40/40 results with rv 80530 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 STEP: retrieving the second page until the token expires Jan 11 20:25:47.325: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:26:07.324: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:26:27.324: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has 
not expired yet Jan 11 20:26:47.324: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:27:07.325: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:27:27.324: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:27:47.323: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:28:07.325: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:28:27.324: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:28:47.325: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:29:07.323: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:29:27.325: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:29:47.324: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:30:07.323: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:30:27.324: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:30:47.340: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:31:07.323: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:31:27.324: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODA1MzAsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 has not expired yet Jan 11 20:31:47.322: INFO: got error The provided continue parameter is too old to display a consistent list result. You can start a new list without the continue parameter, or use the continue token in this response to retrieve the remainder of the results. Continuing with the provided token results in an inconsistent list - objects that were created, modified, or deleted between the time the first chunk was returned and now may show up in the list. 
Jan 11 20:31:47.322: INFO: Retrieved inconsistent continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6LTEsInN0YXJ0IjoidGVtcGxhdGUtMDAzOVx1MDAwMCJ9 STEP: retrieving the second page again with the token received with the error message STEP: retrieving all remaining pages Jan 11 20:31:47.505: INFO: Retrieved 40/40 results with rv 86096 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODYwOTYsInN0YXJ0IjoidGVtcGxhdGUtMDExOVx1MDAwMCJ9 Jan 11 20:31:47.595: INFO: Retrieved 40/40 results with rv 86096 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODYwOTYsInN0YXJ0IjoidGVtcGxhdGUtMDE1OVx1MDAwMCJ9 Jan 11 20:31:47.687: INFO: Retrieved 40/40 results with rv 86096 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODYwOTYsInN0YXJ0IjoidGVtcGxhdGUtMDE5OVx1MDAwMCJ9 Jan 11 20:31:47.777: INFO: Retrieved 40/40 results with rv 86096 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODYwOTYsInN0YXJ0IjoidGVtcGxhdGUtMDIzOVx1MDAwMCJ9 Jan 11 20:31:47.868: INFO: Retrieved 40/40 results with rv 86096 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODYwOTYsInN0YXJ0IjoidGVtcGxhdGUtMDI3OVx1MDAwMCJ9 Jan 11 20:31:47.959: INFO: Retrieved 40/40 results with rv 86096 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODYwOTYsInN0YXJ0IjoidGVtcGxhdGUtMDMxOVx1MDAwMCJ9 Jan 11 20:31:48.051: INFO: Retrieved 40/40 results with rv 86096 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6ODYwOTYsInN0YXJ0IjoidGVtcGxhdGUtMDM1OVx1MDAwMCJ9 Jan 11 20:31:48.141: INFO: Retrieved 40/40 results with rv 86096 and continue [AfterEach] [sig-api-machinery] Servers with support for API chunking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:31:48.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "chunking-6875" for this suite. 
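Context for the chunking spec above: it lists the PodTemplates it created in pages of 40, lets the first continue token outlive etcd compaction, and then resumes from the inconsistent-continue token carried in the "too old" error status. Below is a hedged client-go sketch of that pagination pattern; it assumes a recent, context-aware client-go, and the function name is illustrative.

// Illustrative sketch: paginated listing with limit/continue, including the
// fallback to the inconsistent-continue token when the original token has been
// compacted away (the 410 "too old" error seen in the log above).
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func listAllPodTemplates(cfg *rest.Config, namespace string) error {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	opts := metav1.ListOptions{Limit: 40}
	for {
		page, err := client.CoreV1().PodTemplates(namespace).List(context.TODO(), opts)
		if apierrors.IsResourceExpired(err) {
			// The continue token outlived compaction. The error status carries a fresh
			// token that resumes from the same key, at the cost of an inconsistent
			// (non-snapshot) view of the list, as the error message above explains.
			if st, ok := err.(apierrors.APIStatus); ok && st.Status().ListMeta.Continue != "" {
				opts.Continue = st.Status().ListMeta.Continue
				continue
			}
			return err
		}
		if err != nil {
			return err
		}
		fmt.Printf("retrieved %d results, continue=%q\n", len(page.Items), page.Continue)
		if page.Continue == "" {
			return nil // last page
		}
		opts.Continue = page.Continue
	}
}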
Jan 11 20:31:58.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:32:01.871: INFO: namespace chunking-6875 deletion completed in 13.639327531s • [SLOW TEST:412.971 seconds] [sig-api-machinery] Servers with support for API chunking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:125 ------------------------------ SS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:32:01.876: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename replication-controller STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-6757 STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:32:02.851: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 11 20:32:04.480: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:32:05.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6757" for this suite. 
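For reference, the ReplicationController spec above pairs a quota that caps the namespace at two pods with an RC that asks for more, waits for the RC's ReplicaFailure condition to surface, then scales the RC down so the condition clears. The sketch below defines the two objects involved; only the "condition-test" names and the two-pod limit come from the log, while the replica count and image are assumptions.

// Illustrative definitions of the quota and ReplicationController the spec
// above pits against each other. Replica count and image are assumptions.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func conditionTestObjects(namespace string) (*corev1.ResourceQuota, *corev1.ReplicationController) {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test", Namespace: namespace},
		Spec: corev1.ResourceQuotaSpec{
			// "allows only two pods to run in the current namespace"
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	replicas := int32(3) // assumed; the log only says "more than the allowed pod quota"
	labels := map[string]string{"name": "condition-test"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test", Namespace: namespace},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine", // image used elsewhere in this run
					}},
				},
			},
		},
	}
	// Once created, the RC reports a ReplicaFailure condition while the quota is
	// exceeded; scaling Replicas back down (as the spec does) clears it.
	return quota, rc
}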
Jan 11 20:32:14.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:32:17.326: INFO: namespace replication-controller-6757 deletion completed in 11.573913074s • [SLOW TEST:15.450 seconds] [sig-apps] ReplicationController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:31:48.990: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-2897 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should function for node-Service: http /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:123 STEP: Performing setup for networking test in namespace nettest-2897 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 20:31:51.316: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: Getting node addresses Jan 11 20:32:14.839: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 20:32:15.019: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:32:15.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-2897" for this suite. 
Jan 11 20:32:29.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:32:32.794: INFO: namespace nettest-2897 deletion completed in 17.683607179s S [SKIPPING] [43.804 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should function for node-Service: http [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:123 Requires at least 2 nodes (not -1) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:30:28.290: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename var-expansion STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-642 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:222 Jan 11 20:32:29.411: INFO: Deleting pod "var-expansion-5b683f57-8d71-4804-8be5-37edca46eed5" in namespace "var-expansion-642" Jan 11 20:32:29.502: INFO: Wait up to 5m0s for pod "var-expansion-5b683f57-8d71-4804-8be5-37edca46eed5" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:32:31.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-642" for this suite. 
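Background for the var-expansion spec above: VolumeMount.SubPathExpr expands $(VAR) references from the container's environment into the mount's subPath, and the spec, per its title, asserts that a substituted value containing backticks is rejected so the pod never starts. The sketch below shows only the well-formed form of that feature; the pod layout, image, and names are assumptions, not the spec's actual pod.

// Illustrative sketch of SubPathExpr env expansion, the feature whose failure
// mode (a backtick in the substituted value) the spec above exercises.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func subPathExprPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "sleep 3600"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/logs",
					// Expanded from the container's env at mount time. The failing e2e
					// variant substitutes a value containing backticks here, which the
					// kubelet refuses, so the pod never starts, as the spec expects.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
		},
	}
}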
Jan 11 20:32:42.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:32:45.360: INFO: namespace var-expansion-642 deletion completed in 13.587095587s • [SLOW TEST:137.070 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should fail substituting values in a volume subpath with backticks [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:222 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:85 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:31:35.860: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume-expand STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-expand-6586 STEP: Waiting for a default service account to be provisioned in namespace [It] Verify if offline PVC expansion works /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:154 STEP: deploying csi-hostpath driver Jan 11 20:31:36.699: INFO: creating *v1.ServiceAccount: volume-expand-6586/csi-attacher Jan 11 20:31:36.789: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-expand-6586 Jan 11 20:31:36.789: INFO: Define cluster role external-attacher-runner-volume-expand-6586 Jan 11 20:31:36.878: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-6586 Jan 11 20:31:36.969: INFO: creating *v1.Role: volume-expand-6586/external-attacher-cfg-volume-expand-6586 Jan 11 20:31:37.059: INFO: creating *v1.RoleBinding: volume-expand-6586/csi-attacher-role-cfg Jan 11 20:31:37.149: INFO: creating *v1.ServiceAccount: volume-expand-6586/csi-provisioner Jan 11 20:31:37.239: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-expand-6586 Jan 11 20:31:37.239: INFO: Define cluster role external-provisioner-runner-volume-expand-6586 Jan 11 20:31:37.329: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-6586 Jan 11 20:31:37.419: INFO: creating *v1.Role: volume-expand-6586/external-provisioner-cfg-volume-expand-6586 Jan 11 20:31:37.509: INFO: creating *v1.RoleBinding: volume-expand-6586/csi-provisioner-role-cfg Jan 11 20:31:37.599: INFO: creating *v1.ServiceAccount: volume-expand-6586/csi-snapshotter Jan 11 20:31:37.690: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volume-expand-6586 Jan 11 20:31:37.690: INFO: Define cluster role 
external-snapshotter-runner-volume-expand-6586 Jan 11 20:31:37.781: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-6586 Jan 11 20:31:37.872: INFO: creating *v1.Role: volume-expand-6586/external-snapshotter-leaderelection-volume-expand-6586 Jan 11 20:31:37.962: INFO: creating *v1.RoleBinding: volume-expand-6586/external-snapshotter-leaderelection Jan 11 20:31:38.053: INFO: creating *v1.ServiceAccount: volume-expand-6586/csi-resizer Jan 11 20:31:38.143: INFO: creating *v1.ClusterRole: external-resizer-runner-volume-expand-6586 Jan 11 20:31:38.143: INFO: Define cluster role external-resizer-runner-volume-expand-6586 Jan 11 20:31:38.233: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-6586 Jan 11 20:31:38.323: INFO: creating *v1.Role: volume-expand-6586/external-resizer-cfg-volume-expand-6586 Jan 11 20:31:38.413: INFO: creating *v1.RoleBinding: volume-expand-6586/csi-resizer-role-cfg Jan 11 20:31:38.503: INFO: creating *v1.Service: volume-expand-6586/csi-hostpath-attacher Jan 11 20:31:38.597: INFO: creating *v1.StatefulSet: volume-expand-6586/csi-hostpath-attacher Jan 11 20:31:38.688: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volume-expand-6586 Jan 11 20:31:38.778: INFO: creating *v1.Service: volume-expand-6586/csi-hostpathplugin Jan 11 20:31:38.872: INFO: creating *v1.StatefulSet: volume-expand-6586/csi-hostpathplugin Jan 11 20:31:38.962: INFO: creating *v1.Service: volume-expand-6586/csi-hostpath-provisioner Jan 11 20:31:39.059: INFO: creating *v1.StatefulSet: volume-expand-6586/csi-hostpath-provisioner Jan 11 20:31:39.149: INFO: creating *v1.Service: volume-expand-6586/csi-hostpath-resizer Jan 11 20:31:39.243: INFO: creating *v1.StatefulSet: volume-expand-6586/csi-hostpath-resizer Jan 11 20:31:39.333: INFO: creating *v1.Service: volume-expand-6586/csi-snapshotter Jan 11 20:31:39.427: INFO: creating *v1.StatefulSet: volume-expand-6586/csi-snapshotter Jan 11 20:31:39.517: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-6586 Jan 11 20:31:39.607: INFO: Test running for native CSI Driver, not checking metrics Jan 11 20:31:39.607: INFO: Creating resource for dynamic PV STEP: creating a StorageClass volume-expand-6586-csi-hostpath-volume-expand-6586-sc7g88t STEP: creating a claim Jan 11 20:31:39.790: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath8xq2f] to have phase Bound Jan 11 20:31:39.879: INFO: PersistentVolumeClaim csi-hostpath8xq2f found but phase is Pending instead of Bound. Jan 11 20:31:41.969: INFO: PersistentVolumeClaim csi-hostpath8xq2f found but phase is Pending instead of Bound. 
Jan 11 20:31:44.059: INFO: PersistentVolumeClaim csi-hostpath8xq2f found and phase=Bound (4.269235631s) STEP: Creating a pod with dynamically provisioned volume STEP: Deleting the previously created pod Jan 11 20:31:54.601: INFO: Deleting pod "security-context-5d920b0f-2571-4ada-88a3-5d3cde09d33d" in namespace "volume-expand-6586" Jan 11 20:31:54.692: INFO: Wait up to 5m0s for pod "security-context-5d920b0f-2571-4ada-88a3-5d3cde09d33d" to be fully deleted STEP: Expanding current pvc Jan 11 20:32:04.871: INFO: currentPvcSize {{5368709120 0} {} 5Gi BinarySI}, newSize {{6442450944 0} {} BinarySI} STEP: Waiting for cloudprovider resize to finish STEP: Checking for conditions on pvc STEP: Creating a new pod with same volume STEP: Waiting for file system resize to finish Jan 11 20:32:11.681: INFO: Deleting pod "security-context-e290cb8d-86ee-4587-935f-07b3b855aebe" in namespace "volume-expand-6586" Jan 11 20:32:11.772: INFO: Wait up to 5m0s for pod "security-context-e290cb8d-86ee-4587-935f-07b3b855aebe" to be fully deleted Jan 11 20:32:23.952: INFO: Deleting pod "security-context-5d920b0f-2571-4ada-88a3-5d3cde09d33d" in namespace "volume-expand-6586" STEP: Deleting pod Jan 11 20:32:24.042: INFO: Deleting pod "security-context-5d920b0f-2571-4ada-88a3-5d3cde09d33d" in namespace "volume-expand-6586" STEP: Deleting pod2 Jan 11 20:32:24.131: INFO: Deleting pod "security-context-e290cb8d-86ee-4587-935f-07b3b855aebe" in namespace "volume-expand-6586" STEP: Deleting pvc Jan 11 20:32:24.221: INFO: Deleting PersistentVolumeClaim "csi-hostpath8xq2f" Jan 11 20:32:24.312: INFO: Waiting up to 5m0s for PersistentVolume pvc-9b71baab-ccde-4d9d-9b6c-7fbec014d89b to get deleted Jan 11 20:32:24.402: INFO: PersistentVolume pvc-9b71baab-ccde-4d9d-9b6c-7fbec014d89b found and phase=Bound (89.787896ms) Jan 11 20:32:29.491: INFO: PersistentVolume pvc-9b71baab-ccde-4d9d-9b6c-7fbec014d89b was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 20:32:29.584: INFO: deleting *v1.ServiceAccount: volume-expand-6586/csi-attacher Jan 11 20:32:29.675: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-6586 Jan 11 20:32:29.766: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-6586 Jan 11 20:32:29.858: INFO: deleting *v1.Role: volume-expand-6586/external-attacher-cfg-volume-expand-6586 Jan 11 20:32:29.950: INFO: deleting *v1.RoleBinding: volume-expand-6586/csi-attacher-role-cfg Jan 11 20:32:30.041: INFO: deleting *v1.ServiceAccount: volume-expand-6586/csi-provisioner Jan 11 20:32:30.132: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-expand-6586 Jan 11 20:32:30.223: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-6586 Jan 11 20:32:30.315: INFO: deleting *v1.Role: volume-expand-6586/external-provisioner-cfg-volume-expand-6586 Jan 11 20:32:30.406: INFO: deleting *v1.RoleBinding: volume-expand-6586/csi-provisioner-role-cfg Jan 11 20:32:30.497: INFO: deleting *v1.ServiceAccount: volume-expand-6586/csi-snapshotter Jan 11 20:32:30.588: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volume-expand-6586 Jan 11 20:32:30.680: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-6586 Jan 11 20:32:30.771: INFO: deleting *v1.Role: volume-expand-6586/external-snapshotter-leaderelection-volume-expand-6586 Jan 11 20:32:30.862: INFO: deleting *v1.RoleBinding: volume-expand-6586/external-snapshotter-leaderelection Jan 11 20:32:30.953: INFO: deleting *v1.ServiceAccount: volume-expand-6586/csi-resizer Jan 11 
20:32:31.045: INFO: deleting *v1.ClusterRole: external-resizer-runner-volume-expand-6586 Jan 11 20:32:31.136: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-6586 Jan 11 20:32:31.227: INFO: deleting *v1.Role: volume-expand-6586/external-resizer-cfg-volume-expand-6586 Jan 11 20:32:31.318: INFO: deleting *v1.RoleBinding: volume-expand-6586/csi-resizer-role-cfg Jan 11 20:32:31.409: INFO: deleting *v1.Service: volume-expand-6586/csi-hostpath-attacher Jan 11 20:32:31.505: INFO: deleting *v1.StatefulSet: volume-expand-6586/csi-hostpath-attacher Jan 11 20:32:31.598: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volume-expand-6586 Jan 11 20:32:31.690: INFO: deleting *v1.Service: volume-expand-6586/csi-hostpathplugin Jan 11 20:32:31.789: INFO: deleting *v1.StatefulSet: volume-expand-6586/csi-hostpathplugin Jan 11 20:32:31.880: INFO: deleting *v1.Service: volume-expand-6586/csi-hostpath-provisioner Jan 11 20:32:31.978: INFO: deleting *v1.StatefulSet: volume-expand-6586/csi-hostpath-provisioner Jan 11 20:32:32.169: INFO: deleting *v1.Service: volume-expand-6586/csi-hostpath-resizer Jan 11 20:32:32.265: INFO: deleting *v1.StatefulSet: volume-expand-6586/csi-hostpath-resizer Jan 11 20:32:32.356: INFO: deleting *v1.Service: volume-expand-6586/csi-snapshotter Jan 11 20:32:32.452: INFO: deleting *v1.StatefulSet: volume-expand-6586/csi-snapshotter Jan 11 20:32:32.543: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-6586 [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:32:32.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-expand-6586" for this suite. 
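Context for the volume-expand spec above: with no pod attached it grows the claim from 5Gi to 6Gi, waits for the resize condition on the PVC, and then lets a fresh pod finish the filesystem resize. Below is a hedged sketch of the expansion request itself; the sizes come from the log, while the client calls assume a recent, context-aware client-go and a StorageClass with allowVolumeExpansion enabled.

// Illustrative sketch of requesting offline PVC expansion: bump
// spec.resources.requests.storage and let the CSI resizer do the rest.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func expandPVC(client kubernetes.Interface, namespace, name string) (*corev1.PersistentVolumeClaim, error) {
	pvc, err := client.CoreV1().PersistentVolumeClaims(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	// 5Gi -> 6Gi, matching the currentPvcSize/newSize pair in the log above.
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse("6Gi")
	// After the update the claim typically carries a FileSystemResizePending
	// condition until a pod mounts it again and the node completes the resize.
	return client.CoreV1().PersistentVolumeClaims(namespace).Update(context.TODO(), pvc, metav1.UpdateOptions{})
}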
Jan 11 20:32:48.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:32:52.327: INFO: namespace volume-expand-6586 deletion completed in 19.601927508s • [SLOW TEST:76.466 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 Verify if offline PVC expansion works /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:154 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:32:45.390: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1668 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap configmap-1668/configmap-test-0eb6590d-2dae-45a6-ac4d-6ff380155e7c STEP: Creating a pod to test consume configMaps Jan 11 20:32:46.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb2f5404-0dbb-4a6b-9eae-66c1ab12634e" in namespace "configmap-1668" to be "success or failure" Jan 11 20:32:46.463: INFO: Pod "pod-configmaps-eb2f5404-0dbb-4a6b-9eae-66c1ab12634e": Phase="Pending", Reason="", readiness=false. Elapsed: 89.579697ms Jan 11 20:32:48.554: INFO: Pod "pod-configmaps-eb2f5404-0dbb-4a6b-9eae-66c1ab12634e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179997882s STEP: Saw pod success Jan 11 20:32:48.554: INFO: Pod "pod-configmaps-eb2f5404-0dbb-4a6b-9eae-66c1ab12634e" satisfied condition "success or failure" Jan 11 20:32:48.643: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-eb2f5404-0dbb-4a6b-9eae-66c1ab12634e container env-test: STEP: delete the pod Jan 11 20:32:48.947: INFO: Waiting for pod pod-configmaps-eb2f5404-0dbb-4a6b-9eae-66c1ab12634e to disappear Jan 11 20:32:49.037: INFO: Pod pod-configmaps-eb2f5404-0dbb-4a6b-9eae-66c1ab12634e no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:32:49.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1668" for this suite. 
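For reference, the ConfigMap spec above injects a ConfigMap key into the env-test container through valueFrom.configMapKeyRef and then checks the pod's output. The sketch below shows that wiring; the key, value, image, and object names are illustrative, since the run uses generated names.

// Minimal sketch of consuming a ConfigMap key via an environment variable,
// the wiring the spec above verifies. Names and data are illustrative.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapEnvObjects(namespace string) (*corev1.ConfigMap, *corev1.Pod) {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test", Namespace: namespace},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test", // container name matches the one in the log
				Image:   "docker.io/library/busybox:1.29", // illustrative; the e2e image differs
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	// The spec then waits for the pod to succeed and greps its log for the value.
	return cm, pod
}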
Jan 11 20:32:57.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:33:00.722: INFO: namespace configmap-1668 deletion completed in 11.593529768s • [SLOW TEST:15.331 seconds] [sig-node] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:32:17.343: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename nettest STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nettest-8295 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:35 STEP: Executing a successful http request from the external internet [It] should update endpoints: http /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:159 STEP: Performing setup for networking test in namespace nettest-8295 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 20:32:19.207: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods STEP: Getting node addresses Jan 11 20:32:42.744: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 20:32:42.926: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:32:42.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-8295" for this suite. 
Jan 11 20:32:59.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:33:02.751: INFO: namespace nettest-8295 deletion completed in 19.733529968s S [SKIPPING] [45.408 seconds] [sig-network] Networking /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 Granular Checks: Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:103 should update endpoints: http [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:159 Requires at least 2 nodes (not -1) /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:597 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:32:32.808: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-1547 STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:449 STEP: deploying csi mock driver Jan 11 20:32:34.835: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1547/csi-attacher Jan 11 20:32:34.924: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1547 Jan 11 20:32:34.925: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1547 Jan 11 20:32:35.016: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1547 Jan 11 20:32:35.106: INFO: creating *v1.Role: csi-mock-volumes-1547/external-attacher-cfg-csi-mock-volumes-1547 Jan 11 20:32:35.195: INFO: creating *v1.RoleBinding: csi-mock-volumes-1547/csi-attacher-role-cfg Jan 11 20:32:35.287: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1547/csi-provisioner Jan 11 20:32:35.377: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1547 Jan 11 20:32:35.377: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1547 Jan 11 20:32:35.466: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1547 Jan 11 20:32:35.556: INFO: creating *v1.Role: csi-mock-volumes-1547/external-provisioner-cfg-csi-mock-volumes-1547 Jan 11 20:32:35.645: INFO: creating *v1.RoleBinding: csi-mock-volumes-1547/csi-provisioner-role-cfg Jan 11 20:32:35.734: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1547/csi-resizer Jan 11 20:32:35.824: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1547 Jan 11 20:32:35.824: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1547 Jan 11 20:32:35.913: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1547 Jan 11 20:32:36.002: INFO: creating *v1.Role: 
csi-mock-volumes-1547/external-resizer-cfg-csi-mock-volumes-1547 Jan 11 20:32:36.091: INFO: creating *v1.RoleBinding: csi-mock-volumes-1547/csi-resizer-role-cfg Jan 11 20:32:36.181: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1547/csi-mock Jan 11 20:32:36.271: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1547 Jan 11 20:32:36.360: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1547 Jan 11 20:32:36.450: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1547 Jan 11 20:32:36.539: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1547 Jan 11 20:32:36.629: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1547 Jan 11 20:32:36.717: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1547 Jan 11 20:32:36.807: INFO: creating *v1.StatefulSet: csi-mock-volumes-1547/csi-mockplugin Jan 11 20:32:36.897: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-1547 Jan 11 20:32:36.986: INFO: creating *v1.StatefulSet: csi-mock-volumes-1547/csi-mockplugin-resizer Jan 11 20:32:37.076: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1547" STEP: Creating pod Jan 11 20:32:37.344: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:32:37.435: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-d5xsg] to have phase Bound Jan 11 20:32:37.524: INFO: PersistentVolumeClaim pvc-d5xsg found but phase is Pending instead of Bound. Jan 11 20:32:39.613: INFO: PersistentVolumeClaim pvc-d5xsg found but phase is Pending instead of Bound. Jan 11 20:32:41.702: INFO: PersistentVolumeClaim pvc-d5xsg found and phase=Bound (4.267280753s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Jan 11 20:32:44.508: INFO: Deleting pod "pvc-volume-tester-kdgrq" in namespace "csi-mock-volumes-1547" Jan 11 20:32:44.599: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kdgrq" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-kdgrq Jan 11 20:32:48.956: INFO: Deleting pod "pvc-volume-tester-kdgrq" in namespace "csi-mock-volumes-1547" STEP: Deleting pod pvc-volume-tester-bqmjd Jan 11 20:32:49.046: INFO: Deleting pod "pvc-volume-tester-bqmjd" in namespace "csi-mock-volumes-1547" Jan 11 20:32:49.137: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bqmjd" to be fully deleted STEP: Deleting claim pvc-d5xsg Jan 11 20:32:55.494: INFO: Waiting up to 2m0s for PersistentVolume pvc-23e3c7b3-9832-4e8c-8d46-4b5b6b26ad2e to get deleted Jan 11 20:32:55.583: INFO: PersistentVolume pvc-23e3c7b3-9832-4e8c-8d46-4b5b6b26ad2e found and phase=Bound (88.858078ms) Jan 11 20:32:57.672: INFO: PersistentVolume pvc-23e3c7b3-9832-4e8c-8d46-4b5b6b26ad2e was removed STEP: Deleting storageclass csi-mock-volumes-1547-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 20:32:57.763: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1547/csi-attacher Jan 11 20:32:57.853: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1547 Jan 11 20:32:57.948: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1547 Jan 11 20:32:58.039: INFO: deleting *v1.Role: csi-mock-volumes-1547/external-attacher-cfg-csi-mock-volumes-1547 Jan 11 
20:32:58.130: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1547/csi-attacher-role-cfg Jan 11 20:32:58.221: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1547/csi-provisioner Jan 11 20:32:58.311: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1547 Jan 11 20:32:58.401: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1547 Jan 11 20:32:58.492: INFO: deleting *v1.Role: csi-mock-volumes-1547/external-provisioner-cfg-csi-mock-volumes-1547 Jan 11 20:32:58.584: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1547/csi-provisioner-role-cfg Jan 11 20:32:58.675: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1547/csi-resizer Jan 11 20:32:58.765: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1547 Jan 11 20:32:58.856: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1547 Jan 11 20:32:58.946: INFO: deleting *v1.Role: csi-mock-volumes-1547/external-resizer-cfg-csi-mock-volumes-1547 Jan 11 20:32:59.038: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1547/csi-resizer-role-cfg Jan 11 20:32:59.129: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1547/csi-mock Jan 11 20:32:59.220: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1547 Jan 11 20:32:59.311: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1547 Jan 11 20:32:59.402: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1547 Jan 11 20:32:59.493: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1547 Jan 11 20:32:59.584: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1547 Jan 11 20:32:59.674: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1547 Jan 11 20:32:59.765: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1547/csi-mockplugin Jan 11 20:32:59.856: INFO: deleting *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-1547 Jan 11 20:32:59.947: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1547/csi-mockplugin-resizer [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:00.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-1547" for this suite. 
Jan 11 20:33:08.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:33:11.809: INFO: namespace csi-mock-volumes-1547 deletion completed in 11.590666014s • [SLOW TEST:39.001 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:420 should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:449 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:33:00.723: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-4327 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:46 [It] volume on default medium should have the correct mode using FSGroup /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:67 STEP: Creating a pod to test emptydir volume type on node default medium Jan 11 20:33:01.848: INFO: Waiting up to 5m0s for pod "pod-68b3d7b3-cbbb-47af-ac95-25946ec2d665" in namespace "emptydir-4327" to be "success or failure" Jan 11 20:33:01.938: INFO: Pod "pod-68b3d7b3-cbbb-47af-ac95-25946ec2d665": Phase="Pending", Reason="", readiness=false. Elapsed: 89.789744ms Jan 11 20:33:04.028: INFO: Pod "pod-68b3d7b3-cbbb-47af-ac95-25946ec2d665": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179863331s STEP: Saw pod success Jan 11 20:33:04.028: INFO: Pod "pod-68b3d7b3-cbbb-47af-ac95-25946ec2d665" satisfied condition "success or failure" Jan 11 20:33:04.118: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-68b3d7b3-cbbb-47af-ac95-25946ec2d665 container test-container: STEP: delete the pod Jan 11 20:33:04.309: INFO: Waiting for pod pod-68b3d7b3-cbbb-47af-ac95-25946ec2d665 to disappear Jan 11 20:33:04.398: INFO: Pod pod-68b3d7b3-cbbb-47af-ac95-25946ec2d665 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:04.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4327" for this suite. 
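(Aside: the emptyDir/FSGroup case above can be inspected manually with a pod like the one below. A sketch only; the fsGroup value and mount path are illustrative, and the container simply prints the mode and group that kubelet applied to the default-medium emptyDir.)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-fsgroup
    spec:
      restartPolicy: Never
      securityContext:
        fsGroup: 1234
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume && stat -c '%a %g' /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}        # default medium (node disk), not medium: Memory
    EOF
    kubectl logs emptydir-fsgroup   # shows the mode and gid set for fsGroup 1234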
Jan 11 20:33:12.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:33:16.075: INFO: namespace emptydir-4327 deletion completed in 11.58444764s • [SLOW TEST:15.352 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:44 volume on default medium should have the correct mode using FSGroup /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:67 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:33:02.766: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-5475 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward api env vars Jan 11 20:33:03.843: INFO: Waiting up to 5m0s for pod "downward-api-90a3025f-335a-48a3-8890-edb373be7bfd" in namespace "downward-api-5475" to be "success or failure" Jan 11 20:33:03.933: INFO: Pod "downward-api-90a3025f-335a-48a3-8890-edb373be7bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.309897ms Jan 11 20:33:06.025: INFO: Pod "downward-api-90a3025f-335a-48a3-8890-edb373be7bfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.182051148s STEP: Saw pod success Jan 11 20:33:06.025: INFO: Pod "downward-api-90a3025f-335a-48a3-8890-edb373be7bfd" satisfied condition "success or failure" Jan 11 20:33:06.115: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downward-api-90a3025f-335a-48a3-8890-edb373be7bfd container dapi-container: STEP: delete the pod Jan 11 20:33:06.335: INFO: Waiting for pod downward-api-90a3025f-335a-48a3-8890-edb373be7bfd to disappear Jan 11 20:33:06.425: INFO: Pod downward-api-90a3025f-335a-48a3-8890-edb373be7bfd no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:06.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5475" for this suite. 
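(Aside: the Downward API spec above exposes the container's own limits and requests as env vars; a hand-rolled equivalent looks roughly like this. Pod name, resource values, and env var names are illustrative.)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-resources
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E 'CPU_|MEMORY_'"]
        resources:
          requests: {cpu: 250m, memory: 32Mi}
          limits:   {cpu: 500m, memory: 64Mi}
        env:
        - name: CPU_LIMIT
          valueFrom: {resourceFieldRef: {resource: limits.cpu}}
        - name: MEMORY_LIMIT
          valueFrom: {resourceFieldRef: {resource: limits.memory}}
        - name: CPU_REQUEST
          valueFrom: {resourceFieldRef: {resource: requests.cpu}}
        - name: MEMORY_REQUEST
          valueFrom: {resourceFieldRef: {resource: requests.memory}}
    EOF
    kubectl logs dapi-resources     # prints the four values resolved by the Downward API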
Jan 11 20:33:14.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:33:18.092: INFO: namespace downward-api-5475 deletion completed in 11.575279356s • [SLOW TEST:15.327 seconds] [sig-node] Downward API /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:32:52.333: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1444 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:458 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a PersistentVolumeClaim STEP: Ensuring resource quota status captures persistent volume claim creation STEP: Deleting a PersistentVolumeClaim STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:05.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1444" for this suite. Jan 11 20:33:15.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:33:19.061: INFO: namespace resourcequota-1444 deletion completed in 13.583536879s • [SLOW TEST:26.728 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a persistent volume claim. 
[sig-storage] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:458 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:33:16.090: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1843 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test downward API volume plugin Jan 11 20:33:17.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fb6cc0c-9d4c-4907-aa6e-1837e472883a" in namespace "projected-1843" to be "success or failure" Jan 11 20:33:17.434: INFO: Pod "downwardapi-volume-8fb6cc0c-9d4c-4907-aa6e-1837e472883a": Phase="Pending", Reason="", readiness=false. Elapsed: 89.284472ms Jan 11 20:33:19.524: INFO: Pod "downwardapi-volume-8fb6cc0c-9d4c-4907-aa6e-1837e472883a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179158627s STEP: Saw pod success Jan 11 20:33:19.524: INFO: Pod "downwardapi-volume-8fb6cc0c-9d4c-4907-aa6e-1837e472883a" satisfied condition "success or failure" Jan 11 20:33:19.614: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod downwardapi-volume-8fb6cc0c-9d4c-4907-aa6e-1837e472883a container client-container: STEP: delete the pod Jan 11 20:33:19.802: INFO: Waiting for pod downwardapi-volume-8fb6cc0c-9d4c-4907-aa6e-1837e472883a to disappear Jan 11 20:33:19.891: INFO: Pod downwardapi-volume-8fb6cc0c-9d4c-4907-aa6e-1837e472883a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:19.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1843" for this suite. 
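(Aside: the projected downwardAPI "set mode on item file" behaviour checked above corresponds to a per-item mode on the projected source. A sketch; the 0400 mode, file path, and pod name are illustrative.)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mode
      labels: {app: projected-mode}
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/labels"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: labels
                fieldRef: {fieldPath: metadata.labels}
                mode: 0400        # octal; applies to the projected file itself
    EOF
    kubectl logs projected-mode     # should print 400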
Jan 11 20:33:28.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:33:31.567: INFO: namespace projected-1843 deletion completed in 11.584019574s • [SLOW TEST:15.477 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:31:14.374: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-1157 STEP: Waiting for a default service account to be provisioned in namespace [It] should support restarting containers using file as subpath [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:318 STEP: deploying csi-hostpath driver Jan 11 20:31:15.737: INFO: creating *v1.ServiceAccount: provisioning-1157/csi-attacher Jan 11 20:31:15.827: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-1157 Jan 11 20:31:15.827: INFO: Define cluster role external-attacher-runner-provisioning-1157 Jan 11 20:31:15.917: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-1157 Jan 11 20:31:16.008: INFO: creating *v1.Role: provisioning-1157/external-attacher-cfg-provisioning-1157 Jan 11 20:31:16.098: INFO: creating *v1.RoleBinding: provisioning-1157/csi-attacher-role-cfg Jan 11 20:31:16.188: INFO: creating *v1.ServiceAccount: provisioning-1157/csi-provisioner Jan 11 20:31:16.279: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-1157 Jan 11 20:31:16.279: INFO: Define cluster role external-provisioner-runner-provisioning-1157 Jan 11 20:31:16.369: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-1157 Jan 11 20:31:16.459: INFO: creating *v1.Role: provisioning-1157/external-provisioner-cfg-provisioning-1157 Jan 11 20:31:16.549: INFO: creating *v1.RoleBinding: provisioning-1157/csi-provisioner-role-cfg Jan 11 20:31:16.639: INFO: creating *v1.ServiceAccount: provisioning-1157/csi-snapshotter Jan 11 20:31:16.729: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-1157 Jan 11 20:31:16.729: INFO: Define cluster role external-snapshotter-runner-provisioning-1157 Jan 11 20:31:16.820: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-1157 Jan 11 20:31:16.911: INFO: creating *v1.Role: provisioning-1157/external-snapshotter-leaderelection-provisioning-1157 Jan 11 20:31:17.001: INFO: creating *v1.RoleBinding: provisioning-1157/external-snapshotter-leaderelection Jan 11 
20:31:17.091: INFO: creating *v1.ServiceAccount: provisioning-1157/csi-resizer Jan 11 20:31:17.181: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-1157 Jan 11 20:31:17.182: INFO: Define cluster role external-resizer-runner-provisioning-1157 Jan 11 20:31:17.271: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-1157 Jan 11 20:31:17.362: INFO: creating *v1.Role: provisioning-1157/external-resizer-cfg-provisioning-1157 Jan 11 20:31:17.452: INFO: creating *v1.RoleBinding: provisioning-1157/csi-resizer-role-cfg Jan 11 20:31:17.542: INFO: creating *v1.Service: provisioning-1157/csi-hostpath-attacher Jan 11 20:31:17.636: INFO: creating *v1.StatefulSet: provisioning-1157/csi-hostpath-attacher Jan 11 20:31:17.727: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-1157 Jan 11 20:31:17.817: INFO: creating *v1.Service: provisioning-1157/csi-hostpathplugin Jan 11 20:31:17.912: INFO: creating *v1.StatefulSet: provisioning-1157/csi-hostpathplugin Jan 11 20:31:18.002: INFO: creating *v1.Service: provisioning-1157/csi-hostpath-provisioner Jan 11 20:31:18.097: INFO: creating *v1.StatefulSet: provisioning-1157/csi-hostpath-provisioner Jan 11 20:31:18.188: INFO: creating *v1.Service: provisioning-1157/csi-hostpath-resizer Jan 11 20:31:18.282: INFO: creating *v1.StatefulSet: provisioning-1157/csi-hostpath-resizer Jan 11 20:31:18.372: INFO: creating *v1.Service: provisioning-1157/csi-snapshotter Jan 11 20:31:18.467: INFO: creating *v1.StatefulSet: provisioning-1157/csi-snapshotter Jan 11 20:31:18.557: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-1157 Jan 11 20:31:18.647: INFO: Test running for native CSI Driver, not checking metrics Jan 11 20:31:18.647: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-1157-csi-hostpath-provisioning-1157-sc9kbfc STEP: creating a claim Jan 11 20:31:18.738: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:31:18.830: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath5jm2x] to have phase Bound Jan 11 20:31:18.920: INFO: PersistentVolumeClaim csi-hostpath5jm2x found but phase is Pending instead of Bound. 
Jan 11 20:31:21.010: INFO: PersistentVolumeClaim csi-hostpath5jm2x found and phase=Bound (2.179497047s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-66lr STEP: Failing liveness probe Jan 11 20:31:31.461: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-1157 pod-subpath-test-csi-hostpath-dynamicpv-66lr --container test-container-volume-csi-hostpath-dynamicpv-66lr -- /bin/sh -c rm /probe-volume/probe-file' Jan 11 20:31:32.761: INFO: stderr: "" Jan 11 20:31:32.761: INFO: stdout: "" Jan 11 20:31:32.761: INFO: Pod exec output: STEP: Waiting for container to restart Jan 11 20:31:32.851: INFO: Container test-container-subpath-csi-hostpath-dynamicpv-66lr, restarts: 0 Jan 11 20:31:42.941: INFO: Container test-container-subpath-csi-hostpath-dynamicpv-66lr, restarts: 2 Jan 11 20:31:42.941: INFO: Container has restart count: 2 STEP: Rewriting the file Jan 11 20:31:42.941: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=provisioning-1157 pod-subpath-test-csi-hostpath-dynamicpv-66lr --container test-container-volume-csi-hostpath-dynamicpv-66lr -- /bin/sh -c echo test-after > /probe-volume/probe-file' Jan 11 20:31:44.245: INFO: stderr: "" Jan 11 20:31:44.245: INFO: stdout: "" Jan 11 20:31:44.245: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jan 11 20:32:00.425: INFO: Container has restart count: 3 Jan 11 20:33:02.426: INFO: Container restart has stabilized Jan 11 20:33:02.426: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-66lr" in namespace "provisioning-1157" Jan 11 20:33:02.517: INFO: Wait up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-66lr" to be fully deleted STEP: Deleting pod Jan 11 20:33:18.696: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-66lr" in namespace "provisioning-1157" STEP: Deleting pvc Jan 11 20:33:18.786: INFO: Deleting PersistentVolumeClaim "csi-hostpath5jm2x" Jan 11 20:33:18.876: INFO: Waiting up to 5m0s for PersistentVolume pvc-76ea015f-1828-4278-8efa-1734fbf18558 to get deleted Jan 11 20:33:18.966: INFO: PersistentVolume pvc-76ea015f-1828-4278-8efa-1734fbf18558 found and phase=Bound (89.490727ms) Jan 11 20:33:24.056: INFO: PersistentVolume pvc-76ea015f-1828-4278-8efa-1734fbf18558 was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 20:33:24.147: INFO: deleting *v1.ServiceAccount: provisioning-1157/csi-attacher Jan 11 20:33:24.239: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-1157 Jan 11 20:33:24.330: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-1157 Jan 11 20:33:24.422: INFO: deleting *v1.Role: provisioning-1157/external-attacher-cfg-provisioning-1157 Jan 11 20:33:24.513: INFO: deleting *v1.RoleBinding: provisioning-1157/csi-attacher-role-cfg Jan 11 20:33:24.604: INFO: deleting *v1.ServiceAccount: provisioning-1157/csi-provisioner Jan 11 20:33:24.696: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-1157 Jan 11 20:33:24.787: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-1157 Jan 11 20:33:24.886: INFO: deleting *v1.Role: provisioning-1157/external-provisioner-cfg-provisioning-1157 Jan 11 20:33:24.978: INFO: deleting *v1.RoleBinding: 
provisioning-1157/csi-provisioner-role-cfg Jan 11 20:33:25.069: INFO: deleting *v1.ServiceAccount: provisioning-1157/csi-snapshotter Jan 11 20:33:25.164: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-1157 Jan 11 20:33:25.256: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-1157 Jan 11 20:33:25.347: INFO: deleting *v1.Role: provisioning-1157/external-snapshotter-leaderelection-provisioning-1157 Jan 11 20:33:25.440: INFO: deleting *v1.RoleBinding: provisioning-1157/external-snapshotter-leaderelection Jan 11 20:33:25.532: INFO: deleting *v1.ServiceAccount: provisioning-1157/csi-resizer Jan 11 20:33:25.624: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-1157 Jan 11 20:33:25.716: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-1157 Jan 11 20:33:25.807: INFO: deleting *v1.Role: provisioning-1157/external-resizer-cfg-provisioning-1157 Jan 11 20:33:25.898: INFO: deleting *v1.RoleBinding: provisioning-1157/csi-resizer-role-cfg Jan 11 20:33:25.990: INFO: deleting *v1.Service: provisioning-1157/csi-hostpath-attacher Jan 11 20:33:26.131: INFO: deleting *v1.StatefulSet: provisioning-1157/csi-hostpath-attacher Jan 11 20:33:26.224: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-1157 Jan 11 20:33:26.315: INFO: deleting *v1.Service: provisioning-1157/csi-hostpathplugin Jan 11 20:33:26.411: INFO: deleting *v1.StatefulSet: provisioning-1157/csi-hostpathplugin Jan 11 20:33:26.503: INFO: deleting *v1.Service: provisioning-1157/csi-hostpath-provisioner Jan 11 20:33:26.603: INFO: deleting *v1.StatefulSet: provisioning-1157/csi-hostpath-provisioner Jan 11 20:33:26.695: INFO: deleting *v1.Service: provisioning-1157/csi-hostpath-resizer Jan 11 20:33:26.792: INFO: deleting *v1.StatefulSet: provisioning-1157/csi-hostpath-resizer Jan 11 20:33:26.885: INFO: deleting *v1.Service: provisioning-1157/csi-snapshotter Jan 11 20:33:26.986: INFO: deleting *v1.StatefulSet: provisioning-1157/csi-snapshotter Jan 11 20:33:27.076: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-1157 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:27.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-1157" for this suite. 
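(Aside: the restart-via-subPath scenario driven above follows the pattern sketched below: one container's liveness probe reads a file reached through a subPath mount, while a sidecar sharing the same volume deletes and later recreates that file, exactly as the kubectl exec calls in the log do. This is a simplified sketch against an emptyDir rather than the csi-hostpath PVC, and it assumes kubelet creates the missing subPath directory inside the volume; all names are illustrative.)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-probe
    spec:
      containers:
      - name: probed
        image: busybox
        command: ["sh", "-c", "echo ok > /probe-volume/probe-file; sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /probe-volume
          subPath: probe-dir
        livenessProbe:
          exec: {command: ["cat", "/probe-volume/probe-file"]}
          initialDelaySeconds: 2
          periodSeconds: 1
          failureThreshold: 1
      - name: sidecar
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
    EOF

    # Break the probe, watch the restart count climb, then repair the file
    # (mirrors the two "kubectl exec ... /probe-volume/probe-file" calls in the log).
    kubectl exec subpath-probe -c sidecar -- sh -c 'rm /data/probe-dir/probe-file'
    kubectl get pod subpath-probe -w
    kubectl exec subpath-probe -c sidecar -- sh -c 'echo test-after > /data/probe-dir/probe-file'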
Jan 11 20:33:41.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:33:44.857: INFO: namespace provisioning-1157 deletion completed in 17.596143564s • [SLOW TEST:150.483 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support restarting containers using file as subpath [Slow][LinuxOnly] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:318 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:33:44.872: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9269 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating the pod Jan 11 20:33:48.987: INFO: Successfully updated pod "annotationupdated9c1aa2d-b826-4d46-93ae-60e8a874f206" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:51.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9269" for this suite. 
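(Aside: the "update annotations on modification" spec above relies on kubelet refreshing downwardAPI volume files after the pod's metadata changes. A minimal sketch of the same loop; the pod name and annotation key/values are illustrative.)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotation-watch
      annotations: {builder: alice}
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef: {fieldPath: metadata.annotations}
    EOF

    kubectl annotate pod annotation-watch builder=bob --overwrite
    kubectl logs -f annotation-watch    # the file contents switch to builder="bob" after kubelet's next sync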
Jan 11 20:34:05.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:08.881: INFO: namespace downward-api-9269 deletion completed in 17.609154085s • [SLOW TEST:24.010 seconds] [sig-storage] Downward API volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:33:11.812: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename disruption STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in disruption-7464 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:52 [It] should block an eviction until the PDB is updated to allow it /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:200 STEP: Creating a pdb that targets all three pods in a test replica set STEP: Waiting for the pdb to be processed STEP: First trying to evict a pod which shouldn't be evictable STEP: locating a running pod STEP: Waiting for all pods to be running STEP: Updating the pdb to allow a pod to be evicted STEP: Waiting for the pdb to be processed STEP: Trying to evict the same pod we tried earlier which should now be evictable STEP: Waiting for all pods to be running [AfterEach] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:18.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-7464" for this suite. 
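(Aside: the DisruptionController spec above shows an eviction being refused until the PDB is relaxed. A sketch of the same gate from the CLI; the PDB name, label selector, replica count, and node name are illustrative, and policy/v1beta1 matches this 1.16 cluster.)

    kubectl apply -f - <<'EOF'
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: demo-pdb
    spec:
      minAvailable: 3        # with exactly 3 matching replicas, no voluntary eviction is allowed
      selector:
        matchLabels: {app: demo}
    EOF

    # Evictions (e.g. via kubectl drain, which goes through the Eviction API) are now
    # rejected for the matching pods while the budget cannot be satisfied:
    kubectl drain <node-name> --ignore-daemonsets

    # Relaxing the budget unblocks the same eviction, which is what the spec verifies:
    kubectl patch pdb demo-pdb -p '{"spec":{"minAvailable":2}}'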
Jan 11 20:34:06.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:10.272: INFO: namespace disruption-7464 deletion completed in 51.564404605s • [SLOW TEST:58.460 seconds] [sig-apps] DisruptionController /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should block an eviction until the PDB is updated to allow it /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:200 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:33:18.101: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename job STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-9147 STEP: Waiting for a default service account to be provisioned in namespace [It] should fail when exceeds active deadline /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:130 STEP: Creating a job STEP: Ensuring job past active deadline [AfterEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:22.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9147" for this suite. Jan 11 20:34:10.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:13.696: INFO: namespace job-9147 deletion completed in 51.578016623s • [SLOW TEST:55.595 seconds] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should fail when exceeds active deadline /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:130 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:33:19.097: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename kubectl STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8032 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 [BeforeEach] Update Demo /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 
STEP: creating the initial replication controller Jan 11 20:33:20.548: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config create -f - --namespace=kubectl-8032' Jan 11 20:33:21.522: INFO: stderr: "" Jan 11 20:33:21.522: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 20:33:21.522: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8032' Jan 11 20:33:21.962: INFO: stderr: "" Jan 11 20:33:21.962: INFO: stdout: "update-demo-nautilus-v7dj7 update-demo-nautilus-xpfvp " Jan 11 20:33:21.962: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-v7dj7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8032' Jan 11 20:33:22.404: INFO: stderr: "" Jan 11 20:33:22.404: INFO: stdout: "" Jan 11 20:33:22.404: INFO: update-demo-nautilus-v7dj7 is created but not running Jan 11 20:33:27.404: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8032' Jan 11 20:33:27.891: INFO: stderr: "" Jan 11 20:33:27.891: INFO: stdout: "update-demo-nautilus-v7dj7 update-demo-nautilus-xpfvp " Jan 11 20:33:27.891: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-v7dj7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8032' Jan 11 20:33:28.326: INFO: stderr: "" Jan 11 20:33:28.327: INFO: stdout: "true" Jan 11 20:33:28.327: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-v7dj7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8032' Jan 11 20:33:28.771: INFO: stderr: "" Jan 11 20:33:28.771: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 20:33:28.771: INFO: validating pod update-demo-nautilus-v7dj7 Jan 11 20:33:28.952: INFO: got data: { "image": "nautilus.jpg" } Jan 11 20:33:28.952: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 11 20:33:28.952: INFO: update-demo-nautilus-v7dj7 is verified up and running Jan 11 20:33:28.952: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-xpfvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8032' Jan 11 20:33:29.390: INFO: stderr: "" Jan 11 20:33:29.390: INFO: stdout: "true" Jan 11 20:33:29.390: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-nautilus-xpfvp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8032' Jan 11 20:33:29.822: INFO: stderr: "" Jan 11 20:33:29.822: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 20:33:29.822: INFO: validating pod update-demo-nautilus-xpfvp Jan 11 20:33:29.970: INFO: got data: { "image": "nautilus.jpg" } Jan 11 20:33:29.970: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 20:33:29.970: INFO: update-demo-nautilus-xpfvp is verified up and running STEP: rolling-update to new replication controller Jan 11 20:33:29.974: INFO: scanned /root for discovery docs: Jan 11 20:33:29.974: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8032' Jan 11 20:33:47.041: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 11 20:33:47.041: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 20:33:47.041: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8032' Jan 11 20:33:47.483: INFO: stderr: "" Jan 11 20:33:47.483: INFO: stdout: "update-demo-kitten-6zdsx update-demo-kitten-hq49b " Jan 11 20:33:47.483: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-kitten-6zdsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8032' Jan 11 20:33:47.922: INFO: stderr: "" Jan 11 20:33:47.922: INFO: stdout: "true" Jan 11 20:33:47.922: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-kitten-6zdsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8032' Jan 11 20:33:48.353: INFO: stderr: "" Jan 11 20:33:48.353: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 11 20:33:48.353: INFO: validating pod update-demo-kitten-6zdsx Jan 11 20:33:48.532: INFO: got data: { "image": "kitten.jpg" } Jan 11 20:33:48.532: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 11 20:33:48.532: INFO: update-demo-kitten-6zdsx is verified up and running Jan 11 20:33:48.532: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-kitten-hq49b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8032' Jan 11 20:33:48.953: INFO: stderr: "" Jan 11 20:33:48.953: INFO: stdout: "true" Jan 11 20:33:48.953: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config get pods update-demo-kitten-hq49b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8032' Jan 11 20:33:49.391: INFO: stderr: "" Jan 11 20:33:49.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 11 20:33:49.391: INFO: validating pod update-demo-kitten-hq49b Jan 11 20:33:49.572: INFO: got data: { "image": "kitten.jpg" } Jan 11 20:33:49.572: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 11 20:33:49.572: INFO: update-demo-kitten-hq49b is verified up and running [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:33:49.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8032" for this suite. 
Jan 11 20:34:19.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:23.248: INFO: namespace kubectl-8032 deletion completed in 33.585206876s • [SLOW TEST:64.151 seconds] [sig-cli] Kubectl client /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275 should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:10.279: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-562 STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 11 20:34:11.439: INFO: Waiting up to 5m0s for pod "pod-dc602f7e-2e72-46f5-8deb-13ce7b9e018d" in namespace "emptydir-562" to be "success or failure" Jan 11 20:34:11.529: INFO: Pod "pod-dc602f7e-2e72-46f5-8deb-13ce7b9e018d": Phase="Pending", Reason="", readiness=false. Elapsed: 89.101645ms Jan 11 20:34:13.618: INFO: Pod "pod-dc602f7e-2e72-46f5-8deb-13ce7b9e018d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178508939s STEP: Saw pod success Jan 11 20:34:13.618: INFO: Pod "pod-dc602f7e-2e72-46f5-8deb-13ce7b9e018d" satisfied condition "success or failure" Jan 11 20:34:13.707: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-dc602f7e-2e72-46f5-8deb-13ce7b9e018d container test-container: STEP: delete the pod Jan 11 20:34:13.896: INFO: Waiting for pod pod-dc602f7e-2e72-46f5-8deb-13ce7b9e018d to disappear Jan 11 20:34:13.986: INFO: Pod pod-dc602f7e-2e72-46f5-8deb-13ce7b9e018d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:13.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-562" for this suite. 
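(Aside: the (non-root,0666,tmpfs) case above combines a memory-backed emptyDir, a non-root user, and a 0666 file check. A sketch of that combination; the UID, file path, and pod name are illustrative rather than the e2e mounttest defaults.)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0666-tmpfs
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001       # non-root; emptyDir volumes are created world-writable
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a %u' /test-volume/f && mount | grep /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {medium: Memory}   # tmpfs-backed
    EOF
    kubectl logs emptydir-0666-tmpfs   # expect "666 1001" plus a tmpfs mount entry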
Jan 11 20:34:20.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:23.680: INFO: namespace emptydir-562 deletion completed in 9.604010832s • [SLOW TEST:13.401 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:08.890: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename crd-webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-3545 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 11 20:34:11.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371651, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371651, loc:(*time.Location)(0x84bfb00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371651, loc:(*time.Location)(0x84bfb00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714371651, loc:(*time.Location)(0x84bfb00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 20:34:14.537: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:34:14.627: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:15.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3545" for this suite. Jan 11 20:34:22.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:25.518: INFO: namespace crd-webhook-3545 deletion completed in 9.58809652s [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:16.990 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSS ------------------------------ [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:31:47.925: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-8194 STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing directories when readOnly specified in the volumeSource /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377 STEP: deploying csi-hostpath driver Jan 11 20:31:48.938: INFO: creating *v1.ServiceAccount: provisioning-8194/csi-attacher Jan 11 20:31:49.028: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-8194 Jan 11 20:31:49.028: INFO: Define cluster role external-attacher-runner-provisioning-8194 Jan 11 20:31:49.118: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-8194 Jan 11 20:31:49.208: INFO: creating *v1.Role: provisioning-8194/external-attacher-cfg-provisioning-8194 Jan 11 20:31:49.297: INFO: creating *v1.RoleBinding: provisioning-8194/csi-attacher-role-cfg Jan 11 20:31:49.387: INFO: creating *v1.ServiceAccount: provisioning-8194/csi-provisioner Jan 11 20:31:49.477: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-8194 Jan 11 20:31:49.477: INFO: Define cluster role external-provisioner-runner-provisioning-8194 Jan 11 20:31:49.566: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-8194 Jan 11 20:31:49.656: INFO: creating *v1.Role: provisioning-8194/external-provisioner-cfg-provisioning-8194 Jan 11 20:31:49.746: INFO: creating *v1.RoleBinding: provisioning-8194/csi-provisioner-role-cfg Jan 11 20:31:49.835: INFO: creating 
*v1.ServiceAccount: provisioning-8194/csi-snapshotter Jan 11 20:31:49.926: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-8194 Jan 11 20:31:49.926: INFO: Define cluster role external-snapshotter-runner-provisioning-8194 Jan 11 20:31:50.015: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-8194 Jan 11 20:31:50.105: INFO: creating *v1.Role: provisioning-8194/external-snapshotter-leaderelection-provisioning-8194 Jan 11 20:31:50.196: INFO: creating *v1.RoleBinding: provisioning-8194/external-snapshotter-leaderelection Jan 11 20:31:50.286: INFO: creating *v1.ServiceAccount: provisioning-8194/csi-resizer Jan 11 20:31:50.376: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-8194 Jan 11 20:31:50.377: INFO: Define cluster role external-resizer-runner-provisioning-8194 Jan 11 20:31:50.466: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-8194 Jan 11 20:31:50.556: INFO: creating *v1.Role: provisioning-8194/external-resizer-cfg-provisioning-8194 Jan 11 20:31:50.646: INFO: creating *v1.RoleBinding: provisioning-8194/csi-resizer-role-cfg Jan 11 20:31:50.736: INFO: creating *v1.Service: provisioning-8194/csi-hostpath-attacher Jan 11 20:31:50.829: INFO: creating *v1.StatefulSet: provisioning-8194/csi-hostpath-attacher Jan 11 20:31:50.920: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-8194 Jan 11 20:31:51.009: INFO: creating *v1.Service: provisioning-8194/csi-hostpathplugin Jan 11 20:31:51.104: INFO: creating *v1.StatefulSet: provisioning-8194/csi-hostpathplugin Jan 11 20:31:51.193: INFO: creating *v1.Service: provisioning-8194/csi-hostpath-provisioner Jan 11 20:31:51.288: INFO: creating *v1.StatefulSet: provisioning-8194/csi-hostpath-provisioner Jan 11 20:31:51.378: INFO: creating *v1.Service: provisioning-8194/csi-hostpath-resizer Jan 11 20:31:51.471: INFO: creating *v1.StatefulSet: provisioning-8194/csi-hostpath-resizer Jan 11 20:31:51.561: INFO: creating *v1.Service: provisioning-8194/csi-snapshotter Jan 11 20:31:51.654: INFO: creating *v1.StatefulSet: provisioning-8194/csi-snapshotter Jan 11 20:31:51.744: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-8194 Jan 11 20:31:51.833: INFO: Test running for native CSI Driver, not checking metrics Jan 11 20:31:51.833: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-8194-csi-hostpath-provisioning-8194-sc7d6xz STEP: creating a claim Jan 11 20:31:51.922: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:31:52.014: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath2fc6t] to have phase Bound Jan 11 20:31:52.103: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:31:54.193: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:31:56.282: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:31:58.372: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:00.462: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:02.551: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:04.641: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. 
Jan 11 20:32:06.730: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:08.822: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:10.910: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:13.000: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:15.089: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:17.179: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:19.268: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:21.358: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:23.447: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:25.536: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:27.626: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:29.715: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:31.804: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:33.893: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:35.982: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:38.072: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:40.161: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:42.250: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:44.339: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:46.429: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:48.518: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:50.607: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:52.696: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:54.785: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:56.875: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:32:58.964: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:01.053: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:03.144: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:05.233: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:07.323: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:09.412: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. 
Jan 11 20:33:11.501: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:13.590: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:15.681: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:17.770: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:19.859: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:21.951: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:24.040: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:26.132: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:28.222: INFO: PersistentVolumeClaim csi-hostpath2fc6t found but phase is Pending instead of Bound. Jan 11 20:33:30.311: INFO: PersistentVolumeClaim csi-hostpath2fc6t found and phase=Bound (1m38.296854826s) STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp STEP: Creating a pod to test subpath Jan 11 20:33:30.585: INFO: Waiting up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp" in namespace "provisioning-8194" to be "success or failure" Jan 11 20:33:30.674: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 89.166138ms Jan 11 20:33:32.764: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178948378s Jan 11 20:33:34.853: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267914669s Jan 11 20:33:36.943: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.358219521s Jan 11 20:33:39.088: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.503089357s Jan 11 20:33:41.177: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.592646886s Jan 11 20:33:43.267: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.682343784s Jan 11 20:33:45.357: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.771965344s Jan 11 20:33:47.447: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.861837285s Jan 11 20:33:49.536: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.950806131s Jan 11 20:33:51.625: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.040337158s STEP: Saw pod success Jan 11 20:33:51.625: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp" satisfied condition "success or failure" Jan 11 20:33:51.714: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp container test-container-subpath-csi-hostpath-dynamicpv-fdvp: STEP: delete the pod Jan 11 20:33:51.929: INFO: Waiting for pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp to disappear Jan 11 20:33:52.019: INFO: Pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp no longer exists STEP: Deleting pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp Jan 11 20:33:52.019: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp" in namespace "provisioning-8194" STEP: Creating pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp STEP: Creating a pod to test subpath Jan 11 20:33:52.198: INFO: Waiting up to 5m0s for pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp" in namespace "provisioning-8194" to be "success or failure" Jan 11 20:33:52.287: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 88.917369ms Jan 11 20:33:54.376: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178109677s Jan 11 20:33:56.465: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267352976s Jan 11 20:33:58.555: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357143368s Jan 11 20:34:00.644: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446753699s Jan 11 20:34:02.734: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.536605872s STEP: Saw pod success Jan 11 20:34:02.734: INFO: Pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp" satisfied condition "success or failure" Jan 11 20:34:02.824: INFO: Trying to get logs from node ip-10-250-7-77.ec2.internal pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp container test-container-subpath-csi-hostpath-dynamicpv-fdvp: STEP: delete the pod Jan 11 20:34:03.012: INFO: Waiting for pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp to disappear Jan 11 20:34:03.103: INFO: Pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp no longer exists STEP: Deleting pod pod-subpath-test-csi-hostpath-dynamicpv-fdvp Jan 11 20:34:03.103: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp" in namespace "provisioning-8194" STEP: Deleting pod Jan 11 20:34:03.192: INFO: Deleting pod "pod-subpath-test-csi-hostpath-dynamicpv-fdvp" in namespace "provisioning-8194" STEP: Deleting pvc Jan 11 20:34:03.281: INFO: Deleting PersistentVolumeClaim "csi-hostpath2fc6t" Jan 11 20:34:03.371: INFO: Waiting up to 5m0s for PersistentVolume pvc-1872a4f3-75c9-47c5-8a22-1cc7ea995d0d to get deleted Jan 11 20:34:03.460: INFO: PersistentVolume pvc-1872a4f3-75c9-47c5-8a22-1cc7ea995d0d found and phase=Bound (89.339931ms) Jan 11 20:34:08.550: INFO: PersistentVolume pvc-1872a4f3-75c9-47c5-8a22-1cc7ea995d0d was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 20:34:08.641: INFO: deleting *v1.ServiceAccount: provisioning-8194/csi-attacher Jan 11 20:34:08.731: INFO: deleting *v1.ClusterRole: external-attacher-runner-provisioning-8194 Jan 11 20:34:08.822: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-provisioning-8194 Jan 11 20:34:08.913: INFO: deleting *v1.Role: provisioning-8194/external-attacher-cfg-provisioning-8194 Jan 11 20:34:09.008: INFO: deleting *v1.RoleBinding: provisioning-8194/csi-attacher-role-cfg Jan 11 20:34:09.098: INFO: deleting *v1.ServiceAccount: provisioning-8194/csi-provisioner Jan 11 20:34:09.189: INFO: deleting *v1.ClusterRole: external-provisioner-runner-provisioning-8194 Jan 11 20:34:09.280: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-8194 Jan 11 20:34:09.371: INFO: deleting *v1.Role: provisioning-8194/external-provisioner-cfg-provisioning-8194 Jan 11 20:34:09.462: INFO: deleting *v1.RoleBinding: provisioning-8194/csi-provisioner-role-cfg Jan 11 20:34:09.553: INFO: deleting *v1.ServiceAccount: provisioning-8194/csi-snapshotter Jan 11 20:34:09.643: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-provisioning-8194 Jan 11 20:34:09.736: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-8194 Jan 11 20:34:09.826: INFO: deleting *v1.Role: provisioning-8194/external-snapshotter-leaderelection-provisioning-8194 Jan 11 20:34:09.918: INFO: deleting *v1.RoleBinding: provisioning-8194/external-snapshotter-leaderelection Jan 11 20:34:10.009: INFO: deleting *v1.ServiceAccount: provisioning-8194/csi-resizer Jan 11 20:34:10.099: INFO: deleting *v1.ClusterRole: external-resizer-runner-provisioning-8194 Jan 11 20:34:10.190: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-provisioning-8194 Jan 11 20:34:10.280: INFO: deleting *v1.Role: provisioning-8194/external-resizer-cfg-provisioning-8194 Jan 11 20:34:10.371: INFO: deleting *v1.RoleBinding: provisioning-8194/csi-resizer-role-cfg Jan 11 20:34:10.462: INFO: deleting *v1.Service: provisioning-8194/csi-hostpath-attacher Jan 11 20:34:10.559: INFO: deleting *v1.StatefulSet: provisioning-8194/csi-hostpath-attacher Jan 11 20:34:10.650: INFO: 
deleting *v1beta1.CSIDriver: csi-hostpath-provisioning-8194 Jan 11 20:34:10.741: INFO: deleting *v1.Service: provisioning-8194/csi-hostpathplugin Jan 11 20:34:10.837: INFO: deleting *v1.StatefulSet: provisioning-8194/csi-hostpathplugin Jan 11 20:34:10.933: INFO: deleting *v1.Service: provisioning-8194/csi-hostpath-provisioner Jan 11 20:34:11.028: INFO: deleting *v1.StatefulSet: provisioning-8194/csi-hostpath-provisioner Jan 11 20:34:11.119: INFO: deleting *v1.Service: provisioning-8194/csi-hostpath-resizer Jan 11 20:34:11.216: INFO: deleting *v1.StatefulSet: provisioning-8194/csi-hostpath-resizer Jan 11 20:34:11.306: INFO: deleting *v1.Service: provisioning-8194/csi-snapshotter Jan 11 20:34:11.403: INFO: deleting *v1.StatefulSet: provisioning-8194/csi-snapshotter Jan 11 20:34:11.494: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-8194 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:11.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-8194" for this suite. Jan 11 20:34:23.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:27.255: INFO: namespace provisioning-8194 deletion completed in 15.579966102s • [SLOW TEST:159.330 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (default fs)] subPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should support existing directories when readOnly specified in the volumeSource /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:377 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:23.690: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename job STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-5099 STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks succeed /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:42 STEP: Creating a job STEP: Ensuring job reaches completions STEP: Ensuring pods for job exist [AfterEach] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:30.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5099" for this suite. 
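[Illustrative sketch, not part of the test output] The [sig-apps] Job block above only verifies that a Job whose pods succeed reaches its completion count. A minimal, hedged equivalent (name, image and counts are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: job-completion-demo
spec:
  completions: 3                     # the test likewise waits until N pods have succeeded
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.29
        command: ["sh", "-c", "echo task done"]
EOF
# "Ensuring job reaches completions" corresponds to waiting for the Complete condition:
kubectl wait --for=condition=complete job/job-completion-demo --timeout=120s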
Jan 11 20:34:36.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:40.244: INFO: namespace job-5099 deletion completed in 9.558493425s • [SLOW TEST:16.553 seconds] [sig-apps] Job /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks succeed /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:42 ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:27.277: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-runtime STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-6244 STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:167 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 20:34:30.399: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:30.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6244" for this suite. 
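[Illustrative sketch, not part of the test output] The Container Runtime block above checks that whatever the container writes to its TerminationMessagePath shows up in the container status. Roughly the same behaviour can be reproduced outside the e2e framework with a sketch like this (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo -n DONE > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log   # the default path, spelled out for clarity
    terminationMessagePolicy: File
EOF
# Once the container terminates, the message surfaces in the status, which is what the
# "Expected: &{DONE} to match Container's Termination Message: DONE" check above asserts.
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'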
Jan 11 20:34:36.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:40.244: INFO: namespace container-runtime-6244 deletion completed in 9.571088786s • [SLOW TEST:12.967 seconds] [k8s.io] Container Runtime /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 blackbox test /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 on terminated container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:167 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:33:31.574: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in csi-mock-volumes-1446 STEP: Waiting for a default service account to be provisioned in namespace [It] should be passed when podInfoOnMount=true /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347 STEP: deploying csi mock driver Jan 11 20:33:33.537: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1446/csi-attacher Jan 11 20:33:33.627: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1446 Jan 11 20:33:33.627: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1446 Jan 11 20:33:33.717: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1446 Jan 11 20:33:33.807: INFO: creating *v1.Role: csi-mock-volumes-1446/external-attacher-cfg-csi-mock-volumes-1446 Jan 11 20:33:33.897: INFO: creating *v1.RoleBinding: csi-mock-volumes-1446/csi-attacher-role-cfg Jan 11 20:33:33.987: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1446/csi-provisioner Jan 11 20:33:34.077: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1446 Jan 11 20:33:34.077: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1446 Jan 11 20:33:34.167: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1446 Jan 11 20:33:34.257: INFO: creating *v1.Role: csi-mock-volumes-1446/external-provisioner-cfg-csi-mock-volumes-1446 Jan 11 20:33:34.348: INFO: creating *v1.RoleBinding: csi-mock-volumes-1446/csi-provisioner-role-cfg Jan 11 20:33:34.438: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1446/csi-resizer Jan 11 20:33:34.528: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1446 Jan 11 20:33:34.528: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1446 Jan 11 20:33:34.618: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1446 Jan 11 20:33:34.708: INFO: creating *v1.Role: 
csi-mock-volumes-1446/external-resizer-cfg-csi-mock-volumes-1446 Jan 11 20:33:34.800: INFO: creating *v1.RoleBinding: csi-mock-volumes-1446/csi-resizer-role-cfg Jan 11 20:33:34.890: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1446/csi-mock Jan 11 20:33:34.980: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1446 Jan 11 20:33:35.071: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1446 Jan 11 20:33:35.161: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1446 Jan 11 20:33:35.251: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1446 Jan 11 20:33:35.341: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1446 Jan 11 20:33:35.431: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1446 Jan 11 20:33:35.521: INFO: creating *v1.StatefulSet: csi-mock-volumes-1446/csi-mockplugin Jan 11 20:33:35.612: INFO: creating *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-1446 Jan 11 20:33:35.702: INFO: creating *v1.StatefulSet: csi-mock-volumes-1446/csi-mockplugin-attacher Jan 11 20:33:35.793: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1446" STEP: Creating pod Jan 11 20:33:36.061: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 11 20:33:36.153: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-ch87s] to have phase Bound Jan 11 20:33:36.243: INFO: PersistentVolumeClaim pvc-ch87s found but phase is Pending instead of Bound. Jan 11 20:33:38.333: INFO: PersistentVolumeClaim pvc-ch87s found and phase=Bound (2.179481956s) STEP: checking for CSIInlineVolumes feature Jan 11 20:33:42.988: INFO: Error getting logs for pod csi-inline-volume-6kd7j: the server rejected our request for an unknown reason (get pods csi-inline-volume-6kd7j) STEP: Deleting pod csi-inline-volume-6kd7j in namespace csi-mock-volumes-1446 STEP: Deleting the previously created pod Jan 11 20:33:55.259: INFO: Deleting pod "pvc-volume-tester-vm2g8" in namespace "csi-mock-volumes-1446" Jan 11 20:33:55.349: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vm2g8" to be fully deleted STEP: Checking CSI driver logs Jan 11 20:34:09.628: INFO: CSI driver logs: mock driver started gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1446","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""} gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1446","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46"}}},"Error":""} gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1446","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-1446","max_volumes_per_node":2},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-1446","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46","storage.kubernetes.io/csiProvisionerIdentity":"1578774817890-8081-csi-mock-csi-mock-volumes-1446"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-1446","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46","storage.kubernetes.io/csiProvisionerIdentity":"1578774817890-8081-csi-mock-csi-mock-volumes-1446"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46","storage.kubernetes.io/csiProvisionerIdentity":"1578774817890-8081-csi-mock-csi-mock-volumes-1446"}},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""} gRPCCall: 
{"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46/globalmount","target_path":"/var/lib/kubelet/pods/9604d8c6-9a25-4a1f-935d-7536590e125c/volumes/kubernetes.io~csi/pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pvc-volume-tester-vm2g8","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-1446","csi.storage.k8s.io/pod.uid":"9604d8c6-9a25-4a1f-935d-7536590e125c","csi.storage.k8s.io/serviceAccount.name":"default","name":"pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46","storage.kubernetes.io/csiProvisionerIdentity":"1578774817890-8081-csi-mock-csi-mock-volumes-1446"}},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9604d8c6-9a25-4a1f-935d-7536590e125c/volumes/kubernetes.io~csi/pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46/mount"},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""} gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46/globalmount"},"Response":{},"Error":""} gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-1446"},"Response":{},"Error":""} Jan 11 20:34:09.628: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Jan 11 20:34:09.628: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-vm2g8 Jan 11 20:34:09.628: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1446 Jan 11 20:34:09.628: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 9604d8c6-9a25-4a1f-935d-7536590e125c Jan 11 20:34:09.628: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false Jan 11 20:34:09.628: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}} STEP: Deleting pod pvc-volume-tester-vm2g8 Jan 11 20:34:09.628: INFO: Deleting pod "pvc-volume-tester-vm2g8" in namespace "csi-mock-volumes-1446" STEP: Deleting claim pvc-ch87s Jan 11 20:34:09.897: INFO: Waiting up to 2m0s for PersistentVolume pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46 to get deleted Jan 11 20:34:09.987: INFO: PersistentVolume pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46 found and phase=Released (89.179268ms) Jan 11 20:34:12.077: INFO: PersistentVolume pvc-8f45c616-9e28-4ca7-87d8-b00b7088ce46 was removed STEP: Deleting storageclass csi-mock-volumes-1446-sc STEP: Cleaning up resources STEP: uninstalling csi mock driver Jan 11 20:34:12.168: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1446/csi-attacher Jan 11 20:34:12.259: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1446 Jan 11 20:34:12.352: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1446 Jan 11 20:34:12.442: INFO: deleting *v1.Role: csi-mock-volumes-1446/external-attacher-cfg-csi-mock-volumes-1446 Jan 11 20:34:12.533: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-1446/csi-attacher-role-cfg Jan 11 20:34:12.625: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1446/csi-provisioner Jan 11 20:34:12.715: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1446 Jan 11 20:34:12.806: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1446 Jan 11 20:34:12.897: INFO: deleting *v1.Role: csi-mock-volumes-1446/external-provisioner-cfg-csi-mock-volumes-1446 Jan 11 20:34:12.989: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1446/csi-provisioner-role-cfg Jan 11 20:34:13.080: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1446/csi-resizer Jan 11 20:34:13.172: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1446 Jan 11 20:34:13.263: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1446 Jan 11 20:34:13.354: INFO: deleting *v1.Role: csi-mock-volumes-1446/external-resizer-cfg-csi-mock-volumes-1446 Jan 11 20:34:13.445: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1446/csi-resizer-role-cfg Jan 11 20:34:13.538: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1446/csi-mock Jan 11 20:34:13.629: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1446 Jan 11 20:34:13.720: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1446 Jan 11 20:34:13.811: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1446 Jan 11 20:34:13.902: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1446 Jan 11 20:34:13.993: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1446 Jan 11 20:34:14.084: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1446 Jan 11 20:34:14.175: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1446/csi-mockplugin Jan 11 20:34:14.266: INFO: deleting *v1beta1.CSIDriver: csi-mock-csi-mock-volumes-1446 Jan 11 20:34:14.358: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1446/csi-mockplugin-attacher [AfterEach] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:14.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-1446" for this suite. 
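[Illustrative sketch, not part of the test output] The CSI mock block above hinges on the CSIDriver object's podInfoOnMount field: when it is true, the kubelet adds the csi.storage.k8s.io/pod.name, pod.namespace, pod.uid and serviceAccount.name keys that appear in the NodePublishVolume volume_context of the driver log. A hedged sketch of such a CSIDriver object (the driver name is illustrative; the cluster in this run still registers the v1beta1 API, as the "creating *v1beta1.CSIDriver" lines show):

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1beta1   # storage.k8s.io/v1 on newer clusters
kind: CSIDriver
metadata:
  name: csi-mock-example.example.com
spec:
  attachRequired: true
  podInfoOnMount: true               # kubelet passes pod info in volume_context on NodePublishVolume
EOF
kubectl get csidriver csi-mock-example.example.com -o yaml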
Jan 11 20:34:42.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:46.218: INFO: namespace csi-mock-volumes-1446 deletion completed in 31.589231758s • [SLOW TEST:74.644 seconds] [sig-storage] CSI mock volume /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:297 should be passed when podInfoOnMount=true /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:347 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:40.250: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3718 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap with name projected-configmap-test-volume-map-090037b4-da0e-4d95-9896-bb3bf1eb14a9 STEP: Creating a pod to test consume configMaps Jan 11 20:34:41.230: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a037823-3a03-4ad3-b23e-3659c786cd55" in namespace "projected-3718" to be "success or failure" Jan 11 20:34:41.320: INFO: Pod "pod-projected-configmaps-4a037823-3a03-4ad3-b23e-3659c786cd55": Phase="Pending", Reason="", readiness=false. Elapsed: 89.071113ms Jan 11 20:34:43.409: INFO: Pod "pod-projected-configmaps-4a037823-3a03-4ad3-b23e-3659c786cd55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.178492549s STEP: Saw pod success Jan 11 20:34:43.409: INFO: Pod "pod-projected-configmaps-4a037823-3a03-4ad3-b23e-3659c786cd55" satisfied condition "success or failure" Jan 11 20:34:43.499: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-configmaps-4a037823-3a03-4ad3-b23e-3659c786cd55 container projected-configmap-volume-test: STEP: delete the pod Jan 11 20:34:43.687: INFO: Waiting for pod pod-projected-configmaps-4a037823-3a03-4ad3-b23e-3659c786cd55 to disappear Jan 11 20:34:43.776: INFO: Pod pod-projected-configmaps-4a037823-3a03-4ad3-b23e-3659c786cd55 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:43.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3718" for this suite. 
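[Illustrative sketch, not part of the test output] The Projected configMap block above mounts a ConfigMap through a projected volume, remapping a key to a new path and setting an explicit per-item file mode. A minimal sketch of that wiring (key names, paths and modes are illustrative, not the e2e fixture's values):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected/path/to && cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      defaultMode: 0644
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-1     # the "mappings" part: key remapped to a nested path
            mode: 0400               # the "Item mode set" part: per-item file mode
EOF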
Jan 11 20:34:50.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:53.428: INFO: namespace projected-3718 deletion completed in 9.561349568s • [SLOW TEST:13.178 seconds] [sig-storage] Projected configMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:13.731: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in persistent-local-volumes-test-5653 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:152 [BeforeEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "ip-10-250-27-25.ec2.internal" using path "/tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9" Jan 11 20:34:16.820: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9 && dd if=/dev/zero of=/tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9/file' Jan 11 20:34:18.117: INFO: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0174037 s, 1.2 GB/s\n" Jan 11 20:34:18.117: INFO: stdout: "" Jan 11 20:34:18.117: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:34:19.371: INFO: stderr: "" Jan 11 20:34:19.371: INFO: stdout: "/dev/loop0\n" Jan 11 20:34:19.371: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 
hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9 && chmod o+rwx /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9' Jan 11 20:34:20.747: INFO: stderr: "mke2fs 1.44.5 (15-Dec-2018)\n" Jan 11 20:34:20.747: INFO: stdout: "Discarding device blocks: 1024/20480\b\b\b\b\b\b\b\b\b\b\b \b\b\b\b\b\b\b\b\b\b\bdone \nCreating filesystem with 20480 1k blocks and 5136 inodes\nFilesystem UUID: 5fdaa545-c931-4899-9877-ded76acb9b80\nSuperblock backups stored on blocks: \n\t8193\n\nAllocating group tables: 0/3\b\b\b \b\b\bdone \nWriting inode tables: 0/3\b\b\b \b\b\bdone \nCreating journal (1024 blocks): done\nWriting superblocks and filesystem accounting information: 0/3\b\b\b \b\b\bdone\n\n" STEP: Creating local PVCs and PVs Jan 11 20:34:20.747: INFO: Creating a PV followed by a PVC Jan 11 20:34:20.927: INFO: Waiting for PV local-pvhhc9c to bind to PVC pvc-g9l49 Jan 11 20:34:20.927: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-g9l49] to have phase Bound Jan 11 20:34:21.017: INFO: PersistentVolumeClaim pvc-g9l49 found and phase=Bound (89.565749ms) Jan 11 20:34:21.017: INFO: Waiting up to 3m0s for PersistentVolume local-pvhhc9c to have phase Bound Jan 11 20:34:21.106: INFO: PersistentVolume local-pvhhc9c found and phase=Bound (89.20215ms) [It] should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jan 11 20:34:23.735: INFO: pod "security-context-f5c4141f-76ce-4903-ab91-c2847fc2f926" created on Node "ip-10-250-27-25.ec2.internal" STEP: Writing in pod1 Jan 11 20:34:23.735: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 security-context-f5c4141f-76ce-4903-ab91-c2847fc2f926 -- /bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file' Jan 11 20:34:25.190: INFO: stderr: "" Jan 11 20:34:25.190: INFO: stdout: "" Jan 11 20:34:25.190: INFO: podRWCmdExec out: "" err: Jan 11 20:34:25.190: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 security-context-f5c4141f-76ce-4903-ab91-c2847fc2f926 -- /bin/sh -c cat /mnt/volume1/test-file' Jan 11 20:34:26.498: INFO: stderr: "" Jan 11 20:34:26.498: INFO: stdout: "test-file-content\n" Jan 11 20:34:26.498: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod1 STEP: Deleting pod security-context-f5c4141f-76ce-4903-ab91-c2847fc2f926 in namespace persistent-local-volumes-test-5653 STEP: Creating pod2 STEP: Creating a pod Jan 11 20:34:31.038: INFO: pod "security-context-969f8668-fb44-4ada-baf0-a8fdf239e4d1" created on Node "ip-10-250-27-25.ec2.internal" STEP: Reading in pod2 Jan 11 20:34:31.038: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 security-context-969f8668-fb44-4ada-baf0-a8fdf239e4d1 -- /bin/sh -c cat 
/mnt/volume1/test-file' Jan 11 20:34:32.313: INFO: stderr: "" Jan 11 20:34:32.313: INFO: stdout: "test-file-content\n" Jan 11 20:34:32.313: INFO: podRWCmdExec out: "test-file-content\n" err: STEP: Deleting pod2 STEP: Deleting pod security-context-969f8668-fb44-4ada-baf0-a8fdf239e4d1 in namespace persistent-local-volumes-test-5653 [AfterEach] [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jan 11 20:34:32.404: INFO: Deleting PersistentVolumeClaim "pvc-g9l49" Jan 11 20:34:32.494: INFO: Deleting PersistentVolume "local-pvhhc9c" Jan 11 20:34:32.585: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9' Jan 11 20:34:33.910: INFO: stderr: "" Jan 11 20:34:33.910: INFO: stdout: "" Jan 11 20:34:33.910: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}' Jan 11 20:34:35.213: INFO: stderr: "" Jan 11 20:34:35.213: INFO: stdout: "/dev/loop0\n" STEP: Tear down block device "/dev/loop0" on node "ip-10-250-27-25.ec2.internal" at path /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9/file Jan 11 20:34:35.213: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0' Jan 11 20:34:36.478: INFO: stderr: "" Jan 11 20:34:36.478: INFO: stdout: "" STEP: Removing the test directory /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9 Jan 11 20:34:36.478: INFO: Running '/go/src/k8s.io/kubernetes/platforms/linux/amd64/kubectl --server=https://api.tmw0h-gg6.it.internal.staging.k8s.ondemand.com --kubeconfig=/tmp/tm/kubeconfig/shoot.config exec --namespace=persistent-local-volumes-test-5653 hostexec-ip-10-250-27-25.ec2.internal -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-90705eaf-0876-4190-9c04-318342602dc9' Jan 11 20:34:37.791: INFO: stderr: "" Jan 11 20:34:37.791: INFO: stdout: "" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:37.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5653" for this suite. 
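[Illustrative sketch, not part of the test output] The PersistentVolumes-local block above hand-builds a loop device on the node, formats it ext4, mounts it under /tmp/local-volume-test-..., and then binds a local PV/PVC pair that two pods use one after the other. Outside the e2e harness the PV/PVC half looks roughly like the sketch below; the storage class name, sizes and path are illustrative, and the node name must match a real node (here the one used above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-demo
  local:
    path: /mnt/disks/vol1            # must already exist on the node, e.g. a mounted, formatted device
  nodeAffinity:                      # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["ip-10-250-27-25.ec2.internal"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-demo       # static binding matches PV and PVC on this name
  resources:
    requests:
      storage: 1Gi
EOF
# As in the log, the PVC binds to the pre-created PV; pod1 writes a file into the mounted
# volume and pod2, scheduled onto the same node, reads the same file back.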
Jan 11 20:34:50.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:34:53.545: INFO: namespace persistent-local-volumes-test-5653 deletion completed in 15.571237314s • [SLOW TEST:39.814 seconds] [sig-storage] PersistentVolumes-local /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:46.237: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4778 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating projection with secret that has name projected-secret-test-map-fc921243-dedc-4147-bde1-f83b5523f7a1 STEP: Creating a pod to test consume secrets Jan 11 20:34:47.061: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8372260d-3ccb-449e-8809-0d518c9c147f" in namespace "projected-4778" to be "success or failure" Jan 11 20:34:47.151: INFO: Pod "pod-projected-secrets-8372260d-3ccb-449e-8809-0d518c9c147f": Phase="Pending", Reason="", readiness=false. Elapsed: 89.905277ms Jan 11 20:34:49.241: INFO: Pod "pod-projected-secrets-8372260d-3ccb-449e-8809-0d518c9c147f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.179791005s STEP: Saw pod success Jan 11 20:34:49.241: INFO: Pod "pod-projected-secrets-8372260d-3ccb-449e-8809-0d518c9c147f" satisfied condition "success or failure" Jan 11 20:34:49.330: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-projected-secrets-8372260d-3ccb-449e-8809-0d518c9c147f container projected-secret-volume-test: STEP: delete the pod Jan 11 20:34:49.520: INFO: Waiting for pod pod-projected-secrets-8372260d-3ccb-449e-8809-0d518c9c147f to disappear Jan 11 20:34:49.609: INFO: Pod pod-projected-secrets-8372260d-3ccb-449e-8809-0d518c9c147f no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:49.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4778" for this suite. 
Jan 11 20:34:57.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:35:01.296: INFO: namespace projected-4778 deletion completed in 11.595100113s • [SLOW TEST:15.059 seconds] [sig-storage] Projected secret /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:53.430: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1335 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 [It] should check NodePort out-of-range /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1252 STEP: creating service nodeport-range-test with type NodePort in namespace services-1335 STEP: changing service nodeport-range-test to out-of-range NodePort 26393 STEP: deleting original service nodeport-range-test STEP: creating service nodeport-range-test with out-of-range NodePort 26393 [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:34:54.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1335" for this suite. 
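The out-of-range check above can be reproduced by hand; the service name below is illustrative and 30000-32767 is the apiserver's default --service-node-port-range:

kubectl create service nodeport nodeport-range-example --tcp=80:80
# forcing a port outside the configured service-node-port-range is rejected by the apiserver:
kubectl patch service nodeport-range-example --type=json \
  -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":26393}]'
# expected: an error along the lines of "Invalid value: 26393: provided port is not in the
# valid range"; the spec verifies the same rejection for both update and create.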
Jan 11 20:35:00.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:35:05.721: INFO: namespace services-1335 deletion completed in 11.086911001s [AfterEach] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 • [SLOW TEST:12.291 seconds] [sig-network] Services /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should check NodePort out-of-range /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1252 ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:35:01.325: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename configmap STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8016 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating configMap configmap-8016/configmap-test-e3e75393-dea4-4516-99b8-e64967668062 STEP: Creating a pod to test consume configMaps Jan 11 20:35:02.150: INFO: Waiting up to 5m0s for pod "pod-configmaps-62bf7ae3-f3fa-455c-b567-70bd985dd33a" in namespace "configmap-8016" to be "success or failure" Jan 11 20:35:02.240: INFO: Pod "pod-configmaps-62bf7ae3-f3fa-455c-b567-70bd985dd33a": Phase="Pending", Reason="", readiness=false. Elapsed: 89.554061ms Jan 11 20:35:04.434: INFO: Pod "pod-configmaps-62bf7ae3-f3fa-455c-b567-70bd985dd33a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.283284847s STEP: Saw pod success Jan 11 20:35:04.434: INFO: Pod "pod-configmaps-62bf7ae3-f3fa-455c-b567-70bd985dd33a" satisfied condition "success or failure" Jan 11 20:35:04.633: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-configmaps-62bf7ae3-f3fa-455c-b567-70bd985dd33a container env-test: STEP: delete the pod Jan 11 20:35:05.028: INFO: Waiting for pod pod-configmaps-62bf7ae3-f3fa-455c-b567-70bd985dd33a to disappear Jan 11 20:35:05.225: INFO: Pod pod-configmaps-62bf7ae3-f3fa-455c-b567-70bd985dd33a no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:35:05.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8016" for this suite. 
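A hand-rolled equivalent of the "consumable via the environment" check is sketched below; names, values and the image are illustrative:

kubectl create configmap configmap-env-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.31
    command: ["sh", "-c", "env"]
    env:
    - name: DATA_1                 # environment variable sourced from the ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: configmap-env-example
          key: data-1
EOF
# once the pod has completed:
kubectl logs pod-configmaps-env-example | grep DATA_1    # expect DATA_1=value-1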
Jan 11 20:35:11.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:35:19.425: INFO: namespace configmap-8016 deletion completed in 14.01253379s • [SLOW TEST:18.101 seconds] [sig-node] ConfigMap /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SS ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:53.555: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename port-forwarding STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in port-forwarding-2012 STEP: Waiting for a default service account to be provisioned in namespace [It] should support forwarding over websockets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457 Jan 11 20:34:54.254: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Creating the pod STEP: Sending the expected data to the local port STEP: Reading data from the local port STEP: Verifying logs [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:35:03.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-2012" for this suite. 
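The same forwarding path can be exercised manually; kubectl tunnels the connection through the apiserver's streaming API (SPDY, or WebSocket as in the e2e variant above). Pod name and image below are illustrative:

kubectl run portforward-example --image=nginx:1.17 --restart=Never
kubectl wait --for=condition=Ready pod/portforward-example
kubectl port-forward pod/portforward-example 8080:80 &    # local 8080 -> pod port 80
curl -s http://127.0.0.1:8080/ | head -n 1                # response served by the pod
kill %1                                                   # stop the forwarder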
Jan 11 20:35:15.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:35:21.948: INFO: namespace port-forwarding-2012 deletion completed in 18.472553243s • [SLOW TEST:28.393 seconds] [sig-cli] Kubectl Port forwarding /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on 0.0.0.0 /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:441 should support forwarding over websockets /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457 ------------------------------ SS ------------------------------ Jan 11 20:35:21.952: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:35:05.723: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-5011 STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:35:10.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5011" for this suite. 
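The "should not conflict" spec boils down to mounting two wrapper volumes (a Secret and a ConfigMap, both materialised via emptyDir under the hood) in one pod and checking the pod still starts. A compact equivalent, with illustrative names:

kubectl create secret generic wrapper-secret-example --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-example
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.31
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
    - name: configmap-volume
      mountPath: /etc/configmap-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret-example
  - name: configmap-volume
    configMap:
      name: wrapper-configmap-example
EOF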
Jan 11 20:35:17.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:35:24.189: INFO: namespace emptydir-wrapper-5011 deletion completed in 13.213654683s • [SLOW TEST:18.466 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ Jan 11 20:35:24.190: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:40.245: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename statefulset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9469 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 STEP: Creating service test in namespace statefulset-9469 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating statefulset ss in namespace statefulset-9469 Jan 11 20:34:41.217: INFO: Found 0 stateful pods, waiting for 1 Jan 11 20:34:51.307: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 Jan 11 20:34:51.754: INFO: Deleting all statefulset in ns statefulset-9469 Jan 11 20:34:51.843: INFO: Scaling statefulset ss to 0 Jan 11 20:35:12.295: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 20:35:12.479: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:35:13.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9469" for this suite. 
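The scale subresource exercised above can be poked directly; the sketch assumes a StatefulSet named "ss" in the current namespace (the run above used ss in statefulset-9469, which is deleted right afterwards):

kubectl get --raw /apis/apps/v1/namespaces/default/statefulsets/ss/scale    # read the Scale object
kubectl scale statefulset ss --replicas=2                                   # write via the /scale subresource
kubectl get statefulset ss -o jsonpath='{.spec.replicas}{"\n"}'             # confirm Spec.Replicas was modified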
Jan 11 20:35:19.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:35:26.226: INFO: namespace statefulset-9469 deletion completed in 13.041929491s • [SLOW TEST:45.981 seconds] [sig-apps] StatefulSet /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should have a working scale subresource [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ Jan 11 20:35:26.228: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:35:19.430: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9592 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:106 STEP: Creating a pod to test downward API volume plugin Jan 11 20:35:20.840: INFO: Waiting up to 5m0s for pod "metadata-volume-84bae372-bcc6-48a8-97b1-3c825a9edd9c" in namespace "projected-9592" to be "success or failure" Jan 11 20:35:21.025: INFO: Pod "metadata-volume-84bae372-bcc6-48a8-97b1-3c825a9edd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 184.540812ms Jan 11 20:35:23.202: INFO: Pod "metadata-volume-84bae372-bcc6-48a8-97b1-3c825a9edd9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.361888177s STEP: Saw pod success Jan 11 20:35:23.202: INFO: Pod "metadata-volume-84bae372-bcc6-48a8-97b1-3c825a9edd9c" satisfied condition "success or failure" Jan 11 20:35:23.379: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod metadata-volume-84bae372-bcc6-48a8-97b1-3c825a9edd9c container client-container: STEP: delete the pod Jan 11 20:35:23.758: INFO: Waiting for pod metadata-volume-84bae372-bcc6-48a8-97b1-3c825a9edd9c to disappear Jan 11 20:35:23.939: INFO: Pod metadata-volume-84bae372-bcc6-48a8-97b1-3c825a9edd9c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:35:23.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9592" for this suite. 
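The non-root/fsGroup/defaultMode combination from the last spec, written out as a manifest; user and group IDs, paths and the image are illustrative assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: metadata-volume-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # run as non-root
    fsGroup: 2000          # group ownership applied to the projected files
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "ls -ln /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0440    # the defaultMode the spec checks on the mounted files
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF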
Jan 11 20:35:30.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:35:33.880: INFO: namespace projected-9592 deletion completed in 9.760248126s • [SLOW TEST:14.450 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:106 ------------------------------ Jan 11 20:35:33.881: INFO: Running AfterSuite actions on all nodes [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:85 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:31:35.400: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename volume-expand STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-expand-1264 STEP: Waiting for a default service account to be provisioned in namespace [It] should resize volume when PVC is edited while pod is using it /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:218 STEP: deploying csi-hostpath driver Jan 11 20:31:36.245: INFO: creating *v1.ServiceAccount: volume-expand-1264/csi-attacher Jan 11 20:31:36.335: INFO: creating *v1.ClusterRole: external-attacher-runner-volume-expand-1264 Jan 11 20:31:36.335: INFO: Define cluster role external-attacher-runner-volume-expand-1264 Jan 11 20:31:36.424: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-1264 Jan 11 20:31:36.514: INFO: creating *v1.Role: volume-expand-1264/external-attacher-cfg-volume-expand-1264 Jan 11 20:31:36.604: INFO: creating *v1.RoleBinding: volume-expand-1264/csi-attacher-role-cfg Jan 11 20:31:36.693: INFO: creating *v1.ServiceAccount: volume-expand-1264/csi-provisioner Jan 11 20:31:36.783: INFO: creating *v1.ClusterRole: external-provisioner-runner-volume-expand-1264 Jan 11 20:31:36.783: INFO: Define cluster role external-provisioner-runner-volume-expand-1264 Jan 11 20:31:36.873: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-1264 Jan 11 20:31:36.963: INFO: creating *v1.Role: volume-expand-1264/external-provisioner-cfg-volume-expand-1264 Jan 11 20:31:37.053: INFO: creating *v1.RoleBinding: volume-expand-1264/csi-provisioner-role-cfg Jan 11 20:31:37.142: INFO: creating *v1.ServiceAccount: volume-expand-1264/csi-snapshotter Jan 11 20:31:37.232: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volume-expand-1264 Jan 11 20:31:37.232: INFO: Define cluster role 
external-snapshotter-runner-volume-expand-1264 Jan 11 20:31:37.322: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-1264 Jan 11 20:31:37.411: INFO: creating *v1.Role: volume-expand-1264/external-snapshotter-leaderelection-volume-expand-1264 Jan 11 20:31:37.501: INFO: creating *v1.RoleBinding: volume-expand-1264/external-snapshotter-leaderelection Jan 11 20:31:37.590: INFO: creating *v1.ServiceAccount: volume-expand-1264/csi-resizer Jan 11 20:31:37.680: INFO: creating *v1.ClusterRole: external-resizer-runner-volume-expand-1264 Jan 11 20:31:37.680: INFO: Define cluster role external-resizer-runner-volume-expand-1264 Jan 11 20:31:37.770: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-1264 Jan 11 20:31:37.859: INFO: creating *v1.Role: volume-expand-1264/external-resizer-cfg-volume-expand-1264 Jan 11 20:31:37.949: INFO: creating *v1.RoleBinding: volume-expand-1264/csi-resizer-role-cfg Jan 11 20:31:38.039: INFO: creating *v1.Service: volume-expand-1264/csi-hostpath-attacher Jan 11 20:31:38.133: INFO: creating *v1.StatefulSet: volume-expand-1264/csi-hostpath-attacher Jan 11 20:31:38.223: INFO: creating *v1beta1.CSIDriver: csi-hostpath-volume-expand-1264 Jan 11 20:31:38.313: INFO: creating *v1.Service: volume-expand-1264/csi-hostpathplugin Jan 11 20:31:38.406: INFO: creating *v1.StatefulSet: volume-expand-1264/csi-hostpathplugin Jan 11 20:31:38.497: INFO: creating *v1.Service: volume-expand-1264/csi-hostpath-provisioner Jan 11 20:31:38.590: INFO: creating *v1.StatefulSet: volume-expand-1264/csi-hostpath-provisioner Jan 11 20:31:38.680: INFO: creating *v1.Service: volume-expand-1264/csi-hostpath-resizer Jan 11 20:31:38.773: INFO: creating *v1.StatefulSet: volume-expand-1264/csi-hostpath-resizer Jan 11 20:31:38.862: INFO: creating *v1.Service: volume-expand-1264/csi-snapshotter Jan 11 20:31:38.956: INFO: creating *v1.StatefulSet: volume-expand-1264/csi-snapshotter Jan 11 20:31:39.045: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-1264 Jan 11 20:31:39.135: INFO: Test running for native CSI Driver, not checking metrics Jan 11 20:31:39.135: INFO: Creating resource for dynamic PV STEP: creating a StorageClass volume-expand-1264-csi-hostpath-volume-expand-1264-sc56rsv STEP: creating a claim Jan 11 20:31:39.314: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathk4lln] to have phase Bound Jan 11 20:31:39.403: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:31:41.492: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:31:43.581: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:31:45.671: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:31:47.761: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:31:49.850: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:31:51.940: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:31:54.029: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:31:56.119: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:31:58.208: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. 
Jan 11 20:32:00.297 through Jan 11 20:34:07.763: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. (identical poll message repeated at ~2s intervals while waiting for the claim to bind)
Jan 11 20:34:09.852: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:34:11.942: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:34:14.031: INFO: PersistentVolumeClaim csi-hostpathk4lln found but phase is Pending instead of Bound. Jan 11 20:34:16.121: INFO: PersistentVolumeClaim csi-hostpathk4lln found and phase=Bound (2m36.807203911s) STEP: Creating a pod with dynamically provisioned volume STEP: Expanding current pvc Jan 11 20:34:22.658: INFO: currentPvcSize {{5368709120 0} {} 5Gi BinarySI}, newSize {{6442450944 0} {} BinarySI} STEP: Waiting for cloudprovider resize to finish STEP: Waiting for file system resize to finish Jan 11 20:35:29.107: INFO: Deleting pod "security-context-ee5359a6-9781-4dc5-8a22-cabeb4436184" in namespace "volume-expand-1264" Jan 11 20:35:29.197: INFO: Wait up to 5m0s for pod "security-context-ee5359a6-9781-4dc5-8a22-cabeb4436184" to be fully deleted STEP: Deleting pod Jan 11 20:35:33.376: INFO: Deleting pod "security-context-ee5359a6-9781-4dc5-8a22-cabeb4436184" in namespace "volume-expand-1264" STEP: Deleting pvc Jan 11 20:35:33.465: INFO: Deleting PersistentVolumeClaim "csi-hostpathk4lln" Jan 11 20:35:33.556: INFO: Waiting up to 5m0s for PersistentVolume pvc-d6297dff-98b6-4e7d-8a12-15d18d9ab5fa to get deleted Jan 11 20:35:33.644: INFO: PersistentVolume pvc-d6297dff-98b6-4e7d-8a12-15d18d9ab5fa was removed STEP: Deleting sc STEP: uninstalling csi-hostpath driver Jan 11 20:35:33.735: INFO: deleting *v1.ServiceAccount: volume-expand-1264/csi-attacher Jan 11 20:35:33.826: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-1264 Jan 11 20:35:33.916: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-1264 Jan 11 20:35:34.007: INFO: deleting *v1.Role: volume-expand-1264/external-attacher-cfg-volume-expand-1264 Jan 11 20:35:34.097: INFO: deleting *v1.RoleBinding: volume-expand-1264/csi-attacher-role-cfg Jan 11 20:35:34.188: INFO: deleting *v1.ServiceAccount: volume-expand-1264/csi-provisioner Jan 11 20:35:34.279: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-expand-1264 Jan 11 20:35:34.369: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-1264 Jan 11 20:35:34.460: INFO: deleting *v1.Role: volume-expand-1264/external-provisioner-cfg-volume-expand-1264 Jan 11 20:35:34.550: INFO: deleting *v1.RoleBinding: volume-expand-1264/csi-provisioner-role-cfg Jan 11 20:35:34.640: INFO: deleting *v1.ServiceAccount: volume-expand-1264/csi-snapshotter Jan 11 20:35:34.731: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volume-expand-1264 Jan 11 20:35:34.822: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-1264 Jan 11 20:35:34.912: INFO: deleting *v1.Role: volume-expand-1264/external-snapshotter-leaderelection-volume-expand-1264 Jan 11 20:35:35.002: INFO: deleting *v1.RoleBinding: volume-expand-1264/external-snapshotter-leaderelection Jan 11 20:35:35.093: INFO: deleting *v1.ServiceAccount: volume-expand-1264/csi-resizer Jan 11 20:35:35.184: INFO: deleting *v1.ClusterRole: external-resizer-runner-volume-expand-1264 Jan 11 20:35:35.275: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-1264 Jan 11 20:35:35.366: INFO: deleting *v1.Role: volume-expand-1264/external-resizer-cfg-volume-expand-1264 Jan 11 20:35:35.456: INFO: deleting *v1.RoleBinding: volume-expand-1264/csi-resizer-role-cfg Jan 11 20:35:35.547: INFO: deleting *v1.Service: 
volume-expand-1264/csi-hostpath-attacher Jan 11 20:35:35.641: INFO: deleting *v1.StatefulSet: volume-expand-1264/csi-hostpath-attacher Jan 11 20:35:35.733: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volume-expand-1264 Jan 11 20:35:35.823: INFO: deleting *v1.Service: volume-expand-1264/csi-hostpathplugin Jan 11 20:35:35.917: INFO: deleting *v1.StatefulSet: volume-expand-1264/csi-hostpathplugin Jan 11 20:35:36.008: INFO: deleting *v1.Service: volume-expand-1264/csi-hostpath-provisioner Jan 11 20:35:36.102: INFO: deleting *v1.StatefulSet: volume-expand-1264/csi-hostpath-provisioner Jan 11 20:35:36.193: INFO: deleting *v1.Service: volume-expand-1264/csi-hostpath-resizer Jan 11 20:35:36.287: INFO: deleting *v1.StatefulSet: volume-expand-1264/csi-hostpath-resizer Jan 11 20:35:36.378: INFO: deleting *v1.Service: volume-expand-1264/csi-snapshotter Jan 11 20:35:36.471: INFO: deleting *v1.StatefulSet: volume-expand-1264/csi-snapshotter Jan 11 20:35:36.562: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-1264 [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:35:36.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-expand-1264" for this suite. Jan 11 20:35:49.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:35:52.243: INFO: namespace volume-expand-1264 deletion completed in 15.496815677s • [SLOW TEST:256.842 seconds] [sig-storage] CSI Volumes /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Driver: csi-hostpath] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:62 [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92 should resize volume when PVC is edited while pod is using it /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:218 ------------------------------ Jan 11 20:35:52.246: INFO: Running AfterSuite actions on all nodes Jan 11 20:35:52.246: INFO: deleting *v1.ServiceAccount: volume-expand-8983/csi-attacher Jan 11 20:35:52.335: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-8983 Jan 11 20:35:52.426: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-8983 Jan 11 20:35:52.516: INFO: deleting *v1.Role: volume-expand-8983/external-attacher-cfg-volume-expand-8983 Jan 11 20:35:52.605: INFO: deleting *v1.RoleBinding: volume-expand-8983/csi-attacher-role-cfg Jan 11 20:35:52.694: INFO: deleting *v1.ServiceAccount: volume-expand-8983/csi-provisioner Jan 11 20:35:52.783: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volume-expand-8983 Jan 11 20:35:52.874: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volume-expand-8983 Jan 11 20:35:52.964: INFO: deleting *v1.Role: volume-expand-8983/external-provisioner-cfg-volume-expand-8983 Jan 11 20:35:53.053: INFO: deleting *v1.RoleBinding: volume-expand-8983/csi-provisioner-role-cfg Jan 
11 20:35:53.142: INFO: deleting *v1.ServiceAccount: volume-expand-8983/csi-snapshotter Jan 11 20:35:53.231: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volume-expand-8983 Jan 11 20:35:53.321: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volume-expand-8983 Jan 11 20:35:53.411: INFO: deleting *v1.Role: volume-expand-8983/external-snapshotter-leaderelection-volume-expand-8983 Jan 11 20:35:53.501: INFO: deleting *v1.RoleBinding: volume-expand-8983/external-snapshotter-leaderelection Jan 11 20:35:53.590: INFO: deleting *v1.ServiceAccount: volume-expand-8983/csi-resizer Jan 11 20:35:53.679: INFO: deleting *v1.ClusterRole: external-resizer-runner-volume-expand-8983 Jan 11 20:35:53.769: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volume-expand-8983 Jan 11 20:35:53.860: INFO: deleting *v1.Role: volume-expand-8983/external-resizer-cfg-volume-expand-8983 Jan 11 20:35:53.949: INFO: deleting *v1.RoleBinding: volume-expand-8983/csi-resizer-role-cfg Jan 11 20:35:54.038: INFO: deleting *v1.Service: volume-expand-8983/csi-hostpath-attacher Jan 11 20:35:54.128: INFO: deleting *v1.StatefulSet: volume-expand-8983/csi-hostpath-attacher Jan 11 20:35:54.217: INFO: deleting *v1beta1.CSIDriver: csi-hostpath-volume-expand-8983 Jan 11 20:35:54.307: INFO: deleting *v1.Service: volume-expand-8983/csi-hostpathplugin Jan 11 20:35:54.396: INFO: deleting *v1.StatefulSet: volume-expand-8983/csi-hostpathplugin Jan 11 20:35:54.485: INFO: deleting *v1.Service: volume-expand-8983/csi-hostpath-provisioner Jan 11 20:35:54.574: INFO: deleting *v1.StatefulSet: volume-expand-8983/csi-hostpath-provisioner Jan 11 20:35:54.663: INFO: deleting *v1.Service: volume-expand-8983/csi-hostpath-resizer Jan 11 20:35:54.753: INFO: deleting *v1.StatefulSet: volume-expand-8983/csi-hostpath-resizer Jan 11 20:35:54.842: INFO: deleting *v1.Service: volume-expand-8983/csi-snapshotter Jan 11 20:35:54.931: INFO: deleting *v1.StatefulSet: volume-expand-8983/csi-snapshotter Jan 11 20:35:55.020: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-8983 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:08:51.179: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-7628 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:722 STEP: getting restart delay when capped Jan 11 20:20:14.880: INFO: getRestartDelay: restartCount = 7, finishedAt=2020-01-11 20:15:07 +0000 UTC restartedAt=2020-01-11 20:20:13 +0000 UTC (5m6s) Jan 11 20:25:25.551: INFO: getRestartDelay: restartCount = 8, finishedAt=2020-01-11 20:20:18 +0000 UTC restartedAt=2020-01-11 20:25:23 +0000 UTC (5m5s) Jan 11 20:30:32.951: INFO: getRestartDelay: restartCount = 9, finishedAt=2020-01-11 20:25:28 +0000 UTC restartedAt=2020-01-11 20:30:31 +0000 UTC (5m3s) STEP: getting restart delay after a capped delay Jan 11 
20:35:47.056: INFO: getRestartDelay: restartCount = 10, finishedAt=2020-01-11 20:30:36 +0000 UTC restartedAt=2020-01-11 20:35:45 +0000 UTC (5m9s) [AfterEach] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:35:47.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7628" for this suite. Jan 11 20:36:15.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:36:25.319: INFO: namespace pods-7628 deletion completed in 38.172574756s • [SLOW TEST:1654.140 seconds] [k8s.io] Pods /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:722 ------------------------------ Jan 11 20:36:25.320: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:23.265: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename var-expansion STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2764 STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:519 STEP: Creating pod var-expansion-8da48ecf-39e3-45fd-b4e5-74b580159b5a STEP: updating the pod Jan 11 20:34:28.908: INFO: Successfully updated pod "var-expansion-8da48ecf-39e3-45fd-b4e5-74b580159b5a" STEP: waiting for pod and container restart STEP: Failing liveness probe Jan 11 20:34:28.998: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-2764 PodName:var-expansion-8da48ecf-39e3-45fd-b4e5-74b580159b5a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:34:28.998: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:34:29.823: INFO: Pod exec output: / STEP: Waiting for container to restart Jan 11 20:34:29.913: INFO: Container dapi-container, restarts: 0 Jan 11 20:34:40.003: INFO: Container dapi-container, restarts: 0 Jan 11 20:34:50.003: INFO: Container dapi-container, restarts: 0 Jan 11 20:35:00.003: INFO: Container dapi-container, restarts: 0 Jan 11 20:35:10.101: INFO: Container dapi-container, restarts: 1 Jan 11 20:35:10.101: INFO: Container has restart count: 1 STEP: Rewriting the file Jan 11 20:35:10.288: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-2764 PodName:var-expansion-8da48ecf-39e3-45fd-b4e5-74b580159b5a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:35:10.288: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:35:11.823: INFO: Pod 
exec output: STEP: Waiting for container to stop restarting Jan 11 20:35:36.097: INFO: Container has restart count: 2 Jan 11 20:36:38.099: INFO: Container restart has stabilized STEP: test for subpath mounted with old value Jan 11 20:36:38.189: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-2764 PodName:var-expansion-8da48ecf-39e3-45fd-b4e5-74b580159b5a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:36:38.189: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:36:39.126: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-2764 PodName:var-expansion-8da48ecf-39e3-45fd-b4e5-74b580159b5a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 20:36:39.126: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:36:39.991: INFO: Deleting pod "var-expansion-8da48ecf-39e3-45fd-b4e5-74b580159b5a" in namespace "var-expansion-2764" Jan 11 20:36:40.082: INFO: Wait up to 5m0s for pod "var-expansion-8da48ecf-39e3-45fd-b4e5-74b580159b5a" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:37:12.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2764" for this suite. Jan 11 20:37:18.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:37:21.946: INFO: namespace var-expansion-2764 deletion completed in 9.593235925s • [SLOW TEST:178.682 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:519 ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:34:25.887: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9043 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:167 STEP: Creating pod liveness-ab2ddbd9-1c18-4740-87d7-49b7fd8f163f in namespace container-probe-9043 Jan 11 20:34:30.800: INFO: Started pod liveness-ab2ddbd9-1c18-4740-87d7-49b7fd8f163f in namespace container-probe-9043 STEP: checking the pod's current state and verifying 
that restartCount is present Jan 11 20:34:30.890: INFO: Initial restart count of pod liveness-ab2ddbd9-1c18-4740-87d7-49b7fd8f163f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 11 20:38:32.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9043" for this suite. Jan 11 20:38:38.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:38:41.968: INFO: namespace container-probe-9043 deletion completed in 9.593692879s • [SLOW TEST:256.080 seconds] [k8s.io] Probing container /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:167 ------------------------------ Jan 11 20:38:41.969: INFO: Running AfterSuite actions on all nodes Jan 11 20:37:21.948: INFO: Running AfterSuite actions on all nodes Jan 11 20:38:42.017: INFO: Running AfterSuite actions on node 1 Jan 11 20:38:42.017: INFO: Skipping dumping logs from cluster Summarizing 6 Failures: [Fail] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand [It] should not allow expansion of pvcs without AllowVolumeExpansion property /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:366 [Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 [Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 [Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 [Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 [Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 Ran 612 of 4731 Specs in 3853.924 seconds FAIL! -- 611 Passed | 1 Failed | 0 Pending | 4119 Skipped Ginkgo ran 1 suite in 1h4m26.303304545s Test Suite Failed Conformance test: not doing test setup. 
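The summary above lists six failure entries against a final tally of 1 Failed; re-running a single spec in isolation is the usual way to follow up. The upstream e2e.test binary can be focused on one spec, sketched below; the kubeconfig path matches this run, while the binary path, provider value and focus string are assumptions about a typical invocation:

./e2e.test --provider=skeleton \
  --kubeconfig=/tmp/tm/kubeconfig/shoot.config \
  --ginkgo.focus='HostPath should give a volume the correct mode'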
Running Suite: Kubernetes e2e suite =================================== Random Seed: 1578775123 - Will randomize all specs Will run 4731 specs Running in parallel across 8 nodes Jan 11 20:38:53.498: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Deleting namespaces I0111 20:38:53.884092 21557 suites.go:70] Waiting for deletion of the following namespaces: [] STEP: Waiting for namespaces to vanish Jan 11 20:38:55.978: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jan 11 20:38:56.249: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 11 20:38:56.628: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 11 20:38:56.628: INFO: expected 12 pod replicas in namespace 'kube-system', 12 are Running and Ready. Jan 11 20:38:56.628: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 11 20:38:56.723: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) Jan 11 20:38:56.723: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 11 20:38:56.723: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-exporter' (0 seconds elapsed) Jan 11 20:38:56.723: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-problem-detector' (0 seconds elapsed) Jan 11 20:38:56.723: INFO: e2e test version: v1.16.4 Jan 11 20:38:56.812: INFO: kube-apiserver version: v1.16.4 Jan 11 20:38:56.813: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:38:56.905: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Jan 11 20:38:56.818: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:38:57.184: INFO: Cluster IP family: ipv4 Jan 11 20:38:56.817: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:38:57.184: INFO: Cluster IP family: ipv4 Jan 11 20:38:56.819: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:38:57.184: INFO: Cluster IP family: ipv4 Jan 11 20:38:56.820: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:38:57.185: INFO: Cluster IP family: ipv4 Jan 11 20:38:56.818: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:38:57.184: INFO: Cluster IP family: ipv4 Jan 11 20:38:56.817: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:38:57.185: INFO: Cluster IP family: ipv4 Jan 11 20:38:56.817: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config Jan 11 20:38:57.183: INFO: Cluster IP family: ipv4 
------------------------------ Jan 11 20:39:00.295: INFO: Running AfterSuite actions on all nodes SS ------------------------------ Jan 11 20:39:00.296: INFO: Running AfterSuite actions on all nodes Jan 11 20:39:00.296: INFO: Running AfterSuite actions on all nodes S ------------------------------ Jan 11 20:39:00.296: INFO: Running AfterSuite actions on all nodes Jan 11 20:39:00.297: INFO: Running AfterSuite actions on all nodes Jan 11 20:39:00.298: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:38:59.897: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath Jan 11 20:39:00.273: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Jan 11 20:39:00.629: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-5863 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 20:39:01.093: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5863" to be "success or failure" Jan 11 20:39:01.182: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.657608ms Jan 11 20:39:03.273: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.179741326s STEP: Saw pod success Jan 11 20:39:03.273: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 20:39:03.362: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 20:39:03.692: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 20:39:03.782: INFO: Pod pod-host-path-test no longer exists Jan 11 20:39:03.782: FAIL: Unexpected error: <*errors.errorString | 0xc0031ec200>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-5863". STEP: Found 7 events. Jan 11 20:39:03.872: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-5863/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 20:39:03.872: INFO: At 2020-01-11 20:39:01 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:39:03.872: INFO: At 2020-01-11 20:39:01 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 20:39:03.872: INFO: At 2020-01-11 20:39:01 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 20:39:03.872: INFO: At 2020-01-11 20:39:01 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:39:03.873: INFO: At 2020-01-11 20:39:01 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 20:39:03.873: INFO: At 2020-01-11 20:39:02 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 20:39:03.962: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 20:39:03.962: INFO: Jan 11 20:39:04.145: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 20:39:04.235: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 93021 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-ephemeral-3918":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-1550":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-181":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5271":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5738":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-8445":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-6586":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-8205":"ip-10-250-27-25.ec2.internal","csi-hostpath-volumemode-2239":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-104":"csi-mock-csi-mock-volumes-104","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-1547":"csi-mock-csi-mock-volumes-1547","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-3620":"csi-mock-csi-mock-volumes-3620","csi-mock-csi-mock-volumes-4203":"csi-mock-csi-mock-volumes-4203","csi-mock-csi-mock-volumes-4249":"csi-mock-csi-mock-volumes-4249","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-7446":"csi-mock-csi-mock-volumes-7446","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:38:58 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:38:58 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:38:58 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:38:58 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 
gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 
redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d 
gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 busybox:1.27],SizeBytes:1129289,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8445^15e49ff2-34ae-11ea-98fd-0e6a2517c83d],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:39:04.236: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 20:39:04.325: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 20:39:04.431: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:04.431: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 20:39:04.431: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:04.431: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 20:39:04.431: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:04.431: INFO: Container node-exporter ready: true, restart count 0 Jan 11 20:39:04.431: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 20:39:04.431: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:39:04.431: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:39:04.431: INFO: Container calico-node ready: true, restart count 0 W0111 20:39:04.522965 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
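Note on the failure above: the values "dtrwxrwx" (expected) and "dgtrwxrwxrwx" (observed) match Go's os.FileMode notation as printed by the mounttest image, where the leading d marks a directory, t the sticky bit and g the setgid bit; the test only requires that the expected text appear as a substring of the container output. A minimal standalone sketch (standard library only, not code from the e2e suite) that reproduces both renderings:

package main

import (
	"fmt"
	"os"
)

func main() {
	// What the test expects to see: a sticky, world-writable directory (1777).
	want := os.ModeDir | os.ModeSticky | 0777
	// What mounttest reported for /test-volume on this node: the same, plus the setgid bit.
	got := os.ModeDir | os.ModeSetgid | os.ModeSticky | 0777
	fmt.Println(want) // dtrwxrwxrwx
	fmt.Println(got)  // dgtrwxrwxrwx
}

The observed mode differs only by the setgid bit on the directory backing /test-volume, and that extra "g" is what breaks the substring match quoted in the error.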
Jan 11 20:39:04.744: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 20:39:04.744: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 20:39:04.835: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 93023 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1155":"ip-10-250-7-77.ec2.internal","csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1157":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1947":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-2263":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-5877":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-8194":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-1340":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1240":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1264":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1929":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal","csi-mock-csi-mock-volumes-1446":"csi-mock-csi-mock-volumes-1446","csi-mock-csi-mock-volumes-4004":"csi-mock-csi-mock-volumes-4004","csi-mock-csi-mock-volumes-4733":"csi-mock-csi-mock-volumes-4733","csi-mock-csi-mock-volumes-8663":"csi-mock-csi-mock-volumes-8663"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 
UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:38:59 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:38:59 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:38:59 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:38:59 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 
eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 
quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f 
eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[gcr.io/google-containers/startup-script@sha256:be96df6845a2af0eb61b17817ed085ce41048e4044c541da7580570b61beff3e gcr.io/google-containers/startup-script:v1],SizeBytes:12528443,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:39:04.835: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 20:39:04.925: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 20:39:05.117: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container node-exporter ready: true, restart count 0 Jan 11 20:39:05.117: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 20:39:05.117: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container metrics-server ready: true, restart count 0 Jan 11 20:39:05.117: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container coredns ready: true, restart count 0 Jan 11 20:39:05.117: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container 
calico-typha ready: true, restart count 0 Jan 11 20:39:05.117: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 20:39:05.117: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container autoscaler ready: true, restart count 0 Jan 11 20:39:05.117: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container autoscaler ready: true, restart count 5 Jan 11 20:39:05.117: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 20:39:05.117: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 20:39:05.117: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 20:39:05.117: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 20:39:05.117: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 20:39:05.117: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container coredns ready: true, restart count 0 Jan 11 20:39:05.117: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:39:05.117: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:39:05.117: INFO: Container calico-node ready: true, restart count 0 Jan 11 20:39:05.117: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:05.117: INFO: Container node-problem-detector ready: true, restart count 0 W0111 20:39:05.208691 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 20:39:05.438: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 20:39:05.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5863" for this suite. 
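For reference, the "Logging pods the kubelet thinks is on node ..." entries are the framework's post-failure dump of every pod scheduled to the node. A rough equivalent of that query using client-go is sketched below; this is illustrative only, not the framework's own helper, and the List signature shown is the current context-taking one, whereas the v1.16-era client used in this run did not take a context.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// listPodsOnNode returns all pods (in any namespace) scheduled to nodeName,
// mirroring the per-node pod dump in the log above.
func listPodsOnNode(ctx context.Context, c kubernetes.Interface, nodeName string) ([]corev1.Pod, error) {
	list, err := c.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return nil, err
	}
	return list.Items, nil
}

func main() {
	// Kubeconfig path and node name taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := listPodsOnNode(context.Background(), clientset, "ip-10-250-27-25.ec2.internal")
	if err != nil {
		panic(err)
	}
	for _, p := range pods {
		fmt.Println(p.Namespace + "/" + p.Name)
	}
}

Run against the same kubeconfig, this should list roughly the pods shown above for that node (kube-proxy-rq4kf, calico-node-m8r2d, node-exporter-l6q84, node-problem-detector-9z5sq).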
Jan 11 20:39:11.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:39:15.140: INFO: namespace hostpath-5863 deletion completed in 9.61068823s • Failure [15.243 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:39:03.782: Unexpected error: <*errors.errorString | 0xc0031ec200>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:39:15.142: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-8690 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 20:39:15.871: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8690" to be "success or failure" Jan 11 20:39:15.961: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.591885ms Jan 11 20:39:18.051: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179851778s STEP: Saw pod success Jan 11 20:39:18.051: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 20:39:18.141: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 20:39:18.328: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 20:39:18.419: INFO: Pod pod-host-path-test no longer exists Jan 11 20:39:18.419: FAIL: Unexpected error: <*errors.errorString | 0xc003a129b0>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-8690". STEP: Found 7 events. Jan 11 20:39:18.509: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-8690/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 20:39:18.509: INFO: At 2020-01-11 20:39:16 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:39:18.509: INFO: At 2020-01-11 20:39:16 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 20:39:18.509: INFO: At 2020-01-11 20:39:16 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 20:39:18.509: INFO: At 2020-01-11 20:39:16 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:39:18.509: INFO: At 2020-01-11 20:39:16 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 20:39:18.509: INFO: At 2020-01-11 20:39:16 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 20:39:18.602: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 20:39:18.602: INFO: Jan 11 20:39:18.784: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 20:39:18.874: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 93093 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-ephemeral-3918":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-1550":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-181":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5271":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5738":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-8445":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-6586":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-8205":"ip-10-250-27-25.ec2.internal","csi-hostpath-volumemode-2239":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-104":"csi-mock-csi-mock-volumes-104","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-1547":"csi-mock-csi-mock-volumes-1547","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-3620":"csi-mock-csi-mock-volumes-3620","csi-mock-csi-mock-volumes-4203":"csi-mock-csi-mock-volumes-4203","csi-mock-csi-mock-volumes-4249":"csi-mock-csi-mock-volumes-4249","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-7446":"csi-mock-csi-mock-volumes-7446","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:39:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 
gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 
redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d 
gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 busybox:1.27],SizeBytes:1129289,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8445^15e49ff2-34ae-11ea-98fd-0e6a2517c83d],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:39:18.874: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 20:39:18.964: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 20:39:19.061: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.061: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 20:39:19.061: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.061: INFO: Container node-exporter ready: true, restart count 0 Jan 11 20:39:19.061: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 20:39:19.061: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:39:19.061: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:39:19.061: INFO: Container calico-node ready: true, restart count 0 Jan 11 20:39:19.061: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.061: INFO: Container kube-proxy ready: true, restart count 0 W0111 20:39:19.152161 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
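The Node Info block above is the serialized v1.Node object for ip-10-250-27-25.ec2.internal that the e2e framework dumps on failure: its NodeCondition entries (Ready, MemoryPressure, DiskPressure, the node-problem-detector conditions, and so on), its addresses, and its cached images. The same condition list can be read directly from the API server; the sketch below is illustrative only and not part of the e2e suite — the kubeconfig path and node name are copied from this run, and it assumes current client-go method signatures (releases contemporaneous with this v1.16 run take no context argument).

```go
// Illustrative only: fetch the NodeCondition list that the e2e framework dumps above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name are taken from this log run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ip-10-250-27-25.ec2.internal", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Each entry corresponds to one NodeCondition{...} in the dump above,
	// e.g. Ready/True/KubeletReady or MemoryPressure/False/KubeletHasSufficientMemory.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-30s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}
```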
Jan 11 20:39:19.374: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 20:39:19.374: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 20:39:19.465: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 93056 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1155":"ip-10-250-7-77.ec2.internal","csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1157":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1947":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-2263":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-5877":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-8194":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-1340":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1240":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1264":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1929":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal","csi-mock-csi-mock-volumes-1446":"csi-mock-csi-mock-volumes-1446","csi-mock-csi-mock-volumes-4004":"csi-mock-csi-mock-volumes-4004","csi-mock-csi-mock-volumes-4733":"csi-mock-csi-mock-volumes-4733","csi-mock-csi-mock-volumes-8663":"csi-mock-csi-mock-volumes-8663"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 
UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:38:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:09 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:09 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:09 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:39:09 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 
eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 
quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f 
eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[gcr.io/google-containers/startup-script@sha256:be96df6845a2af0eb61b17817ed085ce41048e4044c541da7580570b61beff3e gcr.io/google-containers/startup-script:v1],SizeBytes:12528443,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:39:19.465: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 20:39:19.555: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 20:39:19.665: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 20:39:19.665: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container coredns ready: true, restart count 0 Jan 11 20:39:19.665: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:39:19.665: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:39:19.665: INFO: Container calico-node ready: true, restart count 0 Jan 11 20:39:19.665: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 20:39:19.665: 
INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 20:39:19.665: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container metrics-server ready: true, restart count 0 Jan 11 20:39:19.665: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container node-exporter ready: true, restart count 0 Jan 11 20:39:19.665: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container coredns ready: true, restart count 0 Jan 11 20:39:19.665: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container calico-typha ready: true, restart count 0 Jan 11 20:39:19.665: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 20:39:19.665: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container autoscaler ready: true, restart count 0 Jan 11 20:39:19.665: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container autoscaler ready: true, restart count 5 Jan 11 20:39:19.665: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 20:39:19.665: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 20:39:19.665: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 20:39:19.665: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:19.665: INFO: Container kubernetes-dashboard ready: true, restart count 0 W0111 20:39:19.756698 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 20:39:20.003: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 20:39:20.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8690" for this suite. 
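The failure summary that follows (and the repeat FAIL in namespace hostpath-9981 further down) comes down to a single character: the spec expects the mounttest container to report a mode beginning with "dtrwxrwx" for /test-volume, but the container reports "dgtrwxrwxrwx" on a tmpfs mount. Assuming mounttest renders the mode with Go's os.FileMode.String() — which the letter layout strongly suggests — the extra 'g' marks the setgid bit, so the tmpfs directory backing the volume carries setgid on top of the sticky bit the test expects. A minimal sketch of how such a mode string is produced (the path is a stand-in, not the pod's /test-volume):

```go
// Minimal sketch of how a mode string like "dgtrwxrwxrwx" arises.
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/tmp" // hypothetical directory to inspect; stand-in for /test-volume
	fi, err := os.Stat(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// os.FileMode.String() prefixes the rwx triplets with single letters:
	// 'd' directory, 'u' setuid, 'g' setgid, 't' sticky. A setgid, sticky,
	// world-writable directory therefore renders as "dgtrwxrwxrwx", while the
	// failing spec expects a prefix of "dtrwxrwxrwx" (sticky bit, no setgid).
	fmt.Printf("mode of file %q: %v\n", path, fi.Mode())
}
```

Read this way, the mismatch points at the setgid bit on the volume directory rather than at the substring check itself.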
Jan 11 20:39:26.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:39:29.670: INFO: namespace hostpath-8690 deletion completed in 9.574884287s • Failure [14.527 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:39:18.419: Unexpected error: <*errors.errorString | 0xc003a129b0>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:39:29.673: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-9981 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 20:39:30.944: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9981" to be "success or failure" Jan 11 20:39:31.034: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.75114ms Jan 11 20:39:33.124: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179616793s STEP: Saw pod success Jan 11 20:39:33.124: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 20:39:33.214: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 20:39:33.402: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 20:39:33.492: INFO: Pod pod-host-path-test no longer exists Jan 11 20:39:33.492: FAIL: Unexpected error: <*errors.errorString | 0xc00286c870>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-9981". STEP: Found 8 events. Jan 11 20:39:33.583: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-9981/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 20:39:33.583: INFO: At 2020-01-11 20:39:31 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:39:33.583: INFO: At 2020-01-11 20:39:31 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 20:39:33.583: INFO: At 2020-01-11 20:39:31 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 20:39:33.583: INFO: At 2020-01-11 20:39:31 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:39:33.583: INFO: At 2020-01-11 20:39:31 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 20:39:33.583: INFO: At 2020-01-11 20:39:31 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 20:39:33.583: INFO: At 2020-01-11 20:39:33 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Killing: Stopping container test-container-1 Jan 11 20:39:33.672: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 20:39:33.672: INFO: Jan 11 20:39:33.855: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 20:39:33.944: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 93119 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node 
worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-ephemeral-3918":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-1550":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-181":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5271":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5738":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-8445":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-6586":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-8205":"ip-10-250-27-25.ec2.internal","csi-hostpath-volumemode-2239":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-104":"csi-mock-csi-mock-volumes-104","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-1547":"csi-mock-csi-mock-volumes-1547","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-3620":"csi-mock-csi-mock-volumes-3620","csi-mock-csi-mock-volumes-4203":"csi-mock-csi-mock-volumes-4203","csi-mock-csi-mock-volumes-4249":"csi-mock-csi-mock-volumes-4249","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-7446":"csi-mock-csi-mock-volumes-7446","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning 
properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:38:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:39:28 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 
gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 
redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d 
gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 busybox:1.27],SizeBytes:1129289,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8445^15e49ff2-34ae-11ea-98fd-0e6a2517c83d],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:39:33.945: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 20:39:34.035: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 20:39:34.131: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 20:39:34.131: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:39:34.131: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:39:34.131: INFO: Container calico-node ready: true, restart count 0 Jan 11 20:39:34.131: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.131: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 20:39:34.131: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.131: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 20:39:34.131: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.131: INFO: Container node-exporter ready: true, restart count 0 W0111 20:39:34.222740 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
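The per-node pod listings above ("Logging pods the kubelet thinks is on node ...") enumerate the pods scheduled to each worker. An equivalent query can be made against the API server with a field selector on spec.nodeName; as with the earlier sketch, this is illustrative only, reuses the kubeconfig path and node name from this run, and assumes current client-go signatures.

```go
// Illustrative only: list the pods scheduled to one of the nodes named in this log,
// mirroring the per-node pod listing printed above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// spec.nodeName is a supported pod field selector; "" lists across all namespaces.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=ip-10-250-27-25.ec2.internal",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-15s %-60s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```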
Jan 11 20:39:34.435: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 20:39:34.435: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 20:39:34.525: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 93122 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1155":"ip-10-250-7-77.ec2.internal","csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1157":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1947":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-2263":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-5877":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-8194":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-1340":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1240":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1264":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1929":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal","csi-mock-csi-mock-volumes-1446":"csi-mock-csi-mock-volumes-1446","csi-mock-csi-mock-volumes-4004":"csi-mock-csi-mock-volumes-4004","csi-mock-csi-mock-volumes-4733":"csi-mock-csi-mock-volumes-4733","csi-mock-csi-mock-volumes-8663":"csi-mock-csi-mock-volumes-8663"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 
UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 
eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 
quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f 
eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[gcr.io/google-containers/startup-script@sha256:be96df6845a2af0eb61b17817ed085ce41048e4044c541da7580570b61beff3e gcr.io/google-containers/startup-script:v1],SizeBytes:12528443,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:39:34.526: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 20:39:34.615: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 20:39:34.720: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.720: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jan 11 20:39:34.721: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 20:39:34.721: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container coredns ready: true, restart count 0 Jan 11 20:39:34.721: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:39:34.721: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:39:34.721: INFO: Container calico-node ready: true, restart count 0 Jan 11 
20:39:34.721: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 20:39:34.721: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container node-exporter ready: true, restart count 0 Jan 11 20:39:34.721: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 20:39:34.721: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container metrics-server ready: true, restart count 0 Jan 11 20:39:34.721: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container coredns ready: true, restart count 0 Jan 11 20:39:34.721: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container calico-typha ready: true, restart count 0 Jan 11 20:39:34.721: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 20:39:34.721: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container autoscaler ready: true, restart count 0 Jan 11 20:39:34.721: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container autoscaler ready: true, restart count 5 Jan 11 20:39:34.721: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 20:39:34.721: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 20:39:34.721: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:34.721: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 W0111 20:39:34.811947 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 20:39:35.060: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 20:39:35.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9981" for this suite. 
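The mode strings in the failure summarized below are Go os.FileMode renderings: the check wants the container output to contain the substring mode of file "/test-volume": dtrwxrwx (directory, sticky bit, world-rwx permissions), while the pod reports dgtrwxrwxrwx, that is, the same mode with the setgid bit ('g') additionally set on the tmpfs-reported /test-volume mount. A minimal standalone Go sketch, not part of the e2e suite, that reproduces both strings under that reading:

package main

import (
	"fmt"
	"os"
)

func main() {
	// What the e2e substring check appears to expect: directory + sticky bit + 0777.
	expected := os.ModeDir | os.ModeSticky | 0777
	// What mounttest reports on this cluster: the same mode plus the setgid bit.
	actual := os.ModeDir | os.ModeSetgid | os.ModeSticky | 0777

	fmt.Println(expected) // prints: dtrwxrwxrwx
	fmt.Println(actual)   // prints: dgtrwxrwxrwx
}

The extra 'g' flag is the whole difference, which is why the "to contain substring" assertion quoted in the summary that follows fails even though the volume is world-writable and sticky as expected.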
Jan 11 20:39:41.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:39:44.729: INFO: namespace hostpath-9981 deletion completed in 9.577898161s • Failure [15.056 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:39:33.492: Unexpected error: <*errors.errorString | 0xc00286c870>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:39:44.731: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-6173 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 20:39:45.460: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6173" to be "success or failure" Jan 11 20:39:45.549: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.387025ms Jan 11 20:39:47.639: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.179052171s STEP: Saw pod success Jan 11 20:39:47.639: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 20:39:47.729: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 20:39:47.916: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 20:39:48.005: INFO: Pod pod-host-path-test no longer exists Jan 11 20:39:48.005: FAIL: Unexpected error: <*errors.errorString | 0xc003ec15c0>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-6173". STEP: Found 7 events. Jan 11 20:39:48.096: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-6173/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 20:39:48.096: INFO: At 2020-01-11 20:39:46 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:39:48.096: INFO: At 2020-01-11 20:39:46 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 20:39:48.096: INFO: At 2020-01-11 20:39:46 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 20:39:48.096: INFO: At 2020-01-11 20:39:46 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:39:48.096: INFO: At 2020-01-11 20:39:46 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 20:39:48.096: INFO: At 2020-01-11 20:39:46 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 20:39:48.186: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 20:39:48.186: INFO: Jan 11 20:39:48.368: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 20:39:48.457: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 93192 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-ephemeral-3918":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-1550":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-181":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5271":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5738":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-8445":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-6586":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-8205":"ip-10-250-27-25.ec2.internal","csi-hostpath-volumemode-2239":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-104":"csi-mock-csi-mock-volumes-104","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-1547":"csi-mock-csi-mock-volumes-1547","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-3620":"csi-mock-csi-mock-volumes-3620","csi-mock-csi-mock-volumes-4203":"csi-mock-csi-mock-volumes-4203","csi-mock-csi-mock-volumes-4249":"csi-mock-csi-mock-volumes-4249","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-7446":"csi-mock-csi-mock-volumes-7446","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:38 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:38 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:38 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:39:38 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de 
quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a 
busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 busybox:1.27],SizeBytes:1129289,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8445^15e49ff2-34ae-11ea-98fd-0e6a2517c83d],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:39:48.457: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 20:39:48.547: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 20:39:48.644: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 20:39:48.644: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:39:48.644: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:39:48.644: INFO: Container calico-node ready: true, restart count 0 Jan 11 20:39:48.644: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:48.644: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 20:39:48.644: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:48.644: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 20:39:48.644: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:48.644: INFO: Container node-exporter ready: true, restart count 0 W0111 20:39:48.735250 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 11 20:39:48.948: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 20:39:48.948: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 20:39:49.038: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 93155 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1155":"ip-10-250-7-77.ec2.internal","csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1157":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1947":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-2263":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-5877":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-8194":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-1340":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1240":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1264":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1929":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal","csi-mock-csi-mock-volumes-1446":"csi-mock-csi-mock-volumes-1446","csi-mock-csi-mock-volumes-4004":"csi-mock-csi-mock-volumes-4004","csi-mock-csi-mock-volumes-4733":"csi-mock-csi-mock-volumes-4733","csi-mock-csi-mock-volumes-8663":"csi-mock-csi-mock-volumes-8663"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 
UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:39 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:39 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:39 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:39:39 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 
eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 
quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f 
eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[gcr.io/google-containers/startup-script@sha256:be96df6845a2af0eb61b17817ed085ce41048e4044c541da7580570b61beff3e gcr.io/google-containers/startup-script:v1],SizeBytes:12528443,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:39:49.038: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 20:39:49.128: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 20:39:49.232: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container blackbox-exporter ready: true, restart count 0 Jan 11 20:39:49.232: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container coredns ready: true, restart count 0 Jan 11 20:39:49.232: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:39:49.232: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:39:49.232: INFO: Container calico-node ready: true, restart count 0 Jan 11 20:39:49.232: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 20:39:49.232: 
INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container node-exporter ready: true, restart count 0 Jan 11 20:39:49.232: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 20:39:49.232: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container metrics-server ready: true, restart count 0 Jan 11 20:39:49.232: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container coredns ready: true, restart count 0 Jan 11 20:39:49.232: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container calico-typha ready: true, restart count 0 Jan 11 20:39:49.232: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container kube-proxy ready: true, restart count 0 Jan 11 20:39:49.232: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container autoscaler ready: true, restart count 0 Jan 11 20:39:49.232: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container autoscaler ready: true, restart count 5 Jan 11 20:39:49.232: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container nginx-ingress-controller ready: true, restart count 0 Jan 11 20:39:49.232: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container vpn-shoot ready: true, restart count 0 Jan 11 20:39:49.232: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0 Jan 11 20:39:49.232: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:39:49.232: INFO: Container kubernetes-dashboard ready: true, restart count 0 W0111 20:39:49.323284 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 11 20:39:49.535: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal Jan 11 20:39:49.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6173" for this suite. 
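This spec fails the same way in hostpath-9981 and hostpath-6173, with the same extra setgid bit each time, which points at the host directory backing the hostPath volume rather than at a flaky pod. A small, hypothetical Go helper (the path is a placeholder; it assumes you can identify the backing directory on the worker node) that inspects the same mode bits mounttest reports:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Placeholder path: substitute the host directory the test's hostPath volume points at.
	const dir = "/tmp"

	info, err := os.Stat(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	mode := info.Mode()
	fmt.Printf("mode of file %q: %v\n", dir, mode)           // mirrors the format quoted in the failure output
	fmt.Println("setgid bit set:", mode&os.ModeSetgid != 0)  // the 'g' in dgtrwxrwxrwx
	fmt.Println("sticky bit set:", mode&os.ModeSticky != 0)  // the 't'
}

If the setgid bit shows up here as well, the mismatch is a property of how the node provisions that directory, not an artifact of the test run.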
Jan 11 20:39:55.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 20:39:59.205: INFO: namespace hostpath-6173 deletion completed in 9.578983951s • Failure [14.474 seconds] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Jan 11 20:39:48.005: Unexpected error: <*errors.errorString | 0xc003ec15c0>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667 ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 11 20:39:59.207: INFO: >>> kubeConfig: /tmp/tm/kubeconfig/shoot.config STEP: Building a namespace api object, basename hostpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-8496 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Creating a pod to test hostPath mode Jan 11 20:39:59.942: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8496" to be "success or failure" Jan 11 20:40:00.031: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 89.36599ms Jan 11 20:40:02.121: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.17942453s STEP: Saw pod success Jan 11 20:40:02.121: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 11 20:40:02.211: INFO: Trying to get logs from node ip-10-250-27-25.ec2.internal pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 11 20:40:02.400: INFO: Waiting for pod pod-host-path-test to disappear Jan 11 20:40:02.490: INFO: Pod pod-host-path-test no longer exists Jan 11 20:40:02.490: FAIL: Unexpected error: <*errors.errorString | 0xc003c7aed0>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred [AfterEach] [sig-storage] HostPath /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "hostpath-8496". STEP: Found 7 events. Jan 11 20:40:02.581: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-8496/pod-host-path-test to ip-10-250-27-25.ec2.internal Jan 11 20:40:02.581: INFO: At 2020-01-11 20:40:00 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:40:02.581: INFO: At 2020-01-11 20:40:00 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-1 Jan 11 20:40:02.581: INFO: At 2020-01-11 20:40:00 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-1 Jan 11 20:40:02.581: INFO: At 2020-01-11 20:40:00 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Jan 11 20:40:02.581: INFO: At 2020-01-11 20:40:00 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Created: Created container test-container-2 Jan 11 20:40:02.581: INFO: At 2020-01-11 20:40:00 +0000 UTC - event for pod-host-path-test: {kubelet ip-10-250-27-25.ec2.internal} Started: Started container test-container-2 Jan 11 20:40:02.670: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 20:40:02.670: INFO: Jan 11 20:40:02.853: INFO: Logging node info for node ip-10-250-27-25.ec2.internal Jan 11 20:40:02.943: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-27-25.ec2.internal /api/v1/nodes/ip-10-250-27-25.ec2.internal af7f64f3-a5de-4df3-9e07-f69e835ab580 93219 0 2020-01-11 15:56:03 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-27-25.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1641":"ip-10-250-27-25.ec2.internal","csi-hostpath-ephemeral-3918":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-1550":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-181":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5271":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-5738":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-6240":"ip-10-250-27-25.ec2.internal","csi-hostpath-provisioning-8445":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-6586":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-7991":"ip-10-250-27-25.ec2.internal","csi-hostpath-volume-expand-8205":"ip-10-250-27-25.ec2.internal","csi-hostpath-volumemode-2239":"ip-10-250-27-25.ec2.internal","csi-mock-csi-mock-volumes-104":"csi-mock-csi-mock-volumes-104","csi-mock-csi-mock-volumes-1062":"csi-mock-csi-mock-volumes-1062","csi-mock-csi-mock-volumes-1547":"csi-mock-csi-mock-volumes-1547","csi-mock-csi-mock-volumes-2239":"csi-mock-csi-mock-volumes-2239","csi-mock-csi-mock-volumes-3620":"csi-mock-csi-mock-volumes-3620","csi-mock-csi-mock-volumes-4203":"csi-mock-csi-mock-volumes-4203","csi-mock-csi-mock-volumes-4249":"csi-mock-csi-mock-volumes-4249","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-7446":"csi-mock-csi-mock-volumes-7446","csi-mock-csi-mock-volumes-795":"csi-mock-csi-mock-volumes-795","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.27.25/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.1.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.1.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0a8c404292a3c92e9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:39:48 +0000 UTC,LastTransitionTime:2020-01-11 15:56:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:18 +0000 UTC,LastTransitionTime:2020-01-11 15:56:18 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:58 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:58 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:58 +0000 UTC,LastTransitionTime:2020-01-11 15:56:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:39:58 +0000 UTC,LastTransitionTime:2020-01-11 15:56:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.27.25,},NodeAddress{Type:Hostname,Address:ip-10-250-27-25.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-27-25.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec280dba3c1837e27848a3dec8c080a9,SystemUUID:ec280dba-3c18-37e2-7848-a3dec8c080a9,BootID:89e42b89-b944-47ea-8bf6-5f2fe6d80c97,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de 
quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a 
busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 busybox:1.27],SizeBytes:1129289,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-8445^15e49ff2-34ae-11ea-98fd-0e6a2517c83d],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:40:02.944: INFO: Logging kubelet events for node ip-10-250-27-25.ec2.internal Jan 11 20:40:03.033: INFO: Logging pods the kubelet thinks is on node ip-10-250-27-25.ec2.internal Jan 11 20:40:03.130: INFO: node-problem-detector-9z5sq started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:40:03.130: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 20:40:03.130: INFO: node-exporter-l6q84 started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:40:03.130: INFO: Container node-exporter ready: true, restart count 0 Jan 11 20:40:03.130: INFO: calico-node-m8r2d started at 2020-01-11 15:56:04 +0000 UTC (2+1 container statuses recorded) Jan 11 20:40:03.130: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:40:03.130: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:40:03.130: INFO: Container calico-node ready: true, restart count 0 Jan 11 20:40:03.130: INFO: kube-proxy-rq4kf started at 2020-01-11 15:56:04 +0000 UTC (0+1 container statuses recorded) Jan 11 20:40:03.130: INFO: Container kube-proxy ready: true, restart count 0 W0111 20:40:03.220640 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
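For reference, the per-node pod dump above ("Logging pods the kubelet thinks is on node ...") can be approximated outside the test framework with a plain client-go query. The sketch below is illustrative only, not the e2e framework's own code; it assumes a recent client-go and reuses the kubeconfig path and node name that appear in this log.

```go
// Illustrative sketch only (not the e2e framework's own code): list the pods
// scheduled on one node with client-go, similar in spirit to the
// "Logging pods the kubelet thinks is on node ..." dump above.
// Assumes a recent client-go; kubeconfig path and node name are taken from this log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/tm/kubeconfig/shoot.config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Restrict the listing to pods bound to this node, across all namespaces.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=ip-10-250-27-25.ec2.internal",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```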
Jan 11 20:40:03.443: INFO: Latency metrics for node ip-10-250-27-25.ec2.internal Jan 11 20:40:03.443: INFO: Logging node info for node ip-10-250-7-77.ec2.internal Jan 11 20:40:03.533: INFO: Node Info: &Node{ObjectMeta:{ip-10-250-7-77.ec2.internal /api/v1/nodes/ip-10-250-7-77.ec2.internal 3773c02c-1fbb-4cbe-a527-8933de0a8978 93226 0 2020-01-11 15:55:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-10-250-7-77.ec2.internal kubernetes.io/os:linux node.kubernetes.io/role:node worker.garden.sapcloud.io/group:worker-1 worker.gardener.cloud/pool:worker-1] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1155":"ip-10-250-7-77.ec2.internal","csi-hostpath-ephemeral-9708":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1157":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-1947":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-2263":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-3332":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-4625":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-5877":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-638":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-8194":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-888":"ip-10-250-7-77.ec2.internal","csi-hostpath-provisioning-9667":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-1340":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-2441":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1240":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1264":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-1929":"ip-10-250-7-77.ec2.internal","csi-hostpath-volume-expand-8983":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumeio-3164":"ip-10-250-7-77.ec2.internal","csi-hostpath-volumemode-2792":"ip-10-250-7-77.ec2.internal","csi-mock-csi-mock-volumes-1446":"csi-mock-csi-mock-volumes-1446","csi-mock-csi-mock-volumes-4004":"csi-mock-csi-mock-volumes-4004","csi-mock-csi-mock-volumes-4733":"csi-mock-csi-mock-volumes-4733","csi-mock-csi-mock-volumes-8663":"csi-mock-csi-mock-volumes-8663"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.250.7.77/19 projectcalico.org/IPv4IPIPTunnelAddr:100.64.0.1 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:100.64.0.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-east-1c/i-0551dba45aad7abfa,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{28730179584 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8054267904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {} 25 DecimalSI},cpu: {{1920 -3} {} 1920m DecimalSI},ephemeral-storage: {{27293670584 0} {} 27293670584 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{6577812679 0} {} 6577812679 DecimalSI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 
UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2020-01-11 20:39:29 +0000 UTC,LastTransitionTime:2020-01-11 15:56:28 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-11 15:56:16 +0000 UTC,LastTransitionTime:2020-01-11 15:56:16 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:59 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:59 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-11 20:39:59 +0000 UTC,LastTransitionTime:2020-01-11 15:55:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-11 20:39:59 +0000 UTC,LastTransitionTime:2020-01-11 15:56:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.250.7.77,},NodeAddress{Type:Hostname,Address:ip-10-250-7-77.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-10-250-7-77.ec2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec223a25fa514279256b8b36a522519a,SystemUUID:ec223a25-fa51-4279-256b-8b36a522519a,BootID:652118c2-7bd4-4ebf-b248-be5c7a65a3aa,KernelVersion:4.19.86-coreos,OSImage:Container Linux by CoreOS 2303.3.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.16.4,KubeProxyVersion:v1.16.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube@sha256:1d8d7ef8bae1a6c8564d97a7d83a3661ea4b43127b0a6d901f3cd4b1126ee102 
eu.gcr.io/gardener-project/k8s.gcr.io/hyperkube:v1.16.4],SizeBytes:601224435,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:4980f4ee069f767334c6fb6a7d75fbdc87236542fd749e22af5d80f2217959f4 eu.gcr.io/gardener-project/3rd/quay_io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0],SizeBytes:551728251,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/node@sha256:d017c694acb9df5ad8e957a14b4c5a613c3a42771a34904f40c279dd2f61461e eu.gcr.io/gardener-project/3rd/quay_io/calico/node:v3.8.2-mod-1],SizeBytes:185406766,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/cni@sha256:fe6cb51f30add991b76eadfa26ec10fa8796383a1ddf807be5d4228725207b9d eu.gcr.io/gardener-project/3rd/quay_io/calico/cni:v3.8.2-mod-1],SizeBytes:153790666,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64@sha256:2f4fefeb964b1b7b09a3d2607a963506a47a6628d5268825e8b45b8a4c5ace93 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector@sha256:00aceed3b4ef20d0d578aff3f904212daa2f0aaf18350d3e213cf4ca0703ccf0 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/node-problem-detector:v0.7.1-mod-1],SizeBytes:96768084,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/ingress-default-backend@sha256:17b68928ead12cc9df88ee60d9c638d3fd642a7e122c2bb7586da1a21eb2de45 eu.gcr.io/gardener-project/gardener/ingress-default-backend:0.7.0],SizeBytes:69546830,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/typha@sha256:52298609a808087c774e95ded163e91828106bed6cf3117c51aba3f4d3b7943c eu.gcr.io/gardener-project/3rd/quay_io/calico/typha:v3.8.2],SizeBytes:49771411,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers@sha256:242c3e83e41c5ad4a246cba351360d92fb90e1c140cd24e42140e640a0ed3290 eu.gcr.io/gardener-project/3rd/quay_io/calico/kube-controllers:v3.8.2],SizeBytes:46809393,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 
quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/coredns/coredns@sha256:b1f81b52011f91ebcf512111caa6d6d0896a65251188210cd3145d5b23204531 eu.gcr.io/gardener-project/3rd/coredns/coredns:1.6.3],SizeBytes:44255363,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64@sha256:5843435c534f0368f8980b1635976976b087f0b2dcde01226d9216da2276d24d eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cpvpa-amd64:v0.8.1],SizeBytes:40616150,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64@sha256:2cdb0f90aac21d3f648a945ef929bfb81159d7453499b2dce6164c78a348ac42 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/cluster-proportional-autoscaler-amd64:1.7.1],SizeBytes:40067731,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64@sha256:c3c8fb8757c3236343da9239a266c6ee9e16ac3c98b6f5d7a7cbb5f83058d4f1 eu.gcr.io/gardener-project/3rd/k8s_gcr_io/metrics-server-amd64:v0.3.3],SizeBytes:39933796,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter@sha256:fea82a3a79228af2840c72ff394d7446ace51ae035f5b26cd9767b250baf13b7 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter@sha256:c09cbb653e4708a0c14b205822f56026669c6a4a7d0502609c65da2dd741e669 eu.gcr.io/gardener-project/3rd/quay_io/prometheus/blackbox-exporter:v0.14.0],SizeBytes:17584252,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[eu.gcr.io/gardener-project/gardener/vpn-shoot@sha256:6054c6ae62c2bca2f07c913390c3babf14bb8dfa80c707ee8d4fd03c06dbf93f 
eu.gcr.io/gardener-project/gardener/vpn-shoot:0.16.0],SizeBytes:13732716,},ContainerImage{Names:[gcr.io/google-containers/startup-script@sha256:be96df6845a2af0eb61b17817ed085ce41048e4044c541da7580570b61beff3e gcr.io/google-containers/startup-script:v1],SizeBytes:12528443,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol@sha256:fd246ba4eb5b96a7b97bfd8d99eb823ba179e6eeb9852cb3e3f7bf2f44a800a8 eu.gcr.io/gardener-project/3rd/quay_io/calico/pod2daemon-flexvol:v3.8.2],SizeBytes:9371181,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64@sha256:ffa28932647c3b6cab6a618aafe98d33dd185d96158ecf9b1addf042d6244025 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea eu.gcr.io/gardener-project/3rd/gcr_io/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 11 20:40:03.533: INFO: Logging kubelet events for node ip-10-250-7-77.ec2.internal Jan 11 20:40:03.623: INFO: Logging pods the kubelet thinks is on node ip-10-250-7-77.ec2.internal Jan 11 20:40:03.725: INFO: calico-node-dl8nk started at 2020-01-11 15:55:58 +0000 UTC (2+1 container statuses recorded) Jan 11 20:40:03.725: INFO: Init container install-cni ready: true, restart count 0 Jan 11 20:40:03.725: INFO: Init container flexvol-driver ready: true, restart count 0 Jan 11 20:40:03.725: INFO: Container calico-node ready: true, restart count 0 Jan 11 20:40:03.725: INFO: node-problem-detector-jx2p4 started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:40:03.725: INFO: Container node-problem-detector ready: true, restart count 0 Jan 11 20:40:03.725: INFO: node-exporter-gp57h started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded) Jan 11 20:40:03.725: INFO: Container node-exporter ready: true, restart count 0 Jan 11 20:40:03.725: INFO: calico-kube-controllers-79bcd784b6-c46r9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded) Jan 11 20:40:03.725: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 11 
20:40:03.725: INFO: metrics-server-7c797fd994-4x7v9 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.725: INFO: Container metrics-server ready: true, restart count 0
Jan 11 20:40:03.725: INFO: coredns-59c969ffb8-57m7v started at 2020-01-11 15:56:11 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.725: INFO: Container coredns ready: true, restart count 0
Jan 11 20:40:03.725: INFO: calico-typha-deploy-9f6b455c4-vdrzx started at 2020-01-11 16:21:07 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.725: INFO: Container calico-typha ready: true, restart count 0
Jan 11 20:40:03.725: INFO: kube-proxy-nn5px started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.725: INFO: Container kube-proxy ready: true, restart count 0
Jan 11 20:40:03.725: INFO: calico-typha-horizontal-autoscaler-85c99966bb-6j6rp started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.725: INFO: Container autoscaler ready: true, restart count 0
Jan 11 20:40:03.725: INFO: calico-typha-vertical-autoscaler-5769b74b58-r8t6r started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.725: INFO: Container autoscaler ready: true, restart count 5
Jan 11 20:40:03.725: INFO: addons-nginx-ingress-controller-7c75bb76db-cd9r9 started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.725: INFO: Container nginx-ingress-controller ready: true, restart count 0
Jan 11 20:40:03.726: INFO: vpn-shoot-5d76665b65-6rkww started at 2020-01-11 15:56:13 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.726: INFO: Container vpn-shoot ready: true, restart count 0
Jan 11 20:40:03.726: INFO: addons-nginx-ingress-nginx-ingress-k8s-backend-95f65778d-4fk7d started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.726: INFO: Container nginx-ingress-nginx-ingress-k8s-backend ready: true, restart count 0
Jan 11 20:40:03.726: INFO: addons-kubernetes-dashboard-78954cc66b-69k8m started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.726: INFO: Container kubernetes-dashboard ready: true, restart count 0
Jan 11 20:40:03.726: INFO: blackbox-exporter-54bb5f55cc-452fk started at 2020-01-11 15:55:58 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.726: INFO: Container blackbox-exporter ready: true, restart count 0
Jan 11 20:40:03.726: INFO: coredns-59c969ffb8-fqq79 started at 2020-01-11 15:56:08 +0000 UTC (0+1 container statuses recorded)
Jan 11 20:40:03.726: INFO: Container coredns ready: true, restart count 0
W0111 20:40:03.816899 21583 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 11 20:40:04.030: INFO: Latency metrics for node ip-10-250-7-77.ec2.internal
Jan 11 20:40:04.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8496" for this suite.
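For reference, the "Creating a pod to test hostPath mode" step above builds a pod roughly like the sketch below: a hostPath volume mounted at /test-volume, examined by the mounttest image whose output the failing assertion matches against. The host path, container arguments, and other field values here are assumptions for illustration, not the verbatim e2e source.

```go
// Sketch of the kind of pod the "Creating a pod to test hostPath mode" step
// builds. The host path ("/tmp"), the mounttest arguments, and other field
// values are assumptions for illustration, not the verbatim e2e source.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Assumed host directory backing the volume under test.
					HostPath: &v1.HostPathVolumeSource{Path: "/tmp"},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container-1",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
				// mounttest prints lines such as
				//   mount type of "/test-volume": tmpfs
				//   mode of file "/test-volume": dgtrwxrwxrwx
				// which the test then matches against the expected substring.
				Args: []string{"--fs_type=/test-volume", "--file_mode=/test-volume"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```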
Jan 11 20:40:10.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 20:40:13.705: INFO: namespace hostpath-8496 deletion completed in 9.584542911s

• Failure [14.499 seconds]
[sig-storage] HostPath
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] [It]
  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698

  Jan 11 20:40:02.490: Unexpected error:
      <*errors.errorString | 0xc003c7aed0>: {
          s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx",
      }
      expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected
          : mount type of "/test-volume": tmpfs
          mode of file "/test-volume": dgtrwxrwxrwx
      to contain substring
          : mode of file "/test-volume": dtrwxrwx
  occurred

  /workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667
------------------------------
Jan 11 20:40:13.708: INFO: Running AfterSuite actions on all nodes
Jan 11 20:39:00.295: INFO: Running AfterSuite actions on all nodes
Jan 11 20:40:13.743: INFO: Running AfterSuite actions on node 1
Jan 11 20:40:13.743: INFO: Skipping dumping logs from cluster

Summarizing 5 Failures:

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.16.4-beta.0.50+d9a25890317058/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1667

Ran 1 of 4731 Specs in 80.458 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 4730 Skipped

Ginkgo ran 1 suite in 1m30.024017433s
Test Suite Failed
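The failure repeated in the summary above comes down to a mode-string mismatch: the observed directory mode "dgtrwxrwxrwx" carries the setgid bit ('g'), so it never contains the expected substring "dtrwxrwx". Assuming the mounttest image renders file modes with Go's os.FileMode (an assumption, not confirmed by this log), the two strings can be reproduced with the minimal sketch below; the likely culprit is a setgid bit on the node directory backing the hostPath volume.

```go
// Minimal sketch of the mode-string mismatch behind the failures above,
// assuming the mounttest image renders file modes via Go's os.FileMode
// (an assumption; the mode values below are illustrative).
package main

import (
	"fmt"
	"os"
)

func main() {
	// A world-writable directory with both the setgid and sticky bits set
	// renders as "dgtrwxrwxrwx" -- the extra 'g' breaks the substring match.
	observed := os.ModeDir | os.ModeSetgid | os.ModeSticky | 0777
	// Without the setgid bit the string is "dtrwxrwxrwx", which does
	// contain the expected substring "dtrwxrwx".
	expected := os.ModeDir | os.ModeSticky | 0777
	fmt.Println(observed) // dgtrwxrwxrwx
	fmt.Println(expected) // dtrwxrwxrwx
}
```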